Friday, July 12, 2024
US FCC Proposes New Rules Requiring Disclosure of AI-Generated Content in Political Ads

US Federal Communications Commission (FCC) Chairperson Jessica Rosenworcel released a circular on May 22, 2024, proposing new rules on the use of AI in political advertising. If adopted, the rules would require TV and radio broadcasters to disclose whether AI-generated content was used in an advertisement. The FCC's 'Notice of Proposed Rulemaking' has been circulated among the Commissioners and will be adopted if it receives a majority of votes, after which the FCC would take public comments on the matter.

    If adopted, this proposal aims to increase transparency by:

    • Seeking comment on whether to require an on-air disclosure and written disclosure in broadcasters’ political files when there is AI-generated content in political ads,

    • Applying the disclosure rules to both candidate and issue advertisements,

    • Requesting comment on a specific definition of AI-generated content, and

    • Applying the disclosure requirements to broadcasters and entities that engage in origination programming, including cable operators, satellite TV and radio providers and section 325(c) permittees.

    “As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used,” said Chairperson Rosenworcel. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”

    Notably, the circular does not propose prohibiting AI-generated content but mandates its disclosure.

    The proposal has been issued due to concerns surrounding AI-generated misinformation, especially ‘deepfakes,’ which are AI-generated synthetic media that mimic the appearance, voice or likeness of real people and depict them in situations or actions that did not occur. 

    Deepfakes, especially as a part of political communication, have been a major concern across the world. 

    Sexually explicit deepfakes of the artist Taylor Swift were widely circulated on social media platforms earlier this year, leading to a condemnation from the White House. US lawmakers went on to propose a bill titled Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, which targets “intimate digital forgeries” depicting an identifiable person without their consent, allowing such persons to collect financial damages from anyone who “knowingly produced or possessed” the forged content with the intent to share it.

Detecting deepfakes is challenging, as numerous experts have noted. In January this year, Medianama held an event on deepfakes and democracy, bringing together a group of experts to discuss their wide-ranging implications; many of the experts stressed that current methods of detecting deepfakes are insufficient.

Similar concerns were raised in the Lok Sabha the following month, as MPs demanded to know the government’s strategy for tackling deepfakes. The Ministry of Electronics and Information Technology (MeitY) did not announce any new strategy or policy to address these concerns, and the National Crime Records Bureau (NCRB) does not record instances of deepfakes separately. However, at the AI Alliance Delhi NCR conference held on May 17, MeitY Secretary S Krishnan said that the IT Ministry is open to a separate act regulating deepfakes.


    The post US FCC Proposes New Rules Requiring Disclosure of AI-Generated Content in Political Ads appeared first on MEDIANAMA.
