Last Friday, 20 technology platforms agreed to better label and curtail AI-generated disinformation that’s being spread online to deceive voters during a busy election year. They pledged to provide “swift and proportionate responses” to deceptive AI content about the election, including sharing more information about “ways citizens can protect themselves from being manipulated or deceived.”
This voluntary commitment, signed by Google, Microsoft, Meta, OpenAI, TikTok and X, among others, does not outright ban the use of so-called political “deepfakes” — false video or audio depictions — of candidates, leaders and other influential public figures. Nor do the platforms agree to restore the sizable teams they had in place to safeguard election integrity in 2020. Even at those previous levels, these teams struggled to stop the spread of disinformation about the election result, which helped fuel violence at the US Capitol Building as Congress prepared to certify President Joe Biden’s victory.
In response, the platforms have pledged to set high expectations in 2024 for how they “will manage the risks arising from deceptive AI election content,” according to the joint accord. And their actions will be guided by several principles, including prevention, detection, evaluation and public awareness.
If the platforms want to prevent a repeat of 2020, they need to be doing much more, now that technology has made it possible to dupe voters with these deceptively believable facsimiles... [read the full commentary at CNN]