EU outlines possible safety guidelines for generative AI in elections


The European Commission has issued guidelines for providers of very large online platforms (VLOPs) and very large online search engines (VLOSEs) on mitigating systemic risks to electoral processes.

The guidelines include specific guidance and proposals to mitigate risks that may arise from generative AI. They are based on Regulation (EU) 2022/2065 (Digital Services Act or DSA).

Examples of generative AI risks cited by the European Commission include deceiving voters or manipulating electoral processes by creating false and misleading synthetic content about political actors, and misrepresenting events, polls, contexts, or narratives.

Generative AI systems can also produce false, incoherent, or fabricated information, commonly known as “hallucinations” in AI parlance, which can misrepresent reality and potentially mislead voters.



AI content needs (better) labeling

To mitigate the risks associated with generative AI, providers should ensure that content generated by GenAI systems is identifiable to users, for example through watermarking.

Providers should also give users standard interfaces and easy-to-use tools to label AI-generated content. These labels should be easily recognizable to users, including in advertising.

Meta announced such a feature for its social platforms just a few days ago. Most major AI companies have also agreed to the C2PA standard for image tagging.

The EU also wants providers to ensure that AI-generated information is based on reliable sources as much as possible, and to alert people to potential errors in the generated content. The generation of inaccurate content should be minimized.

In addition, public media literacy should be strengthened, and providers should engage with relevant national authorities and other local stakeholders to escalate election-related issues and discuss solutions.

