EU releases guidelines on election security for social media giants and other entities covered by DSA

1. The European Union published draft election security guidelines for large platforms regulated under the Digital Services Act to mitigate systemic risks like political deepfakes while protecting fundamental rights.
2. Platforms like Facebook, Google Search, Instagram, and YouTube are required to identify and mitigate risks related to electoral processes, including disinformation targeting democratic processes.
3. The guidelines recommend enhanced content moderation resources, transparency around AI-powered content recommendations, and cooperation with oversight bodies to address election security risks.

The European Union has published draft election security guidelines for technology platforms with more than 45 million monthly active users in the region. These platforms, such as Facebook, Google, Instagram, TikTok, and YouTube, are regulated under the Digital Services Act and have a legal duty to mitigate systemic risks like political deepfakes while safeguarding fundamental rights.

The guidelines prioritize election security as a key area of enforcement for very large online platforms and search engines. Platforms are expected to deploy content moderation resources with adequate language coverage, respond effectively to identified risks, and act on reports from third-party fact-checkers to limit the spread of disinformation targeting democratic processes.

In moderating political content, platforms must distinguish between political satire, which is protected free speech, and malicious political disinformation designed to mislead voters. The DSA standard demands “reasonable, proportionate, and effective” mitigation measures to reduce risks related to electoral processes.

The guidelines recommend that platforms give users meaningful choices over algorithmic and AI-powered recommender systems. Platforms should downrank disinformation targeting elections and deploy measures to prevent the spread of generative AI-based disinformation. They are encouraged to engage in transparency, adversarial testing, and red-teaming to enhance their ability to identify and address risks.

To address election security risks, platforms are advised to reinforce internal processes and resources, particularly in the lead-up to elections. They should maintain dedicated teams with the necessary expertise, including relevant language knowledge, to mitigate risks effectively. Platforms are also urged to cooperate with oversight bodies, share information, and establish communication channels for tips and risk reporting during elections. While the guidelines themselves are not binding, failure to meet the underlying DSA obligations could result in penalties of up to 6% of global annual turnover.