EU steps up monitoring of major platforms over generative AI risks ahead of elections

1. The European Commission has sent formal requests for information to Google, Meta, Microsoft, Snap, TikTok and X regarding how they handle risks related to generative AI.
2. The requests are being made under the Digital Services Act and relate to risks such as the dissemination of deepfakes and manipulation of services that can mislead voters.
3. The EU is planning stress tests after Easter to assess platforms’ readiness to deal with generative AI risks ahead of the June European Parliament elections.

The European Commission has sent formal requests for information to Google, Meta, Microsoft, Snap, TikTok, and X regarding how they are handling risks associated with generative AI. The requests were made under the Digital Services Act (DSA): these services are designated as very large online platforms and are therefore required to mitigate systemic risks, including those posed by AI technologies that generate false information, deepfakes, and other misleading content.

The Commission is requesting information on risk assessments and mitigation measures related to generative AI's impact on electoral processes, the dissemination of illegal content, the protection of fundamental rights, gender-based violence, the protection of minors, and mental well-being. It is also planning stress tests after Easter to assess the platforms' readiness to deal with generative AI risks, particularly ahead of the June European Parliament elections.

The EU is concerned about the growing risk of misleading deepfakes during elections as the cost of producing synthetic content falls. The forthcoming election security guidelines will target major platforms with guidance on specific risk scenarios, as part of building an enforcement ecosystem to address generative AI risks. Smaller platforms and the tool makers that enable the generation of synthetic media are also on the EU's radar for risk mitigation.

Platforms have until April 3 to provide the requested information related to election security, and the EU aims to finalize its election security guidelines by March 27. The Commission is also seeking to broaden the regulation's reach by indirectly pressuring smaller platforms through the larger ones and through self-regulatory mechanisms such as the Disinformation Code and the AI Pact.