Facebook and Instagram owner Meta says it will form a team to tackle deceptive artificial intelligence (AI) content in the upcoming EU elections in June.
It is concerned by how generative AI - tech which can fake videos, images and audio - might be used to trick voters.
It comes on the same day Home Secretary James Cleverly told the Times some people will use AI-generated fakes to try to influence a general election.
But an industry expert said the plans could be seen as "lacking teeth".
The BBC has asked Meta if it has such plans for upcoming UK and US elections.
The announcement comes two weeks after Meta signed an agreement with other big tech firms committing to fighting such content.
The European Parliament vote will be held from 6 to 9 June this year.
Social media rival TikTok announced in February it would be launching so-called "Election Centres" in local languages within its app for each of the 27 EU member states, which will host authoritative information.
Meta head of EU affairs Marco Pancini said in a blog post that the firm, which also owns WhatsApp and Threads, would launch "an EU-specific Elections Operations Centre" that would "identify potential threats and put specific mitigations in place across our apps and technologies in real time".
"Since 2016, we've invested more than $20bn (£15.7bn) into safety and security and quadrupled the size of our global team working in this area to around 40,000 people," he said.
"This includes 15,000 content reviewers who review content across Facebook, Instagram and Threads in more than 70 languages - including all 24 official EU languages."
He said this meant bringing together experts from a range of different teams across the company, including those working in engineering, data science and law.
'Serious limitations'
But the announcement has shortcomings, according to Deepak Padmanabhan from Queen's University Belfast, who has co-authored a paper on elections and AI.
"Most of its planned strategy could be observed to lack teeth in substantive ways," he said.
One of the issues he has with Meta's strategy is how the firm plans to deal with AI-generated images, which he said "could be intrinsically unworkable".
He asked what would happen in a situation where realistic AI-generated images appear to show protesters clashing with police.
"Proving it to be fake requires that we are sure that there was no such attack by the policemen pictured on the farmers pictured - this may be infeasible both for technology or for human experts," he said.
"How can any technology label this as fake or real?
"Thus, it is not very clear as to how effective Meta's generative AI strategy could be - at the very least, there are serious limitations."
Meta, which currently works with 26 fact-checking organisations across the EU, said it would bring on board three more partners based in Bulgaria, France and Slovakia to help deal with the threat.
The role of these organisations is not to deal with content which is intended to suppress voting - these kinds of posts are banned - but rather to debunk content that is spreading misinformation, including when it involves AI-generated elements.
Mr Pancini said these types of posts would be given warning labels and made less prominent, as well as not being allowed in ads.
Ads cannot question the legitimacy of the vote, prematurely claim victory, or question "the methods and processes of election".
But he said the firm's work was a result of collaboration, and it would require further co-ordination in the future.
"Since AI-generated content appears across the internet, we've also been working with other companies in our industry on common standards and guidelines," he said.
"This work is bigger than any one company and will require a huge effort across industry, government, and civil society."
Source: BBC