From May, Meta will begin labeling AI-generated content on Facebook and Instagram, the tech giant said, a shift from its previous approach of removing such material. "Made with AI" labels will be applied to photos, audio, and video created with artificial intelligence, the company said in a blog post on Friday.
The labels will be applied automatically when Meta detects "industry-shared signals" of AI-generated content, or users can opt in by disclosing that something they posted was made with AI.
Meta said a more prominent label could be applied if content poses "a particularly high risk of materially misleading the public on a matter of importance."
The "manipulated media" policy of Meta only covers videos that were "created or altered by AI to make a person appear to say something they did not say." Instead of being labeled, content that breaks this code is taken down.
The expanded policy now also covers images, audio, and video of people "doing something they did not do." Unlike the previous approach, this one is less strict: offending content will be labeled but allowed to stay online.
"Our manipulated media policy was written in 2020, when realistic AI-generated content was rare and videos were the main concern," the company said. "In the last four years, and especially the last year, people have made other kinds of realistic AI-generated content, such as audio and photos, and this technology is changing quickly."
US regulators recently banned AI-generated "robocalls" after a computer-generated Joe Biden urged New Hampshire residents not to vote in the state's Democratic primary. Meanwhile, the White House has promised to "deal with" the problem of non-consensual pornography after fake nude photos of pop star Taylor Swift went viral on social media. Former US President Donald Trump has also weighed in, claiming that US media outlets were using AI to make him appear larger in photos.
Meta is not the only Big Tech company using labels to fight fake content. TikTok began asking users to label their own AI-generated content last year, and other users can report material they suspect was made by AI. Last month, YouTube introduced a similar honor-based scheme.
With major elections in the EU in June and the US in November, politicians have urged tech companies to act on AI-generated "deepfakes," which they say could be used to mislead voters. More than a dozen industry leaders, including Microsoft, Meta, and Google, pledged earlier this year to "help prevent deceptive AI content from interfering with this year's global elections."
While platforms like TikTok and YouTube currently rely on honor systems, they may soon be forced to adopt an approach closer to Meta's. Under a provision of the EU's AI Act that takes effect next summer, tech companies can be penalized if they fail to detect and label AI-generated content, even text "published with the purpose to inform the public on matters of public interest."