Seven companies—OpenAI, Microsoft, Google, Meta, Amazon, Anthropic, and Inflection—have committed to developing watermarking technology to better identify AI-generated content. The Biden administration hopes this will make it safer to share AI-generated text, video, images, and audio without misleading others about their authenticity.
It is not yet known how the watermark will be implemented, but it will be embedded in the content so that users can identify which AI tool was used to create it.
Deepfakes have become a growing concern for internet users and policymakers alike as technology companies grapple with how to handle the misuse of AI tools.
Earlier this year, the image generator Midjourney was used to create fake images of Donald Trump being arrested, which later went viral. Although it was obvious to many that the images were fake, Midjourney banned the user who created them. Had a watermark existed then, the user, Bellingcat founder Eliot Higgins, might not have faced such trouble, since he said he wasn’t trying to be clever or deceive anyone but was simply having fun with Midjourney.
There are more malicious misuses of AI tools, however, where a watermark could help spare internet users pain and conflict. Earlier this year, it was reported that a speech-generating AI program was used to defraud people of thousands of dollars, and last month, the FBI warned of the increasing use of AI-generated deepfakes in criminal activity.
The White House said the watermark will help creativity with AI flourish while reducing the risk of fraud and deception.
OpenAI said in a blog post that it agreed to “develop robust mechanisms, including provenance and/or watermarking systems for audio or visual content,” as well as “tools or APIs to determine whether a piece of content was created with their system.” This will apply to most AI-generated content, with rare exceptions, such as the default voices of AI assistants.
“Audio that is easily distinguishable from the real thing or that is designed to be easily recognized as produced by a company’s AI system – such as the default voice of an AI assistant – is outside of this commitment,” OpenAI said.
Google said that in addition to watermarking, it will also integrate “metadata” and “other innovative techniques” to promote trustworthy information.
Amid growing concerns over the misuse of AI, President Joe Biden met with tech industry leaders today. The meeting should help inform Biden and Congress before they pursue an executive order and bipartisan legislation aimed at regulating the rapidly advancing AI technology.
In a blog post, Microsoft praised the Biden administration for creating a “foundation to help put the promise of AI ahead of its risks” and for “bringing technology companies together to solve the problems that will make AI safe, secure and beneficial to people.”
No single company can get AI right on its own, the Google blog added.
More AI safety measures promised
On top of developing watermarks for AI-generated content, tech companies made a number of other voluntary commitments, the White House announced on Friday.
Among them, the tech companies agreed to internal and external security testing of AI systems before release. They also said they would invest more heavily in cybersecurity and share information across the industry to help mitigate AI risks. These risks range from AI enabling bias or discrimination to lowering the barrier to developing advanced weapons, the OpenAI blog said. Microsoft’s blog also went beyond the White House’s pledges, including support for the development of a national registry of high-risk AI systems.
OpenAI said the commitments the tech companies made “are an important part of advancing meaningful and effective governance of AI, both in the US and around the world.” The maker of ChatGPT, GPT-4, and DALL-E 2 also promised to “invest in research in areas that can help inform regulation, such as techniques for assessing potential risks in AI models.”
Meta’s president of global affairs, Nick Clegg, echoed OpenAI’s statement, calling the tech industry’s commitments “an important first step in ensuring responsible guardrails are established for AI.”
Google described the commitments as “a milestone in bringing the industry together to ensure that AI helps everyone.”
The White House hopes that raising AI standards will improve safety, security, and trust in AI, according to a senior official quoted by the Financial Times. “This is very important to the president and the team here,” the official said.