Technology companies commit to safer AI-generated content

The United States government called on Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI to commit to creating safer AI-generated content

Washington called on representatives of Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI to commit to working toward the creation of safe and transparent content based on artificial intelligence.

Amid the rapid advance in the use of artificial intelligence, fears, risks and disputes over veracity and transparency are emerging, which makes the White House's call especially relevant.

In response, these companies "committed to work in particular on systems for marking AI-generated content in order to reduce the risks of fraud and misinformation."

These risks are heightened by generative AI, whose highly realistic creations, such as widely circulated images or messages, are difficult to distinguish from genuine material.

Along the same lines, the companies involved have also committed to "test their software internally and externally before launch, invest in cybersecurity, and share relevant information about their tools, including possible flaws, with authorities and researchers."

Another government requirement is the development of reliable techniques that let users know "when content has been generated by AI, such as a watermark system," according to an official statement.

M.Pino

Source: swissinfo

(Reference image source: Steve Johnson, Unsplash)

