AI regulation should target large language models

The CEO of OpenAI said that regulating small models "does not make sense"

OpenAI CEO Sam Altman believes that regulating small Artificial Intelligence (AI) language models "does not make sense" and advocates instead for regulating large models such as ChatGPT, because they are "the ones that can really do harm."

The head of the company behind tools such as Whisper and DALL-E 2 made these remarks at a colloquium held this Monday at IE University, where he was joined by OpenAI research scientist Mo Bavarian; Joe Haslam, executive director of the Owners Scaleup Program; and Elena González-Blanco, co-founder and CEO of Clibrain.

During the event, the OpenAI CEO remarked that regulation is "very important," but he also emphasized that over-regulating "is not good," since, in the case of companies building small language models, it would curtail their capacity for innovation and creation.

Altman also stressed the importance of privacy and the concern that exists around this issue at all levels. He said the company will continue "working with other governments around the world" to improve AI-driven language models and protect privacy.

In addition, he noted that OpenAI wants to build technology with which "people are comfortable and safe." "We want to do what people want. We want to create the product that people want to use," he said, referring to the work the company has been doing on ChatGPT to improve aspects such as privacy, data security, and information quality.

Source: dpa


