Project Q*: OpenAI Worked on an AI Model that Threatens Humanity

World | November 25, 2023, Saturday, 11:01

Amid the turmoil in OpenAI's leadership, it emerged that the creators of ChatGPT had been working on an artificial intelligence project that could seriously threaten humanity. Several of the company's employees warned about this in a letter to its management.

The news was reported by Reuters, which cited its sources but noted that it had not seen the letter itself and that neither OpenAI's employees nor its management had commented on the matter.

Before OpenAI chief Sam Altman spent four days in what Reuters called "exile", company staff had expressed concern that a model under development could pose a threat to the world. The model, called Q*, is said by the sources to be able to solve simple math problems it is seeing for the first time. Unlike ChatGPT, which works on a statistical principle, it reportedly operates on the principle of rules and logic.

The ability of artificial intelligence to solve such problems on its own is considered a key stage in its development, one that could lead to the emergence of so-called artificial general intelligence: a powerful machine able to perform a wide range of tasks in a way characteristic of human intelligence, but at a level that greatly exceeds it.

Many experts believe that companies developing artificial intelligence should slow down work on these technologies until the risks to humanity are understood and mitigated.

In the year since the launch of ChatGPT, Sam Altman has repeatedly warned about the risks of developing artificial intelligence, both in meetings with world leaders and in a hearing before a US Senate committee.

"My biggest fear is that we, the tech industry, are going to cause serious trouble to the world. That could happen in a lot of different ways. If anything with technology goes wrong, it's going to go terribly wrong! And we want to be open about it. We want to work with the government to prevent this from happening," said OpenAI CEO Sam Altman.

OpenAI promises to work toward safe artificial intelligence that benefits people. While the European Union has drafted its own AI law, the US is still debating how, and whether, to introduce regulation.

"A whole chorus of voices is calling for regulation of artificial intelligence. But there needs to be some clarity about what that means, which aspects should we regulate? Which aspects are risky? Is it job destruction? Is it prejudice and discrimination? Because for each separate aspect needs a separate approach to regulation. It's very important not to swing a huge hammer just because everything looks like a nail to us. It's important to have nuances, because otherwise the regulation may not solve the particular problem," said Prof. Sarah Kreps of Cornell University.

According to some analysts, the personnel upheaval at the company stems from serious disagreements over how fast, and with what precautions, to move forward with the development of artificial intelligence.
