Artificial intelligence could escape human control

The UK fears that artificial intelligence could escape human control. The technology also poses other risks and could, for example, be used to produce biological weapons.

The UK will host an AI Safety Summit on 1-2 November. The meeting, the first of its kind, is part of British Prime Minister Rishi Sunak’s drive to put the UK at the center of global technology competition and secure it a place in international AI governance.

Earlier this week, the British government published the summit programme, which focuses heavily on frontier AI models, a concept coined by the leading technology companies active in this field to describe high-performance models that can adapt to a wide variety of tasks.

While it is still not clear who will participate on behalf of the EU, the national representatives meeting in the Telecommunications Working Party, a technical body of the EU Council, received on Monday (October 16) a first draft of the communiqué that is to be adopted at the end of the summit.

This second version of the draft conclusions, seen by Euractiv, mentions both the opportunities and the risks of AI. In particular, it draws attention to the danger that AI models could become so complex that they act as independent agents and free themselves from human supervision.

“The most significant of these risks arise from potential loss of control or intentional misuse, where AI systems may seek to increase their own influence and reduce human control. These problems stem in part from the fact that these capabilities are not yet fully understood,” the text reads.

The wording confirms a report by The Guardian citing concerns in Downing Street that powerful AI models could be used to develop biological weapons or escape human control entirely.

In particular, the statement notes that such risks affect areas such as cybersecurity and biotechnology and could lead to intentional or unintentional harm on a potentially catastrophic scale.

“Given the rapid and uncertain development of AI, and in the context of accelerating investment in the technology, we reiterate that it is particularly urgent to deepen our understanding of these potential risks and of the measures to address them,” the document says.

In the draft conclusions, the signatories commit to taking international action to address AI risks and to working with all relevant stakeholders to ensure AI safety. At the same time, attention is drawn to the responsibility of frontier model developers to act transparently and responsibly in order to reduce risks.

The document also calls on countries to collaborate on common benchmarks for assessing the safety of AI and to develop a common, evidence-based understanding of the potential risks of AI. In particular, an international scientific network should be created to investigate the safety of frontier AI models.

At EU level, the AI Act is close to finalizing the regulatory approach to this emerging technology, although the discussion on how to address the most powerful models is still open.

The AI Act enters the negotiation phase

The European Parliament overwhelmingly adopted its position on the AI Act on Wednesday (June 14). This paves the way for inter-institutional negotiations that will lead to the conclusion of the world’s first comprehensive law on artificial intelligence.

[Edited by Nathalie Weatherald]

Regina Anderson

"Extreme gamer. Food geek. Internet buff. Alcohol expert. Passionate music specialist. Beeraholic. Incurable coffee fan."
