Wednesday, October 22, 2025

Hundreds of Public Figures, Including Apple Co-Founder Steve Wozniak and Virgin’s Richard Branson Urge AI Superintelligence Ban

Worased Boontipchayakun | iStock | Getty Images

A group of prominent figures, including artificial intelligence and technology experts, has called for an end to efforts to create ‘superintelligence’, a form of AI that would surpass humans on essentially all cognitive tasks. Over 850 people, including tech leaders like Virgin Group founder Richard Branson and Apple co-founder Steve Wozniak, signed a statement published Wednesday calling for a pause in the development of superintelligence… Continue reading

By: Dylan Butts

Source: CNBC


Critics:

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. Philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”; for example, the chess program Fritz is not superintelligent despite being “superhuman” at chess because Fritz cannot outperform humans in other tasks.

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology to achieve radically greater intelligence.

Several futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The hypothetical creation of the first superintelligence may or may not result from an intelligence explosion or a technological singularity.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may allow them, either as a single being or as a new species, to become much more powerful than humans and displace them.

Several scientists and forecasters have been arguing for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies. The creation of artificial superintelligence (ASI) has been a topic of increasing discussion in recent years, particularly with the rapid advancements in artificial intelligence (AI) technologies.

Recent developments in AI, particularly in large language models (LLMs) based on the transformer architecture, have led to significant improvements in various tasks. Models like GPT-3, GPT-4, Claude 3.5 and others have demonstrated capabilities that some researchers argue approach or even exhibit aspects of artificial general intelligence (AGI).
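The transformer architecture behind these models is built around scaled dot-product attention, in which every token weighs every other token via softmax(QKᵀ / sqrt(d_k)) V. The snippet below is a minimal NumPy sketch of that single operation, offered as an illustration only rather than a reproduction of any particular production model:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Core transformer operation: softmax(Q K^T / sqrt(d_k)) V
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V                               # attention-weighted mix of the values

    # Toy example: 3 tokens with 4-dimensional embeddings
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
    print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 4)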

However, the claim that current LLMs constitute AGI is controversial. Critics argue that these models, while impressive, still lack true understanding and rely primarily on memorization. Philosopher David Chalmers argues that AGI is a likely path to ASI. He posits that AI can achieve equivalence to human intelligence, be extended to surpass it, and then be amplified to dominate humans across arbitrary tasks.

More recent research has explored various potential pathways to superintelligence:

Scaling current AI systems – Some researchers argue that continued scaling of existing AI architectures, particularly transformer-based models, could lead to AGI and potentially ASI.

Novel architectures – Others suggest that new AI architectures, potentially inspired by neuroscience, may be necessary to achieve AGI and ASI.

Hybrid systems – Combining different AI approaches, including symbolic AI and neural networks, could potentially lead to more robust and capable systems.

Artificial systems have several potential advantages over biological intelligence:

Speed – Computer components operate much faster than biological neurons. Modern microprocessors (~2 GHz) are seven orders of magnitude faster than neurons (~200 Hz); a quick check of this ratio follows the list.

Scalability – AI systems can potentially be scaled up in size and computational capacity more easily than biological brains.

Modularity – Different components of AI systems can be improved or replaced independently.

Memory – AI systems can have perfect recall and vast knowledge bases, and they are far less constrained than humans in working memory.

Multitasking – AI can perform multiple tasks simultaneously in ways not possible for biological entities.
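As a quick check of the speed figure in the list above (taking the ~2 GHz and ~200 Hz rates quoted there at face value), the ratio works out to roughly 10^7, i.e. seven orders of magnitude:

    import math

    neuron_hz = 200      # typical biological neuron firing rate (~200 Hz)
    cpu_hz = 2e9         # modern microprocessor clock rate (~2 GHz)

    ratio = cpu_hz / neuron_hz
    print(f"speed ratio: {ratio:.0e}")                      # 1e+07
    print(f"orders of magnitude: {math.log10(ratio):.0f}")  # 7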

Recent advancements in transformer-based models have led some researchers to speculate that the path to ASI might lie in scaling up and improving these architectures. This view suggests that continued improvements in transformer models or similar architectures could lead directly to ASI.

Some experts even argue that current large language models like GPT-4 may already exhibit early signs of AGI or ASI capabilities. This perspective suggests that the transition from current AI to ASI might be more continuous and rapid than previously thought, blurring the lines between narrow AI, AGI, and ASI.

However, this view remains controversial. Critics argue that current models, while impressive, still lack crucial aspects of general intelligence such as true understanding, reasoning, and adaptability across diverse domains. The debate over whether the path to ASI will involve a distinct AGI phase or a more direct scaling of current technologies is ongoing, with significant implications for AI development strategies and safety considerations.

Despite these potential advantages, there are significant challenges and uncertainties in achieving ASI:

Ethical and safety concerns – The development of ASI raises numerous ethical questions and potential risks that need to be addressed.

Computational requirements – The computational resources required for ASI might be far beyond current capabilities.

Fundamental limitations – There may be fundamental limitations to intelligence that apply to both artificial and biological systems.

Unpredictability – The path to ASI and its consequences are highly uncertain and difficult to predict.

As research in AI continues to advance rapidly, the feasibility of ASI remains a topic of intense debate and study in the scientific community. Most surveyed AI researchers expect machines eventually to be able to rival humans in intelligence, though there is little consensus on when this is likely to happen.

At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone. 

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft Academic Search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence was 2024 (mean 2034, standard deviation 33 years), with 50% confidence was 2050 (mean 2072, st. dev. 110 years), and with 90% confidence was 2070 (mean 2168, st. dev. 342 years).

These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.
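The figures above are summary statistics (median, mean, standard deviation) taken over respondents’ year estimates at each confidence level, after excluding the “never” answers. A minimal sketch of that kind of aggregation is shown below, using made-up illustrative responses rather than the actual survey data:

    import statistics

    # Hypothetical responses at one confidence level (illustration only,
    # NOT the actual survey data): each entry is a year or "never".
    responses = [2040, 2045, 2050, 2050, 2055, 2060, 2100, "never"]

    years = [y for y in responses if y != "never"]
    never_share = responses.count("never") / len(responses)

    print(f"excluded as 'never': {never_share:.1%}")
    print("median:", statistics.median(years))
    print("mean:  ", round(statistics.mean(years), 1))
    print("stdev: ", round(statistics.stdev(years), 1))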

In a 2022 survey, the median year by which respondents expected “High-level machine intelligence” with 50% confidence was 2061. The survey defined the achievement of high-level machine intelligence as the point when unaided machines can accomplish every task better and more cheaply than human workers. In 2023, OpenAI leaders Sam Altman, Greg Brockman and Ilya Sutskever published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.

In 2024, Ilya Sutskever left OpenAI to cofound the startup Safe Superintelligence, which focuses solely on creating a superintelligence that is safe by design, while avoiding “distraction by management overhead or product cycles”. Despite still offering no product, the startup became valued at $30 billion in February 2025. In 2025, the forecast scenario “AI 2027” led by Daniel Kokotajlo predicted rapid progress in the automation of coding and AI research, followed by ASI.

In September 2025, a review of surveys of scientists and industry experts from the last 15 years reported that most agreed that artificial general intelligence (AGI), a level well below a technological singularity, will occur before the year 2100. A more recent analysis by AIMultiple reported that “current surveys of AI researchers are predicting AGI around 2040.”
