Thousands of people, including Elon Musk, signed the open letter "Pause Giant AI Experiments". See the letter's web page: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
AI systems with intelligence comparable to humans can pose profound risks to society and humanity, as shown by extensive research [1] and acknowledged by top AI labs [2]. As stated in the widely endorsed Asilomar AI Principles, advanced AI could bring about a profound change in the history of life on Earth and should therefore be planned for and managed with commensurate care and resources. Unfortunately, even in recent months, AI labs have been locked in a runaway race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control. This level of planning and management has not materialized.
With contemporary AI systems now becoming capable of competing with humans at general-purpose tasks [3], we must ask ourselves: Should we let machines flood our information channels with propaganda and disinformation? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that may eventually outnumber, outsmart, and replace us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable. This confidence must be well justified and must increase with the magnitude of a system's potential effects. OpenAI's recent statement on artificial general intelligence says: "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause, for at least 6 months, the training of AI systems more powerful than GPT-4. This pause should be public, verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt [4]. This does not mean a pause on AI development in general, merely a step back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities.
AI research and development should refocus on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
At the same time, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should include, at a minimum: new, capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks; a robust auditing and certification ecosystem; liability for harms caused by AI; adequate public funding for technical AI safety research; and well-resourced institutions for coping with the serious economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer": a period in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit the pause button on other technologies with potentially catastrophic effects before [5]. We can do the same here. Let's enjoy a long AI summer, not rush unprepared into a fall.




