Is AI Already Conscious? An AI Professor Explains: Why Skynet Is More Than a Movie Plot, the Risks and Opportunities of AI Taking Jobs, and the Myth of AGI


The wave of artificial intelligence (AI) is sweeping the globe at unprecedented speed. From the astonishing capabilities of large language models to AI's profound impact on future social structures, each new advance is drawing widespread attention.

In a recent interview with GQ, Graham Morehead, an AI and Machine Learning professor at Gonzaga University, provided an in-depth analysis of key issues including AI types, development history, current hot topics like Grok's rise, AI's impact on the job market, the race for Artificial General Intelligence (AGI), and the future prospects and ethical challenges of AI.

The Current State of AI and Its Key Players

Professor Morehead first explained that AI can be broadly divided into two types. Type 1 AI resembles human intuition: like a neural network, it processes large amounts of information quickly and responds rapidly based on emotion or pattern recognition. Type 2 AI leans towards systematic logical computation, solving problems step by step.

Professor Morehead noted that Google's 2017 paper "Attention Is All You Need" and its BERT model laid an important foundation for subsequent developments. However, it was OpenAI's ChatGPT that truly ignited public enthusiasm.

Subsequently, models such as Meta's (formerly Facebook) Llama, the eponymous model from the French company Mistral, and China's DeepSeek emerged, creating a landscape of competing players. This intense "AI arms race" not only accelerates technological iteration but also signals that the commercial deployment and market penetration of AI will speed up further, with major tech companies already integrating AI into their core products and services to seize market opportunities.

The Deep Impact of AI on the Job Market and Society: Challenges Coexist with Transformation

Regarding concerns that AI could replace human jobs, Professor Morehead stated frankly: "AI will indeed replace many jobs." However, he also cited the example of automated teller machines (ATMs), pointing out that their proliferation did not lead to a net reduction in bank teller positions, but instead created new job demands and service models.

He emphasized that the prevalence of AI will force us to rethink the nature of work and encourage individuals to proactively learn to use AI as an assistant, for example by delegating repetitive or time-consuming tasks to AI to enhance their own productivity and creativity. In the future, new roles such as "managers" who coordinate multiple AI agents are expected to proliferate.

Notably, AI's social impact extends far beyond employment. Professor Morehead warned that because AI learns primarily from internet data, the biases, discriminatory speech, and misinformation (such as "flat earth theory") found online can be replicated and amplified by AI.

He reminded users to remain "extremely cautious" and to always approach AI-generated content with a critical mindset. Another unavoidable challenge is the massive energy consumption of AI development. For instance, the large AI computing center "Colossus" in Memphis, Tennessee, has a peak power consumption of about 50 megawatts and requires enormous amounts of water for cooling.

Professor Morehead predicts that if the number of AI training centers worldwide continues to surge over the next decade, the electricity demand of the AI industry alone could match the total consumption of a developed country, posing a severe challenge to global energy supply and environmental sustainability. Moreover, as AI-generated content becomes increasingly realistic, the lack of effective regulation and traceability mechanisms could severely erode historical authenticity and social trust.

Competition and Ethical Challenges of Artificial General Intelligence (AGI)

Among the ultimate goals of AI development, Artificial General Intelligence (AGI) is undoubtedly the most tantalizing milestone. Professor Morehead explained that most current AI is still "narrow AI," excelling only at specific tasks.

AGI refers to AI with broad cognitive abilities equal to or surpassing humans', capable of understanding, learning, and adapting to entirely new and complex environments. He noted that although the pioneers at the 1956 Dartmouth Conference optimistically expected AGI within about 20 years, reality proved the difficulty far exceeded their expectations.

However, once AGI is born, it might be followed by Artificial Superintelligence (ASI) with intelligence far beyond human comprehension. Professor Morehead likened ASI to a "virtual Einstein" capable of achieving scientific breakthroughs in an extremely short time that would take humans tens of thousands of years, such as solving mysteries of time travel or anti-gravity.

In this race towards AGI, the United States and China are undoubtedly in the lead. Professor Morehead observed that both countries possess top-tier AI research capabilities, with China holding an advantage in the number of AI researchers and STEM graduates. However, the two countries differ in their AI deployment philosophies: China's AI applications to some extent serve its system of social governance and surveillance, while the US focuses more on using AI to empower individuals, enhancing creativity and productivity.

He stated bluntly that in this competition "there is no second place": the country or entity that first attains ASI will gain an immeasurable strategic advantage, which fills AGI research and development with complex geopolitical considerations and potential risks. And as AGI and ASI edge closer to reality, the related ethical dilemmas grow increasingly prominent.

When asked whether AI should have rights, Professor Morehead answered no, based on his judgment that AI currently lacks consciousness, emotions, and self-will. As for the "Skynet"-style runaway-AI threat common in science fiction, he believes this is ultimately "a human choice," and that the international community should work together to establish norms ensuring AI development remains controllable and beneficial to humanity.

Especially in sensitive areas such as AI weaponization, responsible humans must remain in the final decision-making loop.

AI's Future Outlook: Opportunities and Risks Coexist

Looking ahead to the next decade, Professor Morehead is cautiously optimistic about AI's potential. He expects AI to bring revolutionary changes in many fields, especially biomedicine and healthcare.

He cited the AlphaFold model developed by Google DeepMind as an example: it successfully predicted the 3D structures of nearly all known proteins, greatly accelerating drug discovery and the understanding of disease. This promises breakthrough therapies for stubborn diseases such as cancer and Alzheimer's, and may even improve overall metabolic health and extend the human lifespan.

AI applications in mental health are also emerging. As early as 1965, the early AI program ELIZA could play the role of an "AI therapist," offering users a sense of emotional support. Professor Morehead believes that although AI itself lacks empathy, interacting with an AI therapist can give individuals a framework for introspection and sorting out their emotions.

However, opportunity often coexists with risk. AI-generated content keeps getting more realistic, and "deepfake" technology in particular poses potential threats to personal reputation, social trust, and even national security. Professor Morehead points out that, for now, real images can be distinguished from fake ones by analyzing details in AI-generated images that violate physical laws (such as lighting and shadows, or object persistence).

But he warns that as the technology advances, this method of identification will become increasingly unreliable. In the future, we may need to rely on more advanced AI detection tools, as well as technical measures such as embedding cryptographic signatures or digital watermarks at the source of credible content, to guarantee the authenticity of information.

Facing AI's rapid development, Professor Morehead emphasizes the importance of improving public "AI literacy." He suggests treating AI like a knowledgeable but occasionally biased "expert friend": actively making use of its capabilities while maintaining critical thinking and avoiding blind trust.

More importantly, everyone should draw their own ethical boundaries for using AI. Even if AI can write or create art, humans should cherish and exercise their own creativity and independent thinking. The true value of AI lies in freeing humans from tedious, repetitive labor so they can pursue more creative and meaningful work.

He also reminds us that current AI is based on large-scale pattern recognition and probability prediction (such as predicting the most likely next token); it does not truly "understand" the meaning of words, let alone distinguish "factual truth" from "popular opinion." Therefore, when relying on AI, independent judgment and verification remain crucial.
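The "most likely next token" idea the professor describes can be sketched with a toy example. The context and probabilities below are invented purely for illustration; real language models learn distributions over tens of thousands of tokens from vast training data.

```python
# Toy illustration of next-token prediction: given a context, a language
# model assigns probabilities to candidate next tokens and emits a likely
# one -- without "understanding" what the words mean.
# All probabilities here are made up for the example.

toy_model = {
    "the sky is": {"blue": 0.70, "clear": 0.20, "falling": 0.10},
}

def predict_next(context: str) -> str:
    """Return the highest-probability next token under the toy distribution."""
    distribution = toy_model[context]
    return max(distribution, key=distribution.get)

print(predict_next("the sky is"))  # -> blue
```

Note that "blue" wins simply because it is statistically most frequent after this context, which is exactly why such a system can repeat a popular view rather than a factual truth.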

While actively embracing technological innovation, human society needs to establish sound ethical norms, legal frameworks, and educational systems to ensure this powerful technology truly serves the well-being of all humanity, guiding us towards a more intelligent and wiser future.
