Artificial Intelligence

The Potential Consequences of AGI

This is the second in a series of three articles dedicated to the subject of AI. The outline of each article was generated by AI software (using our choice of keywords), which provided a framework for the writing. Much of what you read on the internet is now generated by AI, which is why you see so much duplication. Our goal is to retain original thought, using AI for structure.

1. Introduction to Artificial Intelligence

2. The Potential Consequences of AGI (this article)

3. AGI: A Tiger by the Tail

Written by Terry Clayton and Elizabeth Harris

Artificial General Intelligence

Artificial General Intelligence (AGI) has captured the imagination of scientists, technologists, and futurists alike. Its reach extends far beyond narrow AI, which includes the robotics and computer vision currently being deployed in factories, homes, and wide-ranging digital systems (see our July blog post, Introduction to AI). Although not yet realized, AGI would transcend the narrow, task-specific functions of today's AI: intelligent systems that can perform any intellectual task a human being can do. So what we're talking about here is not just an evolution; it is a revolution beyond anything that has come before.

All technology is basically neutral. The application of that technology determines its effects. Like any powerful technology, AGI brings with it potential risks, challenges and a complex narrative. Chief among the risks and challenges are the existential dangers it poses to humanity, which can be amplified or mitigated depending on the context in which it is developed and deployed.

The discourse around AGI often focuses on the technical aspects, but the economic, societal and political dimensions are equally important. Like all technology, AGI is not developed in a vacuum; it is deeply influenced by prevailing socio-economic and political ideologies. In this article, we explore AGI's promising possibilities, its existential dangers, and the two ideologies that currently underlie and shape our socio-economic agreements, namely 'neoliberalism' and 'modern liberalism'.

AI’s Promising Potential

There is an entire scientific school of thought that champions AI's enormous potential. We believe it is valid – if well managed and regulated. Let's take a look at some of the promises embedded in that potential.

AI is good for business: Narrow AI as we currently know it enhances efficiency and throughput while creating new opportunities for revenue generation, cost savings and job creation. What kind of jobs will be created isn't quite as clear, or as openly discussed. The promise seems to be that AI will free humans from boring, repetitive tasks so we can be creative and do what we do best. A related idea has already been tested in the Nordic countries, where Finland piloted a universal basic income, without AI as the driver. AI will encourage a gradual evolution in the job market which, with the right preparation, could be quite positive. People will still work, but work will be redesigned and we'll work better with the help of AI. AI promises an unprecedented pairing of human and machine as the new normal in the workforce of the future.

AI can be applied to solving meaningful problems beyond our current capabilities. It is available 24/7 to process data relevant to some of the existential issues we are facing, such as climate change, pollution, and disease mitigation. It can do so far faster and more efficiently than humans and, depending on the quality of the data sets it is trained on, it is potentially capable of eliminating human error.

All of science benefits from the use of AI. Narrow AI is already assisting medical science with diagnosis, treatment and prognostication, and its use in other scientific fields and applications can only be imagined. The scale of complex data now being generated far exceeds our capacity to understand and analyze it. AI algorithms make vast amounts of data usable for analysis quickly, and in environments that are physically impossible for humans to handle.

The capability and speed of AI promise potential solutions to some of the most staggering problems we have ever faced in our history. The speed at which it will influence our evolution as a species is, at the moment, incomprehensible. Historically, the closest parallel is our transition from nomads to farmers, a change that spanned more than three thousand years. AI will compress that scale of change into less than a generation.

Potential Dangers of AGI

“Here we stand in the middle of this new world with our primitive brain, attuned to the simple cave life, with terrific forces at our disposal, which we are clever enough to release but whose consequences we cannot comprehend.” ~ Albert Szent-Gyorgyi

In addition to the socio-economic and political context, AGI poses several existential risks to humanity. These are inherent in the technology itself, causing the spotlight to shine directly on its advancement and application.

Every new technology is touted as the 'latest and greatest,' promising increased convenience, the ability to make our lives easier, our systems more efficient, and our society more robust. Nuclear energy is one such example. The scientists and mathematicians who worked on the atomic bomb believed it was necessary to stop Hitler and advance the US as a world power, but after leading its creation and seeing the devastating effects it had on humanity, Robert Oppenheimer called for international controls. He was blacklisted for doing so. The Three Mile Island, Chernobyl, and Fukushima reactors all delivered on the promise of efficient energy, until they melted down and caused calamities that humans struggled, and nearly failed, to mitigate. The Deepwater Horizon oil spill, the Bhopal catastrophe, and other lethal tragedies were caused by equipment failure, human error, and the unexpected, cascading effects of both. It's no secret that unintended consequences inevitably accompany any new technology. Each of these creations initially appeared desirable, necessary, exciting, and critical to economic and cultural progress; and there was some truth in those claims. But realistic planning for unintended consequences has never been one of our strong points.

One of the key existential dangers of AGI is the fixation on efficiency. In the pursuit of economic productivity, we may create AGI systems that prioritize efficiency over human well-being. Bureaucrats and professionals could easily become mere interfaces for algorithms. The medical profession is venturing into this territory as we write. That's how your doctor can so easily arrive at a diagnosis and the proper medications in the course of one office visit, or, even more curiously, one 'telemedicine' visit.

Although AI has the potential for unbiased decision making (as long as datasets and testing are unbiased), there is significant risk when it comes to social bias and inequality. AGI systems are trained on data generated by human societies. Without strict regulation and accountability, they can, and predictably will, perpetuate and amplify existing social biases, inadvertently or – in the wrong hands – intentionally. Given the varied values and biases of cultures across the globe, this alone could well become a gunnysack full of snakes.
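To make that mechanism concrete, here is a small illustrative sketch of our own (a toy example, not drawn from any real system; the group labels, thresholds, and use of Python with NumPy and scikit-learn are assumptions for illustration only). A simple model trained on historical decisions that were themselves skewed will, with no ill intent on anyone's part, reproduce the same skew in its predictions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B (hypothetical proxy attribute)
skill = rng.normal(0, 1, n)     # the quality we actually want to select for

# Historical "hired" labels: the same underlying skill, but group B was held
# to a higher bar. The bias lives in the training data, not in the algorithm.
hired = (skill > np.where(group == 1, 0.5, -0.5)).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"model hire rate {pred[group == g].mean():.2f}")
# The model reproduces the historical disparity almost exactly; nothing in its
# accuracy score would ever flag the problem.
```

The point is not the code but the mechanism: unless bias is measured and corrected for explicitly, 'letting the data speak' simply repeats what the data already says.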

Political overreach is another likely outcome in what could easily become a 'post-truth' world. We have already seen deep-fake images, rapidly spreading false narratives, and other intentional, reality-bending assemblages influence people's minds via social media during our elections. The power and capabilities of AGI could easily be used by authoritarian regimes to suppress dissent and control populations. This risk is particularly pronounced in societies with weak democratic institutions and little respect for human rights. Once people's minds are intentionally formed…well…it's not hard to see how we could end up in a world controlled and dominated by AGI.

The hard truth is that we simply don’t know, and can’t predict, the impact of AI on human beings, let alone the larger world. Unregulated, AGI could lead to the dehumanization of our societies and a loss of deeper levels of reality and humanity.

The Unsettling Merger of Modern Liberalism, Neoliberalism, and AGI

Before reading further, we want to clarify the term ‘liberalism.’ Liberalism is an ideology, philosophical view, and political tradition which holds that freedom and independence are the primary political values. Its roots are in the Western Age of Enlightenment, but the term now encompasses a diversity of political thought.

Encyclopedia Britannica defines 'neoliberalism' as characterized by “its belief in sustained economic growth as the means to achieve human progress, its confidence in free markets as the most-efficient allocation of resources, its emphasis on minimal state intervention in economic and social affairs, and its commitment to the freedom of trade and capital.” Capitalism is the primary tool of neoliberalism. Neoliberalism has been the dominant global economic paradigm since the late twentieth century, shaping our economies and our approaches to technological expansion and implementation. It is a complex ideology which can have a profound impact on the advancement and usage of AGI.

In contrast to the neoliberal approach, Modern Liberalism opposes corporate monopolies, opposes cuts to the social safety net, and supports a role for government in reducing inequality, increasing diversity, providing access to education, ensuring healthcare, regulating economic activity, and protecting the natural environment. Modern liberalism places greater emphasis on social equity and the public good. Again, the Scandinavian countries are good examples of this. In this type of democratic society, the development of AGI would likely be subject to greater public oversight and regulation. This could potentially mitigate some of the risks associated with AGI, but it also presents its own set of challenges. Some would argue that the democratic process is too slow to deal with imminently dangerous problems, and that speed is essential to solving these problems effectively. But we cannot turn a blind eye to the fact that, historically, the need for order can easily lead to authoritarianism, which, as human nature so clearly shows, always wreaks extreme havoc before imploding on itself.

The Rise of AGI in a Neoliberal Society

One of the key advantages of neoliberalism is its emphasis on innovation and competition. The belief in the efficiency of free markets has led to significant technological advancements, including in the field of artificial intelligence. By reducing barriers and fostering competition, neoliberalism has created an environment conducive to rapid technological progress. So rapid, in fact, that we already cannot keep up with how quickly change is occurring.

In a neoliberal society, the development of AGI is driven primarily by market forces. Because tech companies are motivated by astronomical potential profits, they invest heavily in AGI research and development. The competition between these companies fuels rapid technological innovation. But a market-driven approach can also amplify some of the inherent risks of AGI.

The market-driven approach to AGI can easily lead to a narrow focus on efficiency and profitability. Because of that, one of the key concerns lies with the current lack of regulation. In a neoliberal society, regulation is often viewed as a barrier to innovation and economic growth. As a result, AGI development may proceed with minimal oversight, thereby increasing the risk of unintended consequences or misuse. A fixation on efficiency can sideline important ethical considerations and lead to AGI systems that prioritize economic productivity over human and environmental well-being. This would significantly exacerbate the dehumanization of our societies and the loss of deeper levels of reality and humanity.

The Rise of AGI in a Modern Liberal Society

A socially democratic, modern liberal approach can mitigate some of the risks associated with AGI, but it cannot eliminate them entirely. The existential dangers of AGI, such as the potential for misuse or the risk of an intelligence explosion, are integral to the technology and cannot be removed through policy interventions alone. Greater regulation can ensure that AGI development is guided by ethical considerations and the public interest, yet it is rightfully argued that excessive regulation can stifle innovation and slow technological progress.

The potential risks associated with AGI are not merely theoretical. In Chapter 13 of my book, Facing the Moment, I reference what author L. S. Stavrianos called the dynamic tension between self-interest and the interest of the collective, or 'the retarded lead syndrome.' He explains that the trappings of control cause those who wield it to ignore the larger good in favor of their own interests and to rationalize why they cannot change. In fact, they fight against efforts that would force them to change, even if it means their downfall or demise. Applied to technology, the syndrome describes the phenomenon in which advancements outpace our ability to understand and manage their implications. We are at the edge of that right now.

There is more than enough evidence to argue that the assumptions of supremacy that humans stubbornly cling to must be traded in for the reality of life, which is a dissipative, self-regulating system in which we all play a part. Control is power, and power is addictive. Like any other addiction, the retarded lead syndrome is complex and seldom ends well for the addict or those they influence. It may be socially manageable for a time, but when it comes to putting the entire planet in peril, it becomes non-negotiable: reality (natural law, societal law, science-based facts) becomes the instrument of intervention. But what happens when we can no longer distinguish between AGI-contrived reality and human-based reality?

Possible Solutions to the Existential Dangers of AGI

Despite the risks, there are potential solutions. These include regulation, oversight, accountability, and transparency. Regulation is necessary to ensure that AGI development is guided by ethical considerations and the public interest. Oversight and accountability ensure compliance, and that those who develop and deploy AGI are held responsible for any harm caused. Transparency ensures that the public is informed about AGI and can participate in decision-making processes.

The future of AGI will be significantly influenced by the prevailing socio-economic and political ideologies. Ultimately, the challenge is to strike a balance between rapid technological progress and social responsibility on a global level. This will require international cooperation in policy and regulation.

Given the global nature of AGI and its potential impacts, countries must work together to develop common policies and regulations for AGI, something that cannot be done within the framework of competing nation-states. If we remain organized into competing nation-states, as we are now, the temptation will be to rely on AGI to make competition-based decisions concerning our very survival. This technology can only be regulated effectively by a global community acting in collaboration and cooperation. We're going to have to get much smarter than we are now geopolitically. Policies and regulations must strike a balance between encouraging innovation and managing risks. International cooperation is critical not only in mitigating global problems but also in ensuring that the benefits of AGI are shared equitably. This will require a global effort to address issues of social bias and inequality and to ensure that AGI is developed and deployed in a manner that benefits all of humanity. The first step is to create an international regulatory body.

The existential dangers of AGI are real and significant. However, they are not insurmountable. Whether we lean towards neoliberalism or modern liberalism, we must ensure that our approach to AGI is guided by the principles of social responsibility and the public good.

To stay informed about things that seldom get talked about in depth, we invite you and your circle of influence to sign up for Terry's newsletter. Together, we can navigate the challenges and opportunities of the rapidly changing world in which we live.

DISCLOSURE: The outline for this article was generated in less than 5 minutes by AI software. We used select keywords to generate the outline, which we pared down to the most relevant points we wanted to make. We back-filled with our own words, ideas and commentary about AI in order to create the edited content and the final message. The facts were fact-checked.

We chose to do it this way because we wanted you to understand, through experience, how AI is already being used across most media channels and industries. The technology continues to get smarter, and we wanted you to experience it before we disclosed the process, so that you are better able to apply critical thinking going forward into this 'brave new world.'