
AGI: A Tiger by the Tail

This is the third in a series of three articles dedicated to the subject of AI. The outline of each article was generated by AI software (using our choice of keywords), which provided a framework for the writing. Much of what you read on the internet is now generated by AI, which is why you see so much duplication. Our goal is to retain original thought, using AI for structure.

The first two articles in the series:

1. Introduction to Artificial Intelligence

2. The Potential Consequences of AGI

Written by Terry Clayton and Elizabeth Harris

“Success in creating AI will be the biggest event in human history. Unfortunately, it might also be the last, unless we learn to avoid the risks.” ~ Stephen Hawking

As a species, we have been evolving for only six million years from other proto-primates, and for only 200,000 years as modern Homo sapiens. When the last ice age ended, it took us several thousand years to adjust and adapt to the change. If we are to survive and thrive in a world with all the challenges facing us, the required changes will have to take place by mid-century. Because technology builds on itself and expands exponentially, narrow AI and Artificial General Intelligence (AGI) will usher in those changes within a decade.

So you can see the allure of Artificial Intelligence (AI) and Artificial General Intelligence (AGI), two of the most groundbreaking technologies ever developed. To be clear, AI is a general term for machine learning that simulates the processes of human intelligence. These processes include learning, reasoning, problem-solving, perception, and language understanding. Narrow AI (robotics, et al.) is task-specific. AGI, on the other hand, can apply knowledge across a wide array of tasks at a level equal to, and beyond, that of human beings, with potentially unpredictable results.

This technology is a game changer. The potential for unprecedented benefits and malicious applications, the ethical implications, and the need for regulation have sparked debates among scholars, policymakers and the general public alike. Because narrow AI has already become part of our everyday lives (cell phones, smart cars, and applications throughout a growing number of industries), we are not going to give up its benefits and conveniences. The promise of ‘a better life’ is the driver; the same driver that has ensured our survival since the beginning of time.

This genie is simply not going back into the bottle. It is therefore crucial that we understand AI and the importance of a framework to guide its development and use. At the urging of scientists, both Europe and the United States have taken first steps: the EU has proposed the Artificial Intelligence Act, and the US has created the Blueprint for an AI Bill of Rights.

Understanding the AI Bill of Rights

The Blueprint for an AI Bill of Rights, released by the Biden administration’s Office of Science and Technology Policy, is a set of principles designed to guide the creation and use of this new technology. It outlines the protections owed to people affected by AI and AGI systems and the responsibilities of those systems’ creators and users, from the right to transparency and accountability to that of integrity and dignity. Although some consider it toothless because it is non-binding, the AI Bill of Rights isn’t just a theoretical document. It has practical implications for everyone involved in the AI and AGI ecosystem. It is a map to ensure that AI and AGI are developed and used in a way that respects human rights, promotes social good, and mitigates potential harm.

It provides a common framework that can be adopted by countries around the world, fostering international collaboration in the development and use of these technologies. It promotes transparency in AI and AGI systems by mandating that these systems be explained in a manner that is understandable to humans. It promotes dialogue and cooperation among stakeholders, including governments, academia, industry, and civil society. It facilitates the sharing of best practices, the harmonization of standards, and the coordination of efforts to address shared challenges. Finally, the AI Bill of Rights encourages inclusivity and diversity, ensuring that the benefits of AI and AGI are shared widely and that the risks are managed collectively.

Read the full text of the Blueprint for an AI Bill of Rights.

How the AI Bill of Rights Impacts Future AI Development

The AI Bill of Rights is designed to play a pivotal role by providing clear guidelines and promoting transparency and accountability in the AI and AGI ecosystem, which builds public trust in these technologies and their applications. This, in turn, encourages further investment and research in the field of AI and AGI. Furthermore, it works to ensure that the benefits of this technology are shared widely, that inclusivity is promoted, and that the technology is used to bring about positive societal change. Finally, it provides a framework of accountability under which those who create and deploy AI and AGI systems are held responsible, fostering a culture of responsibility and integrity.

The Importance of Regulatory Models

Regulatory models serve as a framework to ensure the safe, ethical, and responsible use of technology, to help prevent its misuse, and to mitigate potential risks and harms. In AI, these models are crucial for several reasons. First, they provide guidelines to ensure that systems are built with respect for human rights and ethical considerations. Second, they provide a framework for accountability, with mechanisms for redress in case of harm or wrongdoing; if an AI system causes harm, regulation gives those affected avenues to seek justice. Lastly, regulatory models provide a basis for international cooperation in the governance of AI and AGI.

International cooperation is needed to establish regulatory models that will work. The devil is in the details. As to what constitutes an independent body, we need to back up a bit and recognize that there are three dominant forces that significantly influence our global structures: capitalism, authoritarianism and democracy. Regulatory regimes for environmental management, business, trade, transportation, health, safety, human rights, et al. were conceived within, and operate under, these forces. They are already in place, with mixed reviews as to their overall success. They are works in progress, with much room for improvement. The upside is that there are models out there, and we don’t have to start from scratch when it comes to international cooperation. (One of the more successful is the Montreal Protocol, which has dramatically reduced ozone-depleting chemicals.) Learn more.

Since the United Nations already acts as a regulatory body providing global oversight on a number of issues, it is well positioned to ensure global cooperation. The urgent questions of our time demand that we cooperate, and the regulation of AI is one of those urgent questions. You’ll find a complete discussion of regulation and oversight at the World Economic Forum.

The Future of AI and the AI Bill of Rights

International cooperation in general, and the US Blueprint for an AI Bill of Rights in particular, is crucial for the future of the technology, and quite possibly for our ability to solve the problems facing our species and our planet. It provides a comprehensive framework to guide the development and use of these technologies. It insists on ethical AI and AGI, promotes transparency and accountability, and encourages worldwide cooperation. It is a step towards a future where AI and AGI are used responsibly and ethically for the benefit of all. It is a testament to our collective commitment to harness the power of this technology for social good, while mitigating the risks and challenges.

As we collectively continue to explore the potential of AI and AGI, let us remember that the AI Bill of Rights is not just a document. It is a roadmap to a future where AI and AGI are used to enhance our lives, not to diminish them. It is a testament to our commitment to the ethical and responsible use of technology. Let us also remember that new technologies are neutral; their use depends on the direction of the humans who develop them. With AGI, there is now the possibility that machines will develop technology beyond the control, and even without the knowledge, of humans.

In our opinion, humans are actually more dangerous than AI. We have the track record to prove it. In the developed world, greed operates on the principle of ‘shoot first and sort it out later,’ but in the face of powerful technologies such as AGI, that is a dangerous attitude. We have a history of following the edict that greed is good because it fuels advancement. Yet here we are, facing environmental, governmental, societal and other worldwide problems caused by heedless growth, driven to advance without regard for the costs.

Whether we expire or inspire will be decided by this current generation. So the question of who or what is the tiger and who or what is the tail is front and center, and it is not easily answered. AI and AGI hold great appeal. Machine learning can respond lightning-fast with calculated decisions that would take humans years to work out. Those decisions, of course, are predicated upon the quality of the data used for those calculations, which is gathered from the data-dense Cloud. Computer programmers have an acronym for this, GIGO: garbage in, garbage out. The output is only as good as the input. Considering that human nature is what it is, regulation is our best defense.

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.” ~ Elon Musk, speaking at MIT’s AeroAstro Centennial Symposium

DISCLOSURE: This is the third article in a three-part series on Artificial Intelligence. Its outline, and some of the text, was generated by AI (machine learning). We’ve edited the content, adding our thoughts and commentary.

Your comments are welcome.