Ripon Forum


Vol. 57, No. 3


In this edition

With each new week seeming to bring a new advancement in artificial intelligence, the latest edition of The Ripon Forum examines the role of Congress in regulating AI and how our lives and our world may be reshaped and impacted in the years ahead.

The Role of Congress in Regulating Artificial Intelligence

As policymakers begin to tackle the issue of AI, it is vital that we maintain the agility of our technology and strike a careful balance between protecting consumers and protecting innovation.

How AI is Reshaping the Battlefield

Data, advanced algorithms, computing power – these are the weapons that will determine the fight for information.

AI and the Future of Schooling

Advances in artificial intelligence create new opportunities to tackle persistent challenges in schooling. But we must be clear-eyed about the technology and how it is used.

How AI is Reshaping Transportation

AI has emerged as a transformative force in transportation, one that will affect both how we use transportation – the demand side – and how we supply transportation facilities and services.

How AI can Reshape Lawmaking in the U.S. Congress

The integration of AI into the lawmaking process has the potential to significantly reshape the way laws are created and implemented — just ask ChatGPT.

Memo to Washington: AI Needs Your Full Attention … Now!

The development and deployment of this one specific type of AI technology — large generative models such as GPT-4 — is outpacing our ability to understand their strengths and limitations.

Bring Back Conference Committees

Like so many aspects of the legislative process, the Conference Committee has fallen victim to the dramatic shift of congressional power to party leadership.

Should America Continue to Accept Asylum Seekers? Yes.

America has always been a land of refuge and will continue to be so. That is the easy part of any debate about refugee and asylum issues.

Should America Continue to Accept Asylum Seekers? No.

Today, the U.S. has needlessly complicated the administration of refugee protection by creating two separate paths and processes: an alien overseas applies for refugee protection, while an alien at our border or inside the U.S. applies for asylum.

Ripon Profile of María Elvira Salazar

The Representative of Florida’s 27th Congressional District discusses her time in Congress and her legislative priorities.

Memo to Washington: AI Needs Your Full Attention … Now!

“Artificial Intelligence (AI) is a science and a set of computational technologies that are inspired by — but typically operate quite differently from — the ways people use their nervous systems and bodies to sense, learn, reason, and take action.”

So begins the 2016 report of the One Hundred Year Study on Artificial Intelligence that I led.  It is particularly important to understand from this definition that AI is not any single thing, but rather a collection of many different technologies.  Specifically, GPT-4, the most recent and most powerful generative AI model released by OpenAI, is one of many existing AI-based systems, each with different capabilities, strengths, and weaknesses.

The report continues:

“Unlike in the movies, there is no race of superhuman robots on the horizon….  And while the potential to abuse AI technologies must be acknowledged and addressed, their greater potential is, among other things, to make driving safer, help children learn, and extend and enhance people’s lives.”

Though much has changed in the seven years since this report was released, I still stand by these words.  If you’ve spent any time interacting with ChatGPT (and if you haven’t, you must!), I suspect that you’ve been very impressed with its capabilities.  It and other similar systems are able to generate text and images that are amazingly realistic.  But even so, they do not come close to fully replicating human intelligence, let alone surpassing it.  And as such, there is little risk that they will soon get out of control and pose an imminent “existential” threat to humankind — at least not anywhere near the degree that nuclear weapons and pandemics already do.


Nonetheless, I, along with many of my colleagues, recently signed an open letter that called for a public and verifiable pause by all developers of “AI systems more powerful than GPT-4.”

While I did not draft the letter myself, and do not believe that such a global, verifiable pause is remotely realistic, I signed in order to call attention to the potential for bad actors (i.e., human beings) to abuse AI technologies and to urge that efforts to understand and control the “imminent” threats be accelerated.

In my opinion, the development and deployment of this one specific type of AI technology — large generative models such as GPT-4 — is outpacing our ability to understand their strengths and limitations.  A flurry of innovation is still uncovering how these models can be used (and misused) and what their social, economic, and political impacts are likely to be.  We are all readjusting to a world in which realistic-sounding text, and realistic-looking images and videos, may have been created by a machine.  This upends long-held assumptions about our world, calling into question deeply ingrained notions such as “seeing is believing.”

As a result, we need to speed up and increase investments in research into understanding current models and how they can be constrained without losing their abilities.  To be clear, I do not at all advocate slowing down technological progress on AI.  The opportunity costs are too great.  But I implore everyone in a position to do so to urgently speed up societal responses, or “guardrails.”


One way to appreciate the urgency is to reflect back on the rollout of other disruptive technologies.  The Model T Ford was introduced in 1908, and it took more than 50 years to get to the point of 100 million automobiles in the world.  During those decades, we, as a society, gradually built up the infrastructure to support them and make them (relatively) safe, including road networks, parking structures, insurance, seat belts, air bags, traffic signals, and all sorts of other regulations and traffic laws.  Today, most people would agree that the benefits of automobiles outweigh their (not insignificant) risks and harms.  The same goes for things like electricity (which has caused many fires), airplanes (which have been used as lethal weapons), and many other technologies that have gradually become ubiquitous and shaped modern society.

ChatGPT reached 100 million users in a matter of weeks, rather than years or decades.  If the pace of release of new, more powerful LLMs (Large Language Models) continues as it has (or even accelerates), then we urgently need to speed up efforts to understand their implications (both good and bad) and craft appropriate, measured responses.

An essential role for universities in this effort is to rapidly increase the size of the AI-literate workforce, not only to satisfy the demand of private industry, but, more importantly, to help infuse governments and policy bodies with people trained in the details of AI.  I help lead multiple efforts at The University of Texas at Austin toward this end, including the Computer Science Department’s new online Master’s in AI and a university-wide interdisciplinary grand challenge project on defining, evaluating, and building “Good Systems.”

But these efforts are only one piece of the puzzle.  I thus call on everyone in a position to influence our society’s response to this challenging moment in technological progress to engage deeply and help make sure we get it right.  Policymakers need to take every possible action to minimize the chances that the harms will outweigh the benefits for any group of individuals or for society as a whole; and, of course, we must always keep our eyes open for any developments that could lead to a loss of control.

But most importantly, we need to fully support progress in the development and understanding of AI technologies that have the potential to improve our nation and the world in so many ways!

Dr. Peter Stone holds the Truchard Foundation Chair in Computer Science at the University of Texas at Austin. He is Associate Chair of the Computer Science Department, as well as Director of Texas Robotics.