How To Make AI The Best Thing To Happen To Us

Many leading AI researchers think that in a matter of decades, artificial intelligence will be able to do not merely some of our jobs, but all of our jobs, forever transforming life on Earth.

The reason that many dismiss this as science fiction is that we've traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. But such carbon chauvinism is unscientific.

From my perspective as a physicist and AI researcher, intelligence is simply a certain kind of information-processing performed by elementary particles moving around, and there is no law of physics that says one can't build machines more intelligent than us in all ways. This suggests that we've only seen the tip of the intelligence iceberg and that there is an amazing potential to unlock the full intelligence that is latent in nature and use it to help humanity flourish — or flounder.

If we get it right, the upside is huge: Since everything we love about civilization is the product of intelligence, amplifying our own intelligence with AI has the potential to solve tomorrow's thorniest problems. For example, why risk our loved ones dying in traffic accidents that self-driving cars could prevent, or succumbing to cancers that AI might help us find cures for? Why not grow productivity and prosperity through automation, and use AI to accelerate our research and development of affordable, sustainable energy?

I'm optimistic that we can thrive with advanced AI as long as we win the race between the growing power of our technology and the wisdom with which we manage it. But this requires ditching our outdated strategy of learning from mistakes. That helped us win the wisdom race with less powerful technology: We messed up with fire and then invented fire extinguishers, and we messed up with cars and then invented seat belts. However, it's an awful strategy for more powerful technologies, such as nuclear weapons or superintelligent AI — where even a single mistake is unacceptable and we need to get things right the first time. Studying AI risk isn't Luddite scaremongering — it's safety engineering. When the leaders of the Apollo program carefully thought through everything that could go wrong when sending a rocket with astronauts to the moon, they weren't being alarmist. They were doing precisely what ultimately led to the success of the mission.

So what can we do to keep future AI beneficial? Here are four steps that have broad support from AI researchers:

1. Invest in AI safety research. How can we transform today's buggy and hackable computers into robust AI systems that we can really trust? Be grateful that the last time your computer crashed, it wasn't controlling your self-driving car or your power grid. As AI systems get closer to human levels, can we make them learn, adopt and retain our goals? Suppose you tell your future self-driving car to take you to the airport as fast as possible, then arrive covered in vomit and chased by helicopters, complaining that that's not what you asked for. If your car answers "but that's exactly what you asked for," then you'll appreciate how hard — and important — it is for machines to fully understand our goals. Through the National Science Foundation and other agencies, AI safety research should be made a national priority. The Equifax hack affecting about half of all Americans was merely the latest reminder that unless we up our game, all our AI-powered technology can be turned against us.

2. Ban lethal autonomous weapons. We're on the verge of starting an out-of-control arms race in AI-controlled weapons, which can weaken today's powerful nations by making cheap, convenient and anonymous assassination machines available to everybody with an ax to grind, including terrorist groups. Let's stigmatize this with an international AI arms control treaty, just as we've already stigmatized and limited biological and chemical weapons, keeping biology and chemistry beneficial.

3. Ensure that AI-generated wealth makes everyone better off. AI progress can produce either a luxurious leisure society for all or unprecedented misery for an unemployable majority, depending on how the AI-produced wealth is taxed and shared. Many economists argue that automation is eroding the middle class and polarizing our nation. Ensuring that future technology makes everyone better off isn't merely ethical, but also crucial to safeguarding a healthy democracy.

4. Think about what sort of future we want. When a student walks into my office for career advice, I'll ask where she sees herself in the future. If she were to reply "perhaps in a cancer ward, or in jail," I would slam her planning strategy: I want to hear her envision a future she is excited about, after which we can discuss strategies for getting there while avoiding pitfalls. Yet humanity itself is making this very mistake: From The Terminator to The Hunger Games, our futuristic visions are almost all dystopian. Instead of getting paralyzed by fear like paranoid hypochondriacs, we all need to join the conversation about what sort of future with technology we want to steer toward. If we have no clue what we want, we're unlikely to get it.


Max Tegmark is a professor of physics at the Massachusetts Institute of Technology and the author of Life 3.0: Being Human in the Age of Artificial Intelligence. You can follow him on Twitter: @tegmark.

