Delightfully Geeky President Obama on Artificial Intelligence: “You Have to Have Somebody Near the Power Cord”


When I first got here I always imagined the Situation Room would be this supercool thing, like Tom Cruise in Minority Report, where he’d be moving around stuff. It’s not like that at all.

Can’t we keep President Obama … please?

On top of all his general wonderfulness, intelligence, warmth, and humor, our current Commander-in-Chief is a total geek who, not unlike certain parties around Oohlo Central, has his eye on what’s happening with artificial intelligence. Pardon me, sir, but are you watching Westworld?

The president recently sat down with MIT Media Lab director Joi Ito and Wired magazine for a marvelous discussion of AI, covering what he is and isn't concerned about, and along the way he dropped a bunch of cool science fiction references. Here are a few of the highlights, but you should definitely carve out a little time to read the entire piece.

On being a huge Star Trek fan:

I was a sucker for Star Trek when I was a kid. They were always fun to watch. What made the show lasting was it wasn’t actually about technology. It was about values and relationships. Which is why it didn’t matter that the special effects were kind of cheesy and bad, right? They’d land on a planet and there are all these papier-mâché boulders. [Laughs.] But it didn’t matter because it was really talking about a notion of a common humanity and a confidence in our ability to solve problems…Star Trek, like any good story, says that we’re all complicated, and we’ve all got a little bit of Spock and a little bit of Kirk [laughs] and a little bit of Scotty, maybe some Klingon in us, right? But that is what I mean about figuring it out. Part of figuring it out is being able to work across barriers and differences. There’s a certain faith in rationality, tempered by some humility. Which is true of the best art and true of the best science.

He is aware of the dangers of AI, but also knows the stage we’re at …

In science fiction, what you hear about is generalized AI, right? Computers start getting smarter than we are and eventually conclude that we’re not all that useful, and then either they’re drugging us to keep us fat and happy or we’re in the Matrix. My impression, based on talking to my top science advisers, is that we’re still a reasonably long way away from that.

… and what — and who — the danger really is:

If you’ve got a computer that can play Go, a pretty complicated game with a lot of variations, then developing an algorithm that lets you maximize profits on the New York Stock Exchange is probably within sight. And if one person or organization got there first, they could bring down the stock market pretty quickly, or at least they could raise questions about the integrity of the financial markets. Then there could be an algorithm that said, ‘Go penetrate the nuclear codes and figure out how to launch some missiles.’ If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems. I think my directive to my national security team is, don’t worry as much yet about machines taking over the world. Worry about the capacity of either nonstate actors or hostile actors to penetrate systems, and in that sense it is not conceptually different than a lot of the cybersecurity work we’re doing.

On the impossible decisions that a self-driving car can’t really make, despite “perfect” logic …

The technology is essentially here. We have machines that can make a bunch of quick decisions that could drastically reduce traffic fatalities, drastically improve the efficiency of our transportation grid, and help solve things like carbon emissions that are causing the warming of the planet. But Joi made a very elegant point, which is, what are the values that we’re going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?

… because part of what makes us human is our faults; “neurodiversity” is essential to creativity:

Part of what makes us human are the kinks. They’re the mutations, the outliers, the flaws that create art or the new invention, right? We have to assume that if a system is perfect, then it’s static. And part of what makes us who we are, and part of what makes us alive, is that we’re dynamic and we’re surprised. One of the challenges that we’ll have to think about is, where and when is it appropriate for us to have things work exactly the way they’re supposed to, without surprises?

President Obama does, thank goodness, have a healthy respect for generalized AI, which Ito mentions could happen within 10 years:

And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.


Read more of the fascinating discussion at Wired.

Cindy Davis

Cindy Davis has been writing about the entertainment industry for over seven years, and is the Editor-in-Chief at Oohlo, where she muses over television, movies, and pop culture. Previously Senior News Editor at Pajiba, and published at BUST.
