AI: The good, the bad, or the ugly?
Jia Jia (innovation consultant; New York): Reading about Google’s AlphaGo crushing world champion Lee Sedol made me unexpectedly afraid. I’d not given much thought to AI and had assumed that warnings from the likes of Elon Musk, Stephen Hawking and Bill Gates were cautionary but no cause for alarm.
But knowing how the computer plays is unnerving. After his defeat in the first game, Lee commented of AlphaGo that “It did not play like a human at all.” In fact, AlphaGo had made an early mistake but didn’t lose its cool the way a human would. In the third game it withstood every unexpected attack and then played with an implacable calm, essentially ignoring Lee’s attempts to attack it and force a move, because it was following its own estimate of its probability of winning. It all felt like the Go version of Ex Machina.
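To make “following its probability of winning” concrete, here’s a toy sketch of the difference between a player that maximizes its winning margin and one that only maximizes its chance of winning at all. Every function and number below is made up for illustration; this is nothing like DeepMind’s actual system, just the shape of the objective:

```python
# Toy illustration (not DeepMind's code): two ways to pick a move.

def pick_move_by_margin(moves, margin):
    # "Human-style" heuristic: take the move that wins by the most points.
    return max(moves, key=lambda m: margin[m])

def pick_move_by_win_prob(moves, win_prob):
    # AlphaGo-style objective: winning by half a point counts the same as
    # winning by fifty, so the most reliable path to a win is preferred,
    # even if it looks passive and ignores the opponent's provocations.
    return max(moves, key=lambda m: win_prob[m])

moves = ["aggressive_reply", "calm_reply"]
margin = {"aggressive_reply": 15.0, "calm_reply": 1.5}     # expected points
win_prob = {"aggressive_reply": 0.62, "calm_reply": 0.78}  # chance of winning

print(pick_move_by_margin(moves, margin))      # -> aggressive_reply
print(pick_move_by_win_prob(moves, win_prob))  # -> calm_reply
```

A 1.5-point win at 78% beats a 15-point win at 62%, which is exactly why the play can look leisurely and unprovokable.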
I guess the main question is twofold:
1) What good and bad can people do with AI?
2) Will AI become self-aware, and if it does, what will happen?
I’d be less concerned about AI if we lived in a society that valued emotions, not just productivity and winning. Emotions make us human—they make empathy possible and with it, responsibility.
Anna (author and music/arts publisher; London): I am currently reading the Elon Musk biography by Ashlee Vance. Here is an excerpt from page 3: “the negotiations hadn’t begun, and Musk was already dishing. He opened up about the major fear keeping him up at night: namely that Google’s cofounder and CEO Larry Page might well have been building a fleet of artificial-intelligence-enhanced robots capable of destroying mankind.”
Let’s not be too hasty to pit AI against humans or to conflate human morality with machine prioritization…
Daniel (Architectural Engineer; New York): The American commentator Michael Redmond made a very interesting remark after either the second or third match. He said that we should welcome AlphaGo because it has the potential to provide new and creative approaches to playing the game of Go, just as famous human players have throughout the game’s long history. This comment rejects the premise that AI is adversarial to humanity. We should treat AlphaGo not as a foreboding sign of humanity’s limitations, but as a welcome addition to the game’s legacy.
It’s also worth noting that although AlphaGo ended up winning four of the five matches, there were still moments mid-game when AlphaGo’s developers saw it calculating its probability of winning at only around 50%. This highlights Lee Sedol’s skill: in spite of AlphaGo’s vast dataset of game sequences, Lee’s own brain was still able to put up a fight. So I wouldn’t call this a clear and decisive victory for AI over humans either. All in all, these matches should give people reasons to keep playing Go, rather than give up on the assumption that a computer will always beat them.
I can’t say I can give definitive answers to your two questions. I think they rightly highlight moral questions AI theorists have been arguing about for a while. I would like to reframe them, though: instead of talking about morality, let’s call it prioritization. How does an AI prioritize one criterion over another when making a decision? With Big Data and the Internet of Things, we humans are facing this problem ourselves. Technology has given us so much data that we don’t really know what to do with it. Computers can give us evidence that would have taken us ages to find, but we are left to weigh quantitative evidence against more qualitative kinds.
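To make “prioritization” concrete, here’s a toy sketch of a weighted-scoring decision. All the criteria, options and numbers are invented for illustration; the point is simply that somebody has to choose the weights, and that choice is where ideology enters the code:

```python
# Toy sketch: "prioritization" as a weighted score over criteria.
# All criteria, options and weights are invented for illustration.

def score(option, weights):
    # Each option is rated 0-1 on each criterion; the weights decide
    # which criterion wins when they conflict.
    return sum(weights[c] * option[c] for c in weights)

options = {
    "policy_A": {"efficiency": 0.9, "fairness": 0.4, "individual_rights": 0.3},
    "policy_B": {"efficiency": 0.5, "fairness": 0.8, "individual_rights": 0.9},
}

# Two "ideologies", encoded as nothing more than weight vectors.
ideologies = {
    "utilitarian":  {"efficiency": 0.6, "fairness": 0.3, "individual_rights": 0.1},
    "rights_first": {"efficiency": 0.2, "fairness": 0.2, "individual_rights": 0.6},
}

for name, weights in ideologies.items():
    best = max(options, key=lambda o: score(options[o], weights))
    print(name, "->", best)  # utilitarian -> policy_A, rights_first -> policy_B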
…Yet prioritization is surely informed by ideology, which is about morality?
Jia Jia: The point about decision-making in the era of big data is really interesting, as is the degree to which these decision criteria are embedded in the algorithms themselves. They have to be, at some level, and when they are, they reflect the ideologies of the human coders. This all reminds me of my moral reasoning class in college. What does it mean to maximize utility? Is it OK to kill 10 people in order to save 100? Can you decide to stop investing in services for disadvantaged minorities (e.g., installing ramps for disabled people) in order to use that money to benefit the majority? Of course, none of these dilemmas are new; our governments deal with them every day. But when AI automates policy implementation, our moral philosophies will have to be woven into the AI’s programming. To put it in stereotypical terms, will the US code AI that staunchly protects the individual’s rights, even when that slows down social progress? And will China code AI that maximizes the distribution of utility across the population, while curbing individual advantages?
One other question I have is about legal decision-making. Right now, the most that AI can do is read cases. So in the best scenario, lawyers don’t have to read 5,000 pages a week to find a relevant detail; the AI can do it for them. But what happens when AI cognition gets more sophisticated and the AI has been trained longer on legal expertise? Will we get to a situation where the AI understands what it reads to the extent that it can recommend a legal judgment? And perhaps that judgment will be better informed than a human’s ever could be…
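Even today’s “reading” is mostly statistical retrieval. Here’s a deliberately crude sketch, nowhere near a real legal-research system, of the conceptual skeleton: score documents by how strongly they match a query, and hand the lawyer the top of the list. The case texts and query are invented examples:

```python
# Crude sketch of "the AI reads the cases for you": rank documents by how
# often the query's words appear. Real legal-research tools are far more
# sophisticated; the case texts and query here are invented examples.
import re
from collections import Counter

def relevance(query, document):
    words = Counter(re.findall(r"[a-z]+", document.lower()))
    return sum(words[w] for w in query.lower().split())

cases = {
    "case_1": "The court held the contract void for lack of consideration.",
    "case_2": "Negligence requires duty, breach, causation and damages.",
}

query = "contract consideration"
ranked = sorted(cases, key=lambda c: relevance(query, cases[c]), reverse=True)
print(ranked)  # ['case_1', 'case_2'] -- case_1 mentions both query terms
```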
Anna: It has been said that only humans can be inhuman, hence I believe the coding of the AI that Jia Jia mentioned is the key to the problem. I am far from an expert on the topic, but if the AI is meant to replicate the human psyche, then it has a long way to go. What makes some people good and others evil? What makes some good people commit horrible acts?
We need a new economics to truly tap the full potential of AI…
Daniel: Another angle to consider is what AI means for the economy. The article points out that a lot of jobs are likely to be automated in the very near future. Now is the time to figure out where humans belong in that kind of economy and what value humans bring.
Jia Jia: I think we need a new economics that is not focused on productivity and underpinned by financial markets. Why is it that automation is always equated with job loss, and job loss with something negative? Hasn’t the entire history of technological advancement been about automating certain tasks so we get freed up to [INSERT X]? And the reason I write “[INSERT X]” is that I don’t think “WORK” is the answer. It’s just been the dominant ideology since the Industrial Revolution and the rise of modern economic theory, where, bizarrely, the one thing that Marxism, capitalism and puritanical Christian values agree on is the value of work.
Contrast this with the fact that for thousands of years it was considered good to lead a leisurely life in which others did your work for you; the aristocracy used to disdain work. Our current worship of work is compounded by the financial markets’ demand for endless growth, which, frankly speaking, I have trouble attributing to anything other than greed. I’m all for innovation and progress, but the natural equivalent of unlimited growth is cancer. So what will people do if they don’t work? Pay attention to the things that really nurture us as a society… like art, community-building, education and care-giving, perhaps?
Perhaps we’re getting ahead of ourselves… after all, gaming is different from reality.
Daniel: Another key premise in the discussion is that we are, in the end, talking about a game with well-established rules. Though there are far more possible sequences in Go than in chess, the game is ultimately a closed problem. Even before we talk about morality, it’s hard enough for AI today to pass the Turing Test, or even to play games like Zelda. A neural network has been developed to play Mario, but if you think about it, Mario is a rather linear game. Points aside, the primary way to win in Mario is just to get to the other side. There has yet to be an AI that can do the things you need to do in Zelda: learn to use a variety of weapons, collect items, use those items to unlock gates in the right order, and so on.
Perhaps the best way to view things for now, within the Go-playing world, is as a rivalry. Lee Sedol has reflected: “All the traditional or classical beliefs we have had about Go so far, I have come to question them a little bit based on my experience with AlphaGo. So I think I have more studying to do down the road.” It reminds me of what Magic Johnson has said about Michael Jordan: “Jordan’s a great player, definitely one of the best. But he didn’t have a Larry Bird. Someone on the other team that pushed him. Larry and I got so much better from playing each other.” (paraphrased)
Jia Jia: I guess to end, there are two optimistic things I’ve come across. One is that chess grandmaster Garry Kasparov, who was beaten by IBM’s Deep Blue, apparently said that the most powerful chess players today are “centaurs”: human-AI pairs that together play better than any human or AI alone. Even in these Go games, computer and human pushed each other to play moves (37 and 78 respectively) that were “one-in-ten-thousand,” which speaks to the point about new creative potential. So I’m waiting to see if the same will happen in Go and in life in general. Also, the SF writer Isaac Asimov seems to have had a very optimistic view of AI. His “I, Robot” stories are all about the moral logic of intelligent robots, and he more or less concludes that AI’s logical endpoint is to become a benevolent nanny to mankind, taking care of us while making sure not to intrude. Which is like the positive version of the Matrix.
In the end, as always, Silicon Valley nails it.
Tania (biomedical engineer; Boston): I’m surely more of an engineer than a computer scientist, but the future looks to me more like this epic clip from HBO’s Silicon Valley…
Tags: AI Google morality technology