Melanie Mitchell, a professor of complexity at the Santa Fe Institute and a professor of computer science at Portland State University, acknowledges the powerful accomplishments of “black box” deep learning neural networks. But she also thinks that artificial intelligence research would benefit most from getting back to its roots and exchanging more ideas with research into cognition in living brains. This week, she speaks with host Steven Strogatz about the challenges of building a general intelligence, why we should think about the road rage of self-driving cars, and why AIs might need good parents.
Listen on Apple Podcasts, Spotify, Android, TuneIn, Stitcher, Google Podcasts, or your favorite podcasting app, or you can stream it from Quanta.
Melanie Mitchell: You know, you give it a new face, say, and it gives you an answer: “Oh, this is Melanie.” And you say, “Why did you think that?” “Well, because of these billions of numbers that I just computed.” [LAUGHTER]
Steve Strogatz [narration]: From Quanta Magazine, this is The Joy of x. I’m Steve Strogatz. In this episode: Melanie Mitchell.
[INTRO MUSIC PLAYING]
Mitchell: And I’m like, “Well, I can’t under— Can you say more?” And they were like, “No, we can’t say more.”
Steve Strogatz: Isn’t that unnerving, that it’s this great virtuoso at these narrow tasks, but it has no ability to explain itself?
Strogatz: Melanie Mitchell is a computer scientist who is particularly interested in artificial intelligence. Her take on the subject, though, is quite a bit different from a lot of her colleagues’ nowadays. She actually thinks that the subject may be adrift and asking the wrong questions. And in particular, she thinks that it would be better if artificial intelligence could get back to its roots and make stronger ties with fields like cognitive science and psychology, because these artificially intelligent computers, while they’re smart, are smart in a way that is so different from human intelligence.
Melanie’s been intrigued by these questions for really quite a long time, but her journey got started in earnest when she stumbled across a really big and really important book that was published in 1979.
Mitchell: So, I majored in mathematics. I didn’t really know what I wanted to do. I was teaching high school math and I read a book called Gödel, Escher, Bach: An Eternal Golden Braid, which is a very unwieldy title [LAUGHTER] by Douglas Hofstadter. Ultimately, the book was about how something like intelligence can emerge from something like the brain, which has about 100 billion neurons. And individual neurons are not themselves intelligent, so somehow this phenomenon that we don’t understand very well but that we value very much is emergent from that complex system. And that’s what Gödel, Escher, Bach was really all about, how that could possibly happen.
So, I was just blown away by this, and I thought, “This is…” You know, and it was also about how we might be able to do this in machines. So, I read this book. It’s a very long book, maybe 700 pages or something. [LAUGHTER]
Mitchell: It’s very complicated, but I was just… It just transformed my life, and I decided I had to become an AI researcher. I’d never taken a single computer science class.
Strogatz: Having read this book, what did you decide to do? I mean, what were you going to do about it?
Mitchell: At the end of the year, I decided I was going to stop teaching, and I was living in New York, but I decided I would move to Boston. And I got a job doing, actually, some computer work at an astronomy lab at Harvard. That enabled me to take some computer science classes. [LAUGHTER]
Mitchell: But amazingly I was actually on the campus at M.I.T. one day and I saw a poster advertising a talk by Douglas Hofstadter. I was like, “Oh, my God, I have to go to this. This is incredible.”
Mitchell: Because I had actually already applied… I had applied to graduate school and I’d applied to where he was at Indiana University. And I had written to him, asking him, you know, if I could come work with him, and he never replied. And so, I went to the talk, and I waited in line to talk with him, you know, with a huge crowd of people, [LAUGHTER] most of them similarly wanting to go into AI. I talked to him and he — you know, he was very nice, but nothing really came of it.
So, I started calling his office, and I would leave messages for him. And he never answered them. [LAUGHTER] So, I figured — he was never there when I called, so I figured if he’s not there during the day, he must be there at night. So, I called him once at like 10:00 p.m., and he answered the phone right away. He was super — in a super good mood. [LAUGHTER] And I asked him if I could come and talk to him about being his student, and he said, “Sure, come by tomorrow.”
Strogatz: There’s some good general intelligence right there.
Mitchell: Exactly, exactly. [LAUGHTER]
Strogatz: But — so, really — so, wait. He’s… So, he’s on sabbatical or whatever it was. He’s in Boston. You just say, “Could I come over?” “Yeah, come over.”
Mitchell: Yeah, so I came over, and we chatted, and he invited me to come work for him.
Strogatz: Wait a second. There has to be a little bit that you’re leaving out there. Did he say, “Who are you?” I mean, he —
Mitchell: Well, we chatted for a while, and I talked about my background and, you know, why I was interested in what he was doing, and he told me a little bit about the projects he was working on. And he hired me first as a summer intern, and then he was moving to the University of Michigan, so I did a last-minute application for graduate school there, and they accepted me, and I went.
Strogatz: Now, during this whole thing, was your family in the loop?
Mitchell: A little bit, yeah. They were… I mean, my family had already kind of gotten used to me being — doing unconventional things.
Mitchell: Yeah. [LAUGHTER]
Strogatz: What were some other ones in your past?
Mitchell: The first year after college, I spent a year as a volunteer in the peace movement in Northern Ireland. Now, my family completely freaked out about that. They thought that was insane.
Strogatz: Just too dangerous?
Mitchell: Yeah. But actually, I mean, the funny thing was… Because I spent a year doing that, and when I was coming back for a job in New York City, all the people I knew in Belfast were saying, “How can you go to such a dangerous city?” [LAUGHTER] So anyway, the thing that — my experience in Northern Ireland, particularly, intrigued Doug Hofstadter.
Strogatz: Really? Huh.
Mitchell: I think that much more than any scientific experience I had had.
Strogatz: I get the feeling from reading his stuff that he would admire fearlessness.
Mitchell: And maybe the fearlessness to call him up in the middle of the night. [LAUGHTER]
Strogatz: That, too.
Mitchell: Maybe no one else had tried that.
Strogatz: Something. Okay, whatever — you did the right thing. And then what happened? So, then you’re working with him — well, so he’s in Michigan. He moved from Indiana. You moved to be a grad student in Michigan.
Mitchell: Yeah. So yeah, so I went through graduate school, and my project was to develop a computer program that could make analogies. So, this was Hofstadter’s biggest interest, and he thought this was really the important thing in intelligence, was the ability to make analogies. And, you know, he didn’t mean analogies like, you know, we used to have on the SAT: Shoe is to foot as glove is to blank. [LAUGHTER] They used to have…
Mitchell: He was thinking more of the kinds of unconscious analogies that we make all the time. When your friend tells you a story, and you say, “Oh, yeah, the same thing happened to me.” They say something about, you know, they had an argument with their wife about loading the dishwasher, and you say, “Yeah, the same thing happened to me.” But you didn’t have an argument with their wife about loading their dishwasher. You’re mapping this into your world.
Mitchell: And this kind of thing — this is invisible to us in our normal daily life, that we’re just constantly making analogies. And this is what he wanted to try and model in a computer.
Strogatz: When Melanie and her old advisor Douglas Hofstadter speak of analogies, they’re using the term in a way that’s a bit different from how most of us would use it. I would say what they really have in mind are analogous situations, like when one situation is like another one in some way. For instance, you know, it could be something that reminds you of something else. Like you hear a piece of music and you think, “Oh, yeah, that reminds me of that old Beatles’ song.” There’s just something about it that reminds you — it could be played in a totally different way, but you recognize the essential similarity, that it’s basically the same thing.
Mitchell: Analogy is the process of kind of getting to the gist of what you mean. And that’s something we humans are very good at, but nobody’s figured out how to get computers to be good at that yet.
Strogatz: Hmm. And you used the word “mean,” of course, which is the essence here, isn’t it? That — what is meaning? “That’s not what I meant,” or “I did mean that.”
Mitchell: Right, and that’s hard to define. What is meaning? You know, I can understand what you mean when you’re talking to me because we share kind of a common background of knowledge about the world. And common culture, and similar brains probably, you know, as we’re both humans. We’ve had kind of similar experiences. But how do you give a computer the ability to do that? That’s very hard.
Strogatz: So, you said your dissertation project was about that, to try to develop a — what, a software that could…?
Mitchell: Yeah. So, the first problem then is to say, “Well, what problem do we have to solve?” So, Doug Hofstadter had developed this — what he called the — the program was called Copycat, because his idea was that doing analogy is being a copycat. It’s like doing the same thing but in a different situation. He had developed this sort of domain or world for Copycat that involved analogies between strings of letters.
Strogatz: So, Melanie, with this Copycat project, tried to devise a kind of program for exploring analogies inside a computer. You can’t really ask a computer to try to do analogies that are as complicated as, say, musical comparisons or finding analogous situations in real life. Those are just too rich for artificial intelligence to deal with at the present time. But what they can do is solve puzzles, analogy puzzles about letter strings. Like, here’s one that’s a classic in Copycat. ABC is to ABD as PQR is to what? So, you know, that’s an interesting little puzzle. You can think about how you would answer it. But computers can think about it, too, in these very restricted worlds of just sequences of letters. And what’s interesting is that by watching how they answer, we can start to learn how they make analogies. We can start to understand what they think about.
And it’s not that we’re so interested in letter strings. I mean, Melanie’s point is that we’re interested in this broad problem of how you can probe analogies inside an artificial intelligence and thereby get to the essence of what meaning is in real life.
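A letter-string puzzle like this can be played with in a few lines of code. The sketch below is a toy under one big assumption: it hard-codes a single rule, “replace the last letter with its alphabetic successor.” It is nothing like the real Copycat architecture, which weighs many competing interpretations stochastically; the function names are invented for illustration.

```python
# Toy letter-string analogy: if "abc" changes to "abd", what does
# "pqr" change to? One rule only, purely for illustration.

def successor(ch: str) -> str:
    """Next letter of the alphabet: 'c' -> 'd'."""
    return chr(ord(ch) + 1)

def solve_analogy(source: str, target: str, probe: str) -> str:
    """If `source` changes to `target`, return what `probe` changes to."""
    if target == source[:-1] + successor(source[-1]):
        # Rule recognized: the last letter was replaced by its successor.
        return probe[:-1] + successor(probe[-1])
    raise ValueError("rule not recognized by this toy solver")

print(solve_analogy("abc", "abd", "pqr"))  # -> pqs
```

Even this tiny example hides a judgment call: a rigid answer like “pqd” (literally copy the letter D onto the end) is also defensible, and probing which answer a system prefers, and why, is exactly the kind of question Copycat was built to explore.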
Another thing about Melanie is that she provides a much-needed corrective to a lot of the hype that we’re hearing nowadays about artificial intelligence. I can remember a conversation with her that I had on Twitter where I was carrying on about an advance having to do with a program called AlphaZero, a chess-playing program that had played some really beautiful, intuitive games. And it got me thinking, and I even wrote about it in The New York Times, that maybe these programs can go beyond playing chess to actually doing science. Well, Melanie poured some serious cold water on that idea and helped straighten me out.
Mitchell: This is one of the most surprising things about artificial intelligence today: how well these systems can do at particular narrow tasks. If a human could do that, we would say, “Boy, that person is really brilliant.” And we would assume that they could be brilliant in many areas. And if they have incredible intuition about Go, we would think, “Well, they probably have intuition about other things.” But the strange thing is that these machines don’t seem to be able to transfer what they’ve learned, their brilliance about chess or Go, into any other areas. Transcribing spoken language, mapping out routes for us in our cars, all of the things that these machines do so well: it seems like you’d need general intelligence to do these things well. But it turns out you don’t. AlphaGo and AlphaZero don’t seem to be able to transfer their kind of brilliance to any other domain than the one that they’ve been trained on.
Now, one thing about — you know, you sort of implied that if a machine could be so brilliant, and have so much incredible intuition about playing chess, for example, that maybe it could do the same about science. But chess and science are very different.
Mitchell: You know, chess has specific rules and has kind of discrete states, meaning the chessboard is in a certain state at any time, and there’s only a relatively small number of possible moves you could make at any turn.
Strogatz: True, yep. True.
Mitchell: So, in that sense, it’s very different from the real world, in which there’s just a seemingly infinite number of possibilities, and it’s not constrained in the same way. Back in the early days of AI, a lot of people believed that if a machine could play chess, it would have to have general intelligence, human-level intelligence.
Strogatz: So, when you refer to general intelligence, is that — as opposed to what, narrow intelligence, or what?
Mitchell: Yeah, I guess that’s kind of a buzzword in AI, and it means kind of the goal of AI, the original goal at least: to have a machine that could be like a human, in that the machine could do many tasks and could learn something in one domain. Like, if I learned how to play checkers, maybe that would help me learn better how to play chess or other similar games, or I could even use things that I’d learned in chess in other areas of life. We sort of have this ability to generalize the things that we know or the things that we’ve learned and apply them to many different kinds of situations. But this is something that’s eluded AI systems for their entire history.
Strogatz: I’m not sure I really understand the idea. That if a person or a computer is said to have general intelligence, they will be good at many things that will require intelligence? That’s the idea, right? If you’re smart, you’re smart.
Mitchell: That’s the idea. If you’re smart, you’re smart. And I would say even further, if you’ve learned something, you’ve learned something. So, you know, if I say that I’ve learned, for example, to recognize faces, then I can recognize faces even if the lighting is different or if somebody’s grown a mustache or is wearing glasses or, you know, any number of possible alterations, that we’re pretty good at adapting to changes in the world.
Strogatz: I see. So, like, when we are challenged by these online systems that ask us to prove we’re not a robot by recognizing the letters, and they’re all gunked up and distorted and they’ve got slashes through them and stuff, that’s called CAPTCHA, right? This technology —
Strogatz: That’s something we find kind of ridiculously easy. That’s still hard for computer vision?
Mitchell: Yes, yes.
Strogatz: But that’s not, for us, because we know how to recognize letters?
Mitchell: Well, the letters — you know, one thing you might’ve noticed is that there are fewer CAPTCHAs that use the letters and more that use pictures.
Strogatz: Yeah, lately, right, yeah.
Mitchell: And that’s because the ones that have used the letters, those are now vulnerable to computers. Computers can now solve those.
Strogatz: Really? Okay.
Mitchell: But the ones that use pictures are much harder.
Strogatz: Is that because of neural networks getting better? Or new styles of AI, or what?
Mitchell: Partially, yeah, and not only neural networks but also other techniques in vision. People have cracked the letter CAPTCHAs, but the image CAPTCHAs, where it says, like, you know, “Click every box where you see a car,” those kinds are harder for machines. You know, recognizing things in general images is a much harder problem than recognizing letters.
Strogatz: Okay, so you’re saying this is sort of something where human beings have a big, big edge at the moment, and that’s because we have general intelligence, or what?
Mitchell: Well, we’re certainly better at visual recognition than machines today. But let me just go back to the checkers versus chess example, because probably that was a bad example. Let me give you one interesting example instead. DeepMind, the company that did AlphaGo, also built a machine that could learn to play these Atari videogames. The idea was, here you have games like Pong or Breakout, the old ’70s Atari videogames. Breakout’s a good example, because in Breakout you use a joystick to move a little paddle that hits a ball. You know, this is all in software, of course. And the ball is bouncing off the paddle to destroy bricks.
Strogatz: Oh, okay, uh-huh.
Mitchell: And so, they trained machines that could do much, much better than humans at these games. That was one of the main reasons that Google acquired DeepMind, because they were so impressed by this Atari videogame example. But then, people started playing around with it and showed that… So, say you take Breakout, the game with the paddle that you’re moving around. Okay, now suppose that you move the paddle up a few pixels on the video screen. Now, that’s a new game. Humans wouldn’t see it, really, as a new game, exactly. They would just say, “Okay, you moved the paddle up. It doesn’t really matter. I can still play.” But the machine that had learned to play it with the original paddle position couldn’t adapt because it hadn’t really learned what we humans learn about the game. It hadn’t learned sort of the concept of a paddle or the concept of a ball. It only learned about patterns of pixels.
Strogatz: Oh, yeah, uh-huh. Interesting.
Mitchell: But it was able to do much better than any human on what it had learned. But when you changed the game a little bit, it wasn’t able to adapt its intelligence. So, that kind of adaptation is getting at what we mean by general intelligence. It’s taking the concepts, learning concepts that are useful, and being able to adapt those concepts to new situations.
Strogatz: I see, truly new situations. Hmm.
Mitchell: And we can talk about this in more real-world cases like self-driving cars, if you want, where it really shows up.
Strogatz: Oh, okay. Yeah.
Mitchell: You know, if you are walking across the street, and you have a dog on a leash, does it know that the dog is going to come with you? There’s so much that we humans know that we don’t even know that we know about the way the world works. You know, when we see people crossing the street, we kind of know what their intention is. We know which way they’re going, and you can really read body language pretty well. But it’s hard for machines to learn that kind of thing, and in fact, one of the most problematic things I learned about self-driving cars is that they have trouble figuring out what counts as an obstacle. Like, if you see a big cinderblock in front of your car, you’re not going to drive over it.
Strogatz: That’s right.
Mitchell: But if you see a tumbleweed in front of your car, you might drive over it. But there are so many possible things that could be obstacles that, you know, we humans use the knowledge that we have, kind of our general intelligence, to figure out what we should stop for, whereas self-driving cars often stop suddenly where humans don’t expect them to. The most common accident involving self-driving cars is somebody rear-ends them.
Mitchell: Because they stop unexpectedly.
Strogatz: Oh, that’s interesting. Huh.
Mitchell: They’re unpredictable. And, you know, the person who rear-ended them is at fault, of course. But it’s like people don’t expect cars to drive like that.
Strogatz: Right, huh. But so — I suppose if it’s all cars that are self-driving, they’ll understand each other.
Mitchell: That’s right. So, if all the cars were self-driving and there were no humans around, including pedestrians, because they’re hard to predict, everything would be just fine. But it’s the mix of humans and self-driving cars. And, you know, humans also don’t like to ride in them because it’s very jerky, I think, a lot of the time.
Mitchell: That’s what I’ve read.
Strogatz: Oh, I’m surprised to hear that. That’s not going to be good for me.
Mitchell: No. And one of the problems is that, you know, the human engineers have to set kind of the threshold that says, like, “How certain should you be of an obstacle to make you stop?” And if you lower that threshold, then it keeps stopping all the time when there’s no obstacle. If you raise that threshold, then it might hit somebody, which actually happened, you know, with Uber driving in Arizona. They had the threshold quite high, and it hit somebody. So, where should that threshold be?
Well, we humans — I don’t think we work that way, exactly. We do a lot more — we know a lot more. We are able to use our knowledge to figure out when we should stop. And we make mistakes, but it’s just, you know, the kinds of mistakes we make are very different from self-driving cars.
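The threshold tradeoff Mitchell describes can be sketched abstractly. Everything below, the function name, the confidence scores, the numbers, is invented for illustration; real perception stacks are vastly more complicated than a single dial.

```python
# Hypothetical sketch of the stop-threshold tradeoff: the car brakes
# only when its confidence that something is an obstacle meets a
# tuned threshold. All names and numbers are made up.

def should_stop(obstacle_confidence: float, threshold: float) -> bool:
    """Brake if perception confidence meets or exceeds the threshold."""
    return obstacle_confidence >= threshold

# Invented perception scores for things seen in the road.
detections = {"cinderblock": 0.9, "tumbleweed": 0.4, "plastic bag": 0.3}

for threshold in (0.2, 0.8):
    stops = [name for name, conf in detections.items()
             if should_stop(conf, threshold)]
    print(f"threshold={threshold}: stop for {stops}")

# A low threshold brakes for the tumbleweed too (phantom stops, and
# the rear-end collisions Mitchell mentions); a high threshold risks
# not braking for a real obstacle.
```

The point of the sketch is what is missing from it: a human sidesteps the dial entirely by knowing what the object is, not just how confidently it was detected.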
Strogatz: What are some other things that we ought to be thinking about in terms of dangers? Because there is such a trend towards AI. We’re putting Alexas in our houses, and other things that talk and listen, and —
Mitchell: As you say, we have these machines in our homes and in a lot of our devices, our cars and so on that are collecting data about us and are sending it to their companies that are doing something with that data, and we don’t always know what they’re doing. And we might not always like what they’re doing. So, that’s definitely a concern. We know that in some countries, a lot of this data’s being used to do a lot of surveillance and it’s really worrisome for civil rights advocates.
One example is facial recognition. That’s getting a lot of attention these days, as the question is, you know, should… It’s really, in some sense, a boon for law enforcement. They can use facial recognition. Like, now the technology is such that you can pick out a face in a crowd, like, you know, at a football game or something in the stands and say, “Oh, match that to a criminal database,” and, like, “Yes, this person matches.” There are a lot of possible applications in law enforcement, security, letting people onto airplanes.
But there are a lot of concerns. First of all, these systems can make errors, and the fact that they’re powered by these deep neural networks that are very complicated and not transparent in their decision-making means that we don’t really understand all the errors that these systems make. But it’s also been shown that these systems can be biased, in that facial recognition systems tend to make fewer errors on, say, people with lighter skin; they’re better on males than females, or better on younger people than older people. You know, there are all these different biases that have come out.
Strogatz: Huh. Is that based on the data sets that they’re trained on? That there’s, like, overrepresented of white, male, younger people?
Mitchell: It’s partially based on the data sets. There are also some really intrinsic biases throughout the whole procedure. Like, cameras are tuned better to lighter skin, it turns out. They pick up features of lighter skin better, so the facial recognition systems can then, you know, use better features to identify them. So, there’s a lot of pushback on the use of facial recognition for various applications in security and policing, et cetera. And some cities, like San Francisco and Oakland and a few other cities, have banned it.
Strogatz: Okay, just banned it outright.
Mitchell: Just banned it outright for use in, like, law enforcement and other kinds of government applications.
Strogatz: I’m struck by this thing you said, the “non-transparency” of their decision-making process. In your world, do you call it the interpretability problem or something like that, or explainability…?
Mitchell: Explainability, interpretability. People use different terms, but the idea is that the most successful methods in AI these days are called deep neural networks. They’re sort of loosely inspired by the brain, in that they have simulated neurons and simulated connections between neurons that have different strengths.
And there can be billions of these simulated neurons and simulated connections, and they have numbers that are associated with them that are learned by the system from lots and lots of data. People are working really hard on trying to get better ways to explain what’s going on, but it’s still often quite difficult.
Strogatz: You hear this term a lot nowadays, deep neural networks. Well, “neural networks” refers to an artificial version of real neural networks, the real networks of neurons that we all have in our brains and in our nervous systems. In this context, deep neural networks are artificial ones: they can be built from transistors, or they can be pure software, but the thing is they’re modeled on the real architecture of the human brain, in that you have a lot of little elements, and they’re all hooked together in these tremendously intricate webs. So, what makes them deep is that they have many layers, sort of like the way the brain is organized. Like, if you think about the visual system, light comes in, it hits your eye, goes back through your optic nerve to another layer of cells. And so, a network is deep to the extent that it has many layers.
Even though these networks are modeled after a human brain, in a way they’re nothing like the human brain. We have a lot of trouble making sense of the solutions that they come up with, and they can’t explain it themselves.
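This layered picture can be made concrete with a miniature feedforward network. The layer sizes and random weights below are arbitrary placeholders, invented for illustration; a real deep network would have millions or billions of weights learned from data, not random ones.

```python
# Minimal sketch of a "deep" network: each layer is a weight matrix,
# and depth just means applying several layers in sequence. Weights
# here are random, not learned -- this shows structure, not training.
import math
import random

random.seed(0)

def layer(inputs, weights):
    """One layer: weighted sums squashed through tanh, one per neuron."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def random_weights(n_out, n_in):
    return [[random.uniform(-1, 1) for _ in range(n_in)]
            for _ in range(n_out)]

w1 = random_weights(4, 3)   # input layer -> first hidden layer
w2 = random_weights(4, 4)   # first hidden -> second hidden
w3 = random_weights(2, 4)   # second hidden -> output

x = [0.5, -0.1, 0.9]        # e.g., a tiny three-pixel "retina"
y = layer(layer(layer(x, w1), w2), w3)
print(y)                    # two output activations, each in (-1, 1)
```

The non-transparency problem is visible even at this scale: the output is a perfectly definite pair of numbers, but nothing in w1, w2, or w3 tells you why those numbers and not others.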
Mitchell: It’s almost incomprehensible how to make sense of what the computer’s doing, but we have these high-level programming languages that allow us to specify in human-understandable terms sort of what the computer is doing. The dream of neuroscience is to come up with something analogous to that, the high-level programming language of the brain, if you will, that makes sense — that allows us to make sense of what the system is doing in terms that we can understand. And that’s something that we don’t know how to do with AlphaGo, is to probe what its concepts are.
Strogatz: Right, right.
Mitchell: And if it even has any. [LAUGHTER] I mean, it clearly has something. It’s doing something. I think of AlphaGo as a combination of intuition and search, because it’s doing a lot of search, a lot more sort of look-ahead than a human would do. But it’s also combining that with kind of higher-level concepts about, like, is this a good kind of situation to be in?
Strogatz: I’m a little surprised that you would allow yourself to use the word intuition to describe it.
Mitchell: AlphaGo seems to have some intuition about chess or Go or, you know — AlphaZero, I should say, you know, when it’s been trained. You know, it has to learn from playing itself on millions of games, but it’s learned something about how to make a judgment, and I guess that’s kind of intuition. This is something I talk about in my book, which is, like, why do you want your kid to join the chess club at school? [LAUGHTER]
Strogatz: Okay. Yes, good question.
Mitchell: Well, it’s not because necessarily you care that much about them learning chess. You know, most kids are not going to make a living playing chess. But it’s because you think that by playing chess, it’s an activity that’s going to cause your child to learn to think better, to be smarter in some way. But AlphaZero, the chess player, hasn’t learned to think better or be smarter in anything except for chess. So, it’s like the idiot savant that can play chess and be the best in the world, but it can’t do anything else.
Strogatz: After the break, what’s the best way to be a good parent to an artificially intelligent system? Should we treat them more like our own kids? That’s ahead.
[MUSIC PLAYS FOR BREAK]
Mitchell: This is actually a very old idea in AI, and it was brought up even by Alan Turing in his 1950 paper, where he proposed the Turing test. He said probably we should raise an AI system the way we raise a child. We shouldn’t just program them to do narrow things. We shouldn’t just let them have these very narrow lives, but we should let them be a child. And this idea is now getting a lot of traction. In fact, here’s something that you may find amusing. There’s a big push through DARPA, the Defense Advanced Research Projects Agency, which funds a lot of AI research, called machine common sense. And the program is to get a machine to have the common sense of an 18-month-old baby.
Mitchell: And so, people are building computer programs that are meant to learn like babies do, and kind of go through developmental milestones the way babies do.
Strogatz: Oh, boy. Yeah?
Mitchell: So, this is just a microcosm of the big paradox of AI: we have these brilliant systems that can play chess and Go and do all these amazing things, and translate between languages and, you know, what have you. But it’s a grand challenge to get a machine to have the common sense of an 18-month-old.
Strogatz: Hmm. That’s good to remember the next time you’re reading the big headline in the newspaper, or one of those business magazines, that that’s the grand challenge, to produce an 18-month-old or something even close, probably even as smart as my dog.
Mitchell: No, nowhere near as smart as your dog, or even — in some sense, even mice or [LAUGHTER] things that we might think of as not very smart at all.
Strogatz: What about emotion? I mean, we often make a division in our mind between feeling and thought. Of course, people have both. Is part of what makes them — the computers, the AIs — so dumb so far, that they — we don’t imbue them with any emotional capacity?
Mitchell: That’s very likely. I mean, it’s hard to know because we don’t really understand ourselves how emotion impacts our own thinking. It clearly impacts it quite a bit. But, you know, when a human does a task or, like, plays a game of Go, they care. They want to win, right? And they’re upset when they lose, and the machine doesn’t even have the concept of winning and losing. It doesn’t have any sort of skin in the game, if you will. [LAUGHTER]
Strogatz: It certainly does not.
Mitchell: Yeah, and does that matter? And these machine-learning programs, you know, they’re kind of fed data in this very passive way, whereas children are very emotional about their learning. They desperately want to know certain things. They desperately want to try certain things. They get very upset when they’re denied the opportunity to do certain things, because they have very strong motivations. So, it’s a big question whether that could even make sense, to have a machine with emotions or motivations.
Strogatz: Well, from an evolutionary standpoint you can see why a baby needs to feel love to attach to its mother or parents or kin. You know, I mean, there’s all kinds of — in the language of that field, there’s a lot of selective advantage to having certain emotions for your survival. The computers don’t need emotions at the moment because they’re just comfortably plugged into their power source, and we make them play their millions of games of chess.
Mitchell: Right. A big part of modern AI is an area called reinforcement learning. In fact, that’s a big part of the way that AlphaGo and AlphaZero work, where the machine does something and it gets a “reward” if it does a good thing, but where the reward is just a number, right? It’s just like I add a number to my reward register. So, that’s kind of a very unsatisfying simulation of emotion.
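[Editor's note: Mitchell's "reward register" can be made concrete with a toy sketch. The code below is purely illustrative and invented for this note (the `Agent` class, the "left"/"right" actions, and the trivial environment are all made up, and bear no resemblance to AlphaGo's or AlphaZero's actual training systems); it only shows that the "reward" in reinforcement learning is literally a scalar added to a running total.]

```python
import random

class Agent:
    """A minimal reinforcement-learning agent (illustrative only)."""

    def __init__(self):
        self.total_reward = 0.0  # the "reward register": just a number
        self.values = {"left": 0.0, "right": 0.0}  # learned action-value estimates

    def act(self):
        # Epsilon-greedy: occasionally explore, otherwise exploit the
        # action with the highest current value estimate.
        if random.random() < 0.1:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward, lr=0.1):
        # Nudge the value estimate toward the observed reward,
        # then add the reward to the running total. "Winning" is
        # nothing more than this number getting bigger.
        self.values[action] += lr * (reward - self.values[action])
        self.total_reward += reward

def environment(action):
    # Hidden rule of this toy world: "right" pays off, "left" does not.
    return 1.0 if action == "right" else 0.0

random.seed(0)
agent = Agent()
for _ in range(1000):
    a = agent.act()
    agent.learn(a, environment(a))

print("learned values:", agent.values)
```

After enough trials the agent's estimate for "right" exceeds its estimate for "left", so it reliably picks the rewarded action, yet nothing in the program cares about winning in any sense beyond the scalar in `total_reward`.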
Mitchell: But, you know, I think no one really knows the answer to whether you need something more than that or not, whether you need… And, you know, another — there’s all kinds of other things that are important to us humans, like our social interactions, our — we learn from other people.
Strogatz: I see. So, social artificial intelligence, that’s an interesting new direction, isn’t it? I mean, is there such a thing? Do people think about that?
Mitchell: Yeah, people are thinking about it. I mean, you know, they’re trying to think about what it is that we humans have that these machines don’t have, or even animals. You know, so, there’s an area called imitation learning, where you learn — the machine learns by trying to imitate a human. But to do that, it has to kind of have a model of the human, and try and understand, like, what is the human doing at some conceptual level that will allow the machine to do it. So, it’s all very primitive, but I think the field has long been in this view that machines don’t need emotions; in fact, emotions would be detrimental, because they kind of get in the way of rational thinking. You know, if you’re driving and you have road rage, [LAUGHTER] you’re more likely to get into an accident, and are —
Strogatz: Yes. Wow, that’s interesting. The self-driving cars, we haven’t thought about what kind of road rage they might be feeling, or should be feeling from all these idiot humans doing their irrational stupid stuff out there on the road.
Mitchell: Right, and the thing is, they don’t have any road rage, and they don’t have any motivation to get to someplace fast or to — [LAUGHTER] But I think it’s a really — you know, this is really at the frontier of what people are thinking about in AI. It’s, do we need road rage or is it better not to have road rage? Or, you know, could a machine be sort of super intelligent in the sense that it’s superhumanly rational without any of our — you know, the need to sleep, or our emotions, or our cognitive biases that we have? Or are all those things necessary for intelligence?
Strogatz: The picture I’m getting from your description of all this is that we really are in the infancy of studying this field.
Mitchell: I believe that’s correct. One of the things I quoted in my book was somebody saying that general AI is 100 Nobel Prizes away, which is a good way to measure time, I think, or how far along a field is.
Strogatz: Listening to Melanie makes me really feel optimistic about the future of artificial intelligence, not so much for what it’s going to mean for society, but what it’s going to entail for the human beings who are working on it. It’s going to require such collaboration on our part. It’s really going to have to be all hands on deck, and that in itself is exciting.
Mitchell: I’m about to go off to the Santa Fe Institute for a year, and I’m going to be putting together a research program on intelligence as studied from an interdisciplinary perspective. So, I’m really excited about that, and I think that that is really what we need to do in order to get at some of these missing pieces of AI. To understand better what we mean by intelligence and what we mean — you know, not just in humans, but kind of across species, and even in organizations, societies, et cetera.
Strogatz: Oh, I see. Right, and there is talk of, yes, “smart cities.” I mean people use the word “intelligence” to refer to things that we don’t normally think of as having intelligence, but —
Mitchell: Yeah, so, like, intelligence kind of writ large. What does it mean? And how do we study it? It doesn’t seem like it makes sense just to study it in these isolated disciplinary ways, because it’s much broader than just neurons or, you know, behavior or machine learning. That’s something I’m very excited about.
Strogatz: And so the excitement is partly to think about who to invite, or what the topic should be or…?
Mitchell: Yeah, who to invite, what topics, just how to go about trying to get these people from different disciplines to talk to each other in useful ways.
Strogatz: Has this been done before?
Mitchell: Yeah, it’s been done. I mean, the whole field of cognitive science is kind of an attempt to do this. I don’t think it’s really… I mean, one of the problems with AI is that it used to be a close cousin of cognitive science, in that people would go to the same conferences and talk to each other. But AI’s kind of become a victim of its own success, in that now the methods are much more akin to statistics than to psychology. And big data, big neural networks, you know, fast computers are really the way to get these programs to work well. And it’s so successful that now people are in companies working on it for specific applications rather than in universities, thinking about more generally what intelligence is. So, I think it’s a little bit — it’s so successful that it’s become itself — AI, the field itself has become narrower rather than more general.
Strogatz: I see.
Mitchell: You know, we talk about narrow AI, but the whole field has become really focused on a particular set of methods, and has lost its contact with cognitive science to some great extent.
Strogatz: You know, normally we would talk about the advances in AI, so it sounds like you’re talking about a kind of retreat, or maybe a step to the side or a step backward?
Mitchell: Yeah, yeah, exactly. And, you know, people talk about how fast the field is moving, you know. I think I’m talking about slowing it down. [LAUGHTER]
Mitchell: I think that, you know, in some ways it’s great that it’s progressing extremely fast, but in some ways, it’s not progressing at all.
Strogatz: Next time on The Joy of x, Dr. Emery Brown on what anesthesia is teaching us about different states of consciousness.
Emery Brown: If I went to a patient and I said, “Excuse me, Mr. Jones, but I’m going to put you in a drug-induced reversible coma,” I mean, you know, he would get up and run out of the room. But it’s not fair to say, “I’m going to put you to sleep,” because you’re not asleep.
Strogatz: The Joy of x is a podcast project of Quanta Magazine. We’re produced by Story Mechanics. Our producers are Dana Bialek and Camille Peterson. Our music is composed by Yuri Weber and Charles Michelet. Ellen Horne is our executive producer. From Quanta, our editorial advisors are Thomas Lin and John Rennie. Our sound engineers are Charles Michelet and at the Cornell University Broadcast Studio, Glen Palmer and Bertrand Odom-Reed, who I like to call Bert.