Can algorithms make society more equitable? Rediet Abebe, a computer scientist at the University of California, Berkeley, has shown that data-driven machine learning can help to optimize the results of social and economic strategies. In this week’s episode, host Steven Strogatz speaks with Abebe about how her Ethiopian upbringing helps her see discrimination in the U.S. in a different light, and why her research interests and her concerns with poverty, inclusion and diversity are inseparable. This episode was produced by Dana Bialek. Read more at Quantamagazine.org. Production and original music by Story Mechanics.
Steve Strogatz: Any larger questions that we wanna get into about society and algorithms, inclusion, fairness, democracy, whatever?
Rediet Abebe: All that.
Strogatz: All that big stuff. [LAUGHS]
Abebe: All the small things, you know, the minor things we’ll just check off. It’ll be great.
Strogatz: I think you offer a unique opportunity for us to talk about the personal and the political, because a lot of our scientists don’t really —
Abebe: Yeah, about the same.
Strogatz: Oh really, you think so? The personal is the political?
Abebe: For me it is. That’s just one package. You get the research and the personal and the advocacy and the political. I don’t really separate it out for myself.
Steve Strogatz [narration]: From Quanta Magazine, this is The Joy of x. I’m Steve Strogatz. In this episode: Rediet Abebe.
Abebe: I was born and raised in Ethiopia, and I grew up in Addis Ababa, which is just a giant city, and I was randomly really interested in math.
Strogatz: It was really exciting for me to have a chance to catch up with Rediet. She’s someone I’ve known for really quite a long time. I became aware of her when she applied to graduate school at Cornell, where I’m a professor, and one of the things that was so thrilling about her file and her application was that she had letters from some of the world’s top mathematicians saying what a tremendous talent she was in abstract math.
And curiously — and for me very exciting — she wanted to make a move from abstract math into applied math. And in particular, she wanted to apply math in realms where it hadn’t traditionally been applied — areas like politics and sociology. This really struck a chord with me because I personally love to apply math in new areas, and so I kind of held out hopes that maybe we would end up working together.
As it turned out, though, she found a fantastic advisor, a colleague of mine named Jon Kleinberg, in computer science. And they did magnificent work applying ideas about optimization theory and algorithms, other parts of computer science, to really important social questions about the alleviation of poverty, questions of justice, things like that. Well, as you’ll hear, her interest in math goes back a long way.
Abebe: In middle school, we started doing geometry. I remember we were learning about all these trigonometric identities, and the thing I love about geometry — I loved it then, I still love it now — is that the idea of proofs is so evident in geometry. I still remember these weekends that I would spend just being, like, “I’m gonna look up a statement and I’m gonna try to prove to myself why this is true.” And I didn’t have the words actually. I didn’t have the word “proof.” I just … it wasn’t a thing that was said in my classrooms. It was not an idea that we had.
But to me, I was like, “Okay, this thing is true, and I can convince myself that this thing is true from these other statements.” I think, around that time, I decided, okay, I clearly really like this. It was a little bit unusual, at least from the people that I knew, that I really liked this stuff, and I thought I would like to keep doing this. And I don’t know how I learned it, but I learned that if you’re a professor, then you just get to do this for a living. And I decided, okay, that’s what I’m gonna do. I didn’t have any examples of math professors. This was not a job that I knew about.
In Ethiopia, the way that our education system worked, and continues to work to a large extent, is that you take these massive national exams at the end of 12th grade. Then, you express your preferences over the public universities, and you also express preferences over different majors. And so, what the Ministry does is, they take whatever score you get and your preferences, and they assign you to a university and a major. And because there’s a huge demand for medicine and for engineering, if you’re a high-performing student like I was, then you were likely to be assigned to those, even if you didn’t really list them as your top choices. And so, this kind of got in the way of my plan to be a mathematician because, you know, I wanted to study math, and even though I could choose that as a major, there was a chance I might not be able to do it.
And it also got in the way because I could be assigned to medicine and, you know, I was sort of like a child who kept to herself. I was like, “Oh God, I have to care for people for a job.” I’m scared of blood. I would make the world’s worst doctor. I remember feeling a real sense of panic because I was, like, “I wanna study math, and maybe that’s not a given. Also, I don’t wanna be a doctor, and maybe I’ll have to be a doctor.”
In fifth or sixth grade, I was like, all right, what’s my way out? So, I thought I have to go abroad and just do my studies there. So, I asked around. I must have done a lot of research on this, and I learned that the U.S. was better because there was a possibility of getting financial aid. There were a lot of universities here, it’s an English-speaking country, all this stuff. So, I was like, okay, I guess the U.S. is where I need to be. Then I learned, actually, if you want to get full financial aid, then you have to get into the top schools. That was going to be the only way I was going to be able to attend university. So I decided, okay, so my way out is that I have to get into these top-ranked colleges, but it was hard to envision how I would get from where I was, which was in sixth grade in an Ethiopian middle school, to a place like Harvard, which is where I ended up. And so, I was like, how do I get there?
And I learned there was this scholarship that was given by the International Community School of Addis Ababa. It’s a school where kids of ambassadors and diplomats and extremely wealthy Ethiopian people went. It’s a very expensive, very — really amazing school, but very expensive school. But they give out four scholarships every year to local students. You go through this examination process, interview process, all that, right, and then every year four people get it. So, I decided when I was in fifth grade, “Okay, you can get the scholarship to go to high school,” so I kind of basically reverse-engineered the plan. I was like —
Strogatz: Wow. You realize how remarkable this whole thing is. The story that you’re telling, that you’re planning all this stuff as a fifth grader. It’s kind of unheard of.
Abebe: It really is, and it’s just like, you know, I’m so glad about how it worked out. And in retrospect, I look back and I’m like, “what a ridiculous plan that was.” I just think about how many stars had to align. But you know, I was like 12 or 13. I was very naïve. I’m a stubborn person. I was even more stubborn when I was younger, I heard from my mom. I was like, “This is my plan, and this is what’s gonna happen and get out of my way. This is what I’m doing.” So, I made that plan, and I went with it, and it worked, right? And so, I was really excited that it worked out. In retrospect, I’m still even more excited, like kudos to 12-year-old me for being like, “This is what I’m doing.” But then that’s when it got hard.
Strogatz: It’s amazing, actually, that she was able to do this to get to Harvard from Ethiopia, and once she got there, her journey gave her a unique perspective.
Abebe: At Harvard, I was able to explore these other interests that I had. I was clearly invested in issues of, at the time, education inequality, but now it’s more socioeconomic inequality more generally. I was kinda struck by the poverty that we had back home, but even more by the poverty that we have in Cambridge and the segregation that we have in Cambridge, and things like that.
Strogatz: How did you explore this?
Abebe: I took a bunch of classes. I was a member of a lot of mentoring programs, but even… I think maybe the highlight was that I joined the Harvard Crimson, the school newspaper, as a writer, and I covered the Cambridge Public Schools and Cambridge city politics. So, I would go down to Cambridge City Hall and I would attend the meetings, the weekly meetings. I was, like, the youngest person by a couple decades. I was just… I knew the mayor. She’s still around, but I knew the mayor at the time, and I knew all the city councilors and the Cambridge School Committee members and things like that, and I just learned about the city and the issues that we were facing at the time.
One of the first meetings that I went to, Parag Pathak, an economics professor at M.I.T., had given this presentation on how the mechanism used to assign students to public schools in Cambridge was unintentionally causing discrimination, because it was doing this thing where it prioritized assigning you to schools that were closer to you. And that makes sense logically. You don’t want parents to travel far to send their kids to schools. You don’t want kids kind of just getting shipped across the city. But if you have an incredibly segregated city and lower-performing schools or lower-resource schools in low-income communities, then you cause — you exacerbate this existing inequality.
And when I arrived, I knew that Cambridge is a small city as cities go, and it had Harvard and it had M.I.T., but it also had Tufts and B.U. and Brandeis essentially within biking distance, if not walking distance. It felt like it was sort of this utopia of higher ed. But one thing that was really surprising to me was that I remember arriving at Cambridge and arriving at Harvard Square and being, like, “I don’t see any black people. Where are the black people?” It’s really interesting to be black your whole life and to not think of yourself as a black person because you’re living in Ethiopia, where basically everyone is black, right? So, you don’t think about it, and then you arrive and it’s like, oh, you being black is now going to define so much — so many of your interactions and how you navigate the system.
I remember going to Cambridge City Hall and noticing how the demographic changes as you just walk down from one block to another. If you just walk around Cambridge, you would notice the segregation immediately. It’s just a very — I mean, this is true of so many U.S. cities, but I remember thinking this is kind of unusual because in Ethiopia, we also have inequalities. We have massive income inequality, but one thing that I guess I appreciate now — I didn’t realize I appreciated then — is that at the time, at least, we didn’t segregate quite as much. And so, you would have giant houses, and then you would have plastic homes right next to it, and so in a sense, I think people were exposed to the discrimination that was happening kind of in their day-to-day. You couldn’t hide from it.
But in Cambridge, it’s… You could live in a suburb of Cambridge and just never meet anyone outside of the suburb except for people you work with, and just never know that one in 10 students are starving in classrooms. You could just never know that. That is an easy thing to not know if you wanted to not know it.
Strogatz: You’re talking about starving in the public school classrooms.
Abebe: In the public school classrooms. You could live in a neighborhood and not know that something like over 30 million people in the U.S. have lost jobs in the past six months. You could not know that. It is very easy to live in a neighborhood where you would not know a single person who lost their job. It’s just possible.
Strogatz: That is a fact of life, yep. Here it is a fact.
Abebe: That is a fact of life in the U.S. And that was quite jarring to me, honestly, because I felt like, at least when I was growing up in Ethiopia, I was sort of like, you know, we can see the problems. And also, a lot of the problems we can actually explain as a resource constraint problem. But in the U.S., and at least in Cambridge where I had lived, it was sort of like you can’t even see the problems unless you’re trying to learn about it. And also, we certainly have enough resources.
So, in Cambridge, you’re — the average black student at the time was performing two, three grades below their level. How is that acceptable in a city that has Harvard and M.I.T., and has all these other universities close by? So, it was very jarring for me to realize that these are not resource constraint problems. These are decisions that we’ve made to make things, okay, “this is not my problem” or “this is their problem,” to separate ourselves from people who are disadvantaged and marginalized. And it’s a decision that we made to invest in specific things and not in other things, right?
When I arrived at Harvard, this was August 2009. I had arrived a few days earlier to do this program called Dorm Crew. A bunch of students would go, and they would clean the dormitories before the undergraduate population arrived. And so, this was a job that I signed up for because I needed money to pay for stuff. My parents weren’t able to give me that much, and so I was like, all right, I’ll work and I’ll get money.
And so, I arrived, and I was excited obviously, right, because it’s like I made this ridiculous plan, and my ridiculous plan worked, and here I am at Harvard Square. So, it’s gonna be uphill from here. But I was really hungry, and I had exactly $40. My parents had given me $40. That was what I had.
Strogatz: I shouldn’t interrupt you, but I just wanna picture it even a little more. You’re flying across the ocean. You arrive, and it’s a big, long trip, and you got your — some kind of bag or something.
Abebe: It’s a massive trip. I have massive —. I have two — actually, I had two suitcases. One of ’em had my clothes and stuff. The other one had tea and coffee, because I was like, “I don’t trust these Americans! I don’t know what kind of coffee they have!” So, I’m bringing my own. It’s funny because at the border, they were like, “Why do you have so much coffee?” Like, I don’t know what you people have! I need to — so I had — you know, I looked out for myself. I knew what I needed.
Strogatz: One bag of clothes, one bag of coffee. And $40.
Abebe: And $40, and there was like a — it’s like a massive scarf. It’s more like a blanket than a scarf. My grandma had given it to me when I got on the plane. She was like, “You’re gonna be cold. Take this.” And so, I had this massive thing I was carrying. Not a comfortable situation. I’m dragging two giant —
Strogatz: ’Cause it’s August now, right?
Abebe: It’s August, it’s hot. I have this massive thing I have to drag with me; there are these two giant suitcases; I’m hungry; I’m tired; I’m excited because I got to be a student here. I got to the Dunkin Donuts in front of the Harvard Kennedy School, if you know — if you’ve seen it. It’s still there, actually. Every time I walk by it, I remember it, and I was, “I’ve got to eat. I’m tired. I’ve got to eat.” And I remember just thinking through, “What do I get here?” because I have $40 and things here are expensive. I thought $40 would go further. It does not. So, I remember really calculating what I was gonna get. And I think I got like a bagel and a coffee. It was a good call that I brought my coffee. It was not very good coffee —
Strogatz: People think Dunkin Donuts is good coffee, but I guess not for you.
Abebe: It’s fine, but I’m spoiled. I grew up in Ethiopia. When it comes to coffee, I’m just… I’m now, like, you know, now I’ll do it, but I remember thinking, “This is not very good.” So then, okay, so I got my coffee, and I rolled my giant suitcases up, and it was overwhelming.
I spent three days doing Dorm Crew. A lot of the dorms are old, as you know, and so they’re not air-conditioned, so it’s hot, it’s sticky. And our job was to clean stuff, and we cleaned stuff. And I remember thinking, “Wow, okay, this was not how I expected things to go,” because I had made this elaborate plan to get to where I was, and it was sort of like, okay, the way I’m gonna start it is by “I’m hungry.” And they didn’t pay us immediately because it takes time to set us up in the system, and so I was, like, “All right, I guess the $40 that I have, it’s gonna have to extend for at least four days,” maybe the week, because the dining halls weren’t gonna open. It was hard. It was like —
Strogatz: Wow, so welcome to Harvard.
Abebe: Welcome to Harvard, right.
Strogatz: This is quite a shock, yeah.
Abebe: Right, exactly. And you know, my then-roommate, who’s now my closest friend, she had taken that same period of time to do something called FOP, which is Freshman Outdoors Program. They hiked and trailed and all that stuff that people do. I don’t do that stuff. When she arrived at Harvard having done FOP, she was like, “Oh, I did this fun thing.” And I was like, “Great, I kind of starved and cleaned bathrooms.” It just set the tone, I think, for the remainder of my experience. Harvard has so many resources. Why didn’t they just provide food for people doing Dorm Crew? It’s like it’s pocket change, and it was not something that occurred to them.
I don’t know how clearly I was thinking about it at the time, because I think I was sort of in shock of all this new information I had to process, but in retrospect, it was clear to me that I was absorbing all these signals, explicit and implicit ones, about, you know, whether I was considered a student in the same way that everyone else was. But I think I was absorbing that I was a sort of second-class citizen there. And that got me really fired up because I was, like, “How dare you? I worked my way here, I worked really, really hard to get here. Like, why am I not taken care of in the same way that other students are?”
Strogatz: When we get back, how Rediet is trying to use math to help the people who need it the most. That’s ahead.
Strogatz: What’s come through so clearly in your own personal trajectory here is, sometimes the unwitting ways that American society can discriminate against people with different backgrounds or different race or whatever. I mean, it’s clear you were always passionate about these kinds of matters, but it’s now become a big part of your mathematical and scientific career.
Abebe: It has. Yes, it has. Yes, absolutely. It wasn’t until after undergrad that it all clicked for me. I had a one-year fellowship at the University of Cambridge, and I was like, “Oh, there’s a way that you could do math, but also actually think about these other things that I cared about.” I cared about what was going on in the City of Cambridge. That really, really mattered to me, and at some point, I thought that I had to choose between caring about that and caring about math, and I didn’t have to. And also just —
Strogatz: How did you realize you didn’t have to choose? I mean, it’s not like there are many role models that are doing math applied to democracy or discrimination or whatever. Justice.
Abebe: I think there were some examples. So, I mentioned to you, Parag is someone whose work I was following, and I liked what he was doing. I attended a talk by Al Roth, who has a Nobel Prize for his work on school choice and other things.
Strogatz: So, you were aware of economists doing this kind of thing.
Abebe: So, I was aware of economists, but I knew I wasn’t an economist. And what happened actually was, Laszlo Babai at the University of Chicago… I had spent a couple summers with him. Again, I met him in the most random way, where I just barged into his office and was like, “I would like to talk to you.” And he is also an intense person, and he was like, “Here’s a problem set that you can work on.” So, I had a very close kind of mentoring relationship with him.
But I remember I had talked to him a bunch, and of course he doesn’t do what I do, but I think he was paying a lot of attention to what excited me, and so he was nudging me in that direction. He really was helping me through that transition from math Ph.D. to computer science Ph.D. Because initially I was gonna do a math Ph.D., then I realized I wanted to do computer science, because I also wanted to do data-driven stuff. I wanted to do more outward-facing work, and I thought it might be easier from a computer science Ph.D. than —.
Strogatz: And that’s when we met. That’s when I remember talking to you. You were applying to grad school. I think you’d already been a grad student, but you were shifting direction, right.
Abebe: I was shifting direction, exactly, and I had a lot of support. I think that having people who did different things from what I did but supported me really, really helped. You said, “There aren’t that many role models you could look to that were doing what you do.” And yes, there weren’t as many, but I could see snippets of things that I wanted to do in different people, and I could piece that together. But more importantly, I had people who just sort of believed that I was onto something and just supported me, even if they didn’t themselves do it.
Jon Kleinberg, my advisor and your colleague, is an example of someone who does — some of what he does really resonated with me, but more importantly, I think the most important role that he played was that he sort of just believed that I was onto something, and he supported me through that.
I was kind of interested in network science, because I liked graph theory. But in networks you get to think about social interactions, so I thought, okay, this is a way for me to do graph theory, but also think about social interactions. And I learned about your work and I learned about Jon’s work. And what I really liked about both of your work is that you do really fascinating mathematics, but at its core, there are these sorts of social insights. I thought that was really amazing. That was an amazing way to engage with both of these things that really matter to me. So, I started actually doing more network stuff in the beginning, and then over time I shifted, and I thought about, “Okay, what are social processes that are important to me? I care a lot about poverty. I care a lot about inequality and discrimination.” And so, I thought if you wanted to study those, but you also wanted to have math around it, which is what I really liked also, how can you do that?
So, that got me thinking about how we measure economic welfare. A lotta times we use simple measures like people’s income, but we know from sociological work and empirical work and policy work that there’s a bunch of other things that matter.
An example would be different shocks that people experience. If you have an unexpected expense, or something as small as a parking ticket, or also as large as a medical bill, or if you had a delayed paycheck, we know that different families have different abilities to buffer these different types of shocks. So, I was thinking about, okay, so this is a sociological insight that is important and interesting, and maybe we have an ability to use mathematical models to gain further insights into what’s going on.
Strogatz: But let me ask about this last point. Because people might be thinking to themselves as they hear you say what you just said, that if you have enough money, you can buffer against a shock. If you need to pay a parking ticket, you have the money to cover it. On the other hand, if you’re poor by conventional measures, you don’t have the money. So, why is it different from just —
Abebe: Yeah, so at the extremes, that’s actually true. So, if you have a lot of money, yes, it doesn’t really matter. If you have not that much money, then okay, it’s sort of a difficult situation, but there’s a lot of families, especially in the U.S., there’s a lot of families that are sort of in the middle, and they’re like going in and out of poverty. So, they’re right below the threshold, and for those families, it matters. An example that is true, unfortunately, is that in many cities, the grad student stipend is not great, and actually there are these instances where grad students end up being eligible for all sorts of different assistance programs.
Strogatz: So, food stamps or something or other —
Abebe: Other things, right. I think that’s a separate conversation we need to have, about how much we pay grad students and why that is. But at the same time, the other thing that is important to keep in mind is that a lot of times when people think about this, they have a different type of person in mind. Maybe if you’re someone who has a comparable income but also a kid, that would be an example. Where, like, kids come with a lot of expenses, a lot of unexpected expenses. So, a lot of times, I think we use these simple measures, and we just assume that a fixed income means the same thing across different contexts. And it doesn’t, because you could have a situation where someone is susceptible to experiencing a lot of shocks, or maybe shocks that are very deep.
Strogatz: I see. So, you’re saying this could be an important — I don’t know what you wanna call it — metric or something, the ability to withstand these calamities, small or big. Shocks, as you call them.
Abebe: Yeah, and I think the other thing is also — there’s a book that I really like, called The Hidden Costs of Being African American, which talks about how if you were to track individuals whose profiles look the same… Let’s say they graduated from the same college, same degree, same type of job, whatever, but you follow them over time, what they notice is that individuals who are African American, when they experience a shock like a job loss, they might not necessarily have the same safety net as someone who is, let’s say, white. So, if you lost your job and you’re not able to pay your rent, do you have a home that you could go to with your parents or whatever, siblings or whatever? Or does that kind of maybe trigger a cycle of poverty that might be difficult to get out of? And so, there’s a lot of sociological evidence actually that a lot of these very coarse metrics that we use don’t necessarily capture the safety nets that people have accessible to them and their ability to buffer these shocks.
Strogatz: And then you were saying you also wanted to look at how you could apply your mathematical or computer science training to probe these questions more deeply, but I’m not sure how you do that. This sounds very creative. This is an innovative thing to be doing.
Abebe: Yeah. So, what we did… So, this was a paper that was presented at AAAI, which is a big AI conference, back in February 2020. So, what we did in that work is, we basically said, “Okay, let’s only take the main ingredients of this issue.” So, we have people’s income, and we have some measure of wealth, which you can think of as what you have saved in your bank account that you can draw from in case of emergency, and then let’s imagine that you have some distribution that determines the size of shocks that you might experience. So, let’s say, Steve, you’re about to experience a shock. I’m gonna draw from this distribution, and the size of the shock is gonna be determined by this distribution.
Strogatz: When you’re speaking of distribution, this isn’t as a statistician. You mean I’m suffering all kinds of slings and arrows all the time, some big, some small.
Abebe: Exactly. And I’m just gonna quantify them not by type, but just by size. I’m just gonna say something happened to you. I’m just gonna say, okay, how much is it gonna cost you? We’re just trying to get to only the main ingredients of it.
And so you have this distribution, and let’s say that you have sort of, like, a Poisson arrival process for how frequently you experience shocks. So, maybe you experience shocks more frequently than I do, or vice versa. And also, you and I can have different distributions. Maybe I experience shocks very frequently, but the intensity is smaller in magnitude. You experience shocks less frequently, but let’s say that they are quite deep when they happen. They have high magnitude. So, you can sort of vary that.
So, now what do you have? So, now you have these four parameters. You have people’s income, you have their initial wealth, you have the distribution from which these shocks are drawn, and you have some measure of the frequency with which you might experience shocks. And so now you sort of have a probability type problem. There’s your initial wealth, but then you may be adding to your reserve over time as you get more and more income, while you’re also experiencing some shocks that might be dipping into that reserve a little bit.
So, at any point, you have some sort of reserve, and let’s say that if your reserve falls below a certain threshold, now you’ve experienced ruin. And this is something that we observe. We’ve seen situations where people are barely getting by and then a small thing happens, like a parking ticket, and that leads to them being evicted. When you’re just above this kind of threshold, even a small thing can really have disastrous consequences.
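The model Abebe describes here can be illustrated with a small simulation. This is a hypothetical sketch, not the paper’s actual model: the parameter values are invented, and shocks arrive via a simple per-period probability (a discrete stand-in for a Poisson arrival process) with exponentially distributed sizes.

```python
import random

def simulate_ruin(income, wealth, shock_rate, shock_mean,
                  ruin_threshold=0.0, periods=120, seed=0):
    """Track one household's reserve over time and report when ruin occurs.

    Each period the household adds its income to its reserve. A shock hits
    with probability shock_rate per period (a discrete stand-in for a
    Poisson arrival process), with size drawn from an exponential
    distribution of mean shock_mean. Returns the first period in which the
    reserve falls below ruin_threshold, or None if it never does.
    """
    rng = random.Random(seed)
    reserve = wealth
    for t in range(periods):
        reserve += income                      # paycheck arrives
        if rng.random() < shock_rate:          # a shock hits this period
            reserve -= rng.expovariate(1.0 / shock_mean)
        if reserve < ruin_threshold:
            return t                           # ruined in period t
    return None                                # survived the whole horizon

# Two hypothetical households with the same income and savings:
# one faces frequent small shocks, the other rare but deep ones.
frequent_small = simulate_ruin(income=100, wealth=200, shock_rate=0.8, shock_mean=90)
rare_deep = simulate_ruin(income=100, wealth=200, shock_rate=0.1, shock_mean=700)
```

With identical income and initial wealth, the two households can face very different ruin risk depending on the frequency and depth of their shocks, which is the point of the model.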
Strogatz: I see. So, like in just common language, when you’ve hit this ruin threshold, you suddenly go to being really poor, and it’s like a whole new life. You’re gonna —
Abebe: Yeah, so it means that you’re not able to pay your rent, it means that you’re not able to pay your medical bill, maybe you have a debt that you’re no longer able to pay, and how much you have to pay back is sort of compounding. Something really terrible is happening here. We wanna make sure that people don’t experience it.
Strogatz: Yes, I see.
Abebe: Right, so this is where the fun stuff happens in terms of just the math.
Right, now you get to think about allocation of subsidies. So, we have a lot of programs in the U.S. Food stamps, housing vouchers, things like that. Basically, things that are meant to serve as a sort of income supplement, in a sense, because vouchers might make the cost of rent cheaper to you, food stamps make the cost of groceries cheaper, and things like that.
Strogatz: So, these are the parts of the safety net you’re talking about.
Abebe: Yes, exactly. Let’s again map them to, sort of, income subsidy. You’re just imagining that you as a planner — in this case in government, but if you think of whatever planner — have a fixed budget and you’re trying to allocate that among people to serve as a sort of income subsidy. And what we’ve done so far in the U.S. is, we usually use just people’s income, right? We just say, if your income is below this, then we’re going to assume that you need assistance and we’re going to give subsidies. If you’re above, let us know when you fall below. That’s sort of like the —
Strogatz: Now I see where you’re going with all this. That you might imagine you’re just gonna give the money to the poorest people, but actually there might be people who are less poor by this measure, but they’re actually poorer by resilience to shocks.
Abebe: Exactly. And so basically, what we found is exactly that. You can set up an optimization problem of income subsidy under a given objective… So, the objective that we used was a min-sum objective. Basically, you’re saying, “I would like to minimize the expected number of people that experienced ruin.” Let’s say that’s your objective, and you have a fixed budget for giving out income subsidies. How do you optimally allocate that income subsidy to minimize the expected number of people that experience ruin?
Strogatz: Hold on, let me make sure I really get that. So, like some people who have taken economics or politics courses will have heard about utilitarianism, where we are concerned with not necessarily individual people, but we kind of add up how many people are in a bad situation, and we’re very concerned with — as you say in the language of economics here — we’re trying to minimize the number of people who are gonna get ruined. That could be one objective. Not that we wanna make the poorest person really rich. That could be a different objective, but we’re not doing that. We’re just saying, “Let’s try to keep as few people as possible from getting ruined.” Then under that you have some algorithm that says how to do this?
Abebe: Yes. Under this min-sum objective, the version that you mentioned where you’re minimizing the expected number of people that experience ruin, under natural assumptions, you can sort of solve this problem optimally, and it gives you the optimal way to allocate your income subsidy.
So now, what you can do is you can compare that with the version that only uses people’s income. You can say, “What if I didn’t know anything about people’s income shocks?” Similar to what we do in the U.S. here when we allocate subsidies. We just had people’s income, and you just set some threshold, and you were like, “I’m just gonna give assistance to people that are below this threshold and not the ones who are above.” So that’s a different allocation problem. Or it’s the same allocation problem, but in one case you’re only using information about people’s income. In the second case, you’re using information about people’s income, but also their experiences with these different types of shocks.
And what we found in this situation is that this optimization problem that I mentioned to you, that has these two versions — one with only income and the other one with income but also different types of shocks — can actually yield drastically different solutions. So, it can actually tell you in the version where you only had people’s income, you could say, “Okay, I’m just gonna give a subsidy to the people who have the lowest income, and that’s what I’m gonna do. That’s the information I have.” But actually, maybe there’s people whose income is a little bit higher than that but actually are not so resilient to different types of shocks. But you wouldn’t be detecting them in this measure that only had people’s income.
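[The contrast between the two allocation rules can be made concrete with a small sketch. This is a toy model with invented numbers, not the formulation from Abebe’s paper: each hypothetical person has an income and a fixed list of possible shock sizes, “ruin” means a shock exceeds income plus subsidy, and the whole budget goes to a single person for simplicity.]

```python
def ruin_probability(income, subsidy, shocks):
    """Fraction of possible shocks that would exceed income plus subsidy."""
    return sum(s > income + subsidy for s in shocks) / len(shocks)

people = [
    # (name, income, possible shock sizes)
    # Person B earns more than A but faces much larger shocks.
    ("A", 100, [50, 60, 70]),
    ("B", 120, [150, 160, 170]),
    ("C", 200, [20, 30, 40]),
]

budget = 60

# Income-only rule: give the whole budget to the lowest-income person.
income_only = {name: 0 for name, _, _ in people}
poorest = min(people, key=lambda p: p[1])[0]
income_only[poorest] = budget

# Shock-aware rule: give the budget to whoever's ruin probability
# drops the most when they receive it.
shock_aware = {name: 0 for name, _, _ in people}
best = max(
    people,
    key=lambda p: ruin_probability(p[1], 0, p[2])
    - ruin_probability(p[1], budget, p[2]),
)[0]
shock_aware[best] = budget

def expected_ruin(alloc):
    """Expected number of people who experience ruin under an allocation."""
    return sum(ruin_probability(inc, alloc[name], sh) for name, inc, sh in people)

print("income-only helps:", poorest, "-> expected ruin", expected_ruin(income_only))
print("shock-aware helps:", best, "-> expected ruin", expected_ruin(shock_aware))
```

[In this contrived example, the income-only rule subsidizes person A, who was never at risk, while the shock-aware rule subsidizes person B, whose higher income masks much larger shocks — the “drastically different solutions” described above.]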
Strogatz: I guess the thing that I’m wondering as you tell this is, it’s sort of straightforward to imagine how we would measure people’s income, but I’m not sure how do we know people’s ability to withstand shocks. Is there a metric for that? Or is that part of the problem that we don’t measure it, or we don’t even know how to measure it?
Abebe: So, in the first paper that I mentioned to you, we assume that we can measure it, and we just say, “I can see when a shock has arrived and I can just sort of fit a distribution to that,” or “I know the distribution from which I’m drawing shocks,” so we assume that that’s true. But in separate work, actually — and this is forthcoming shortly — we have been working with a massive data set that encodes people’s experiences with different types of shocks. So far, we’ve just been calling them shocks, but there are actually many different types. It could be you lost your job, it could be you have a medical expense, it could be that you were a victim of a crime, it could be that there’s a romantic relationship that failed. There are all these different types of shocks, and so we’re working with this massive data set that encodes this type of information, and we’re looking at using data-driven methods to understand what patterns we see there as well.
Strogatz: Oh, I see. So, wait, so from that kind of data, then you can also see what became of those people, like whether they needed certain kinds of federal assistance or whatever.
Abebe: Exactly. Did they end up getting evicted, and things like that. I mean, that’s an entire separate direction that I think is also fascinating, because we have these data sets, and really, ultimately, we’re trying to make causal statements, but you can’t really do that with these types of data sets. There’s so many different types of constraints. Then you might say, “Well, maybe prediction is fine, let’s try to predict stuff,” but we know that with issues related to poverty, state-of-the-art machine learning algorithms trained on massive data sets and thousands of features actually don’t do so well. They do really, really badly. This is an established observation now. When you have these data sets, and you know the causal inference is difficult, and you know that prediction is not gonna get you far, how can you still extract patterns that can inform poverty alleviation programs? So, that’s like the question that we focused on from the data-driven side.
Strogatz: So, if we can just pan back a little to common-sense ways of talking about the things that you’re talking about: it’s like we as government planners — or “social engineers,” as some people would call them, often derisively — are doing social engineering here. But it sounds like you’re trying to use the tools of math and computer science and optimization theory — and machine learning now, and data science, by looking at real data — to think: if we have a certain objective, which may be to help the person in the most precarious situation as much as possible, or to help as many people as possible who are in dangerous situations. I mean, you can choose. That’s a question of values. But you have rational ways of approaching this now. In fact, not just rational, but probably the best ways, given certain values.
Abebe: A lot of times when we talk about poverty alleviation, what happens is that we say things like, “We want to help the most number of people.” And no one specifically says what that means. They just say “the most number of people.” And it’s like, well, okay, I told you these two objective functions. In a sense, they could translate to the most number of people, depending on what you mean.
And so, a lot of times, this job of translating these policy objectives into mathematical objective functions — we don’t have a very good process for it. Oftentimes, that actually gets delegated to someone in the background that we don’t even know. And so, a lot of times, we don’t see the importance of this translation from the policy objectives that we talk about to the mathematical objectives, where you could actually be doing the exact opposite thing, depending on which one you pick.
A lot of what happens is that the people who are doing the optimization that I mentioned to you just say, “Okay, just tell me what the problem is. I’ll translate it into math, and I’ll just do the math and I’ll tell you what to do,” or “I’ll do the algorithms and I’ll tell you what to do.” That community doesn’t always engage with the policy side of what’s going on. So, sometimes we do these very consequential translations from policy into mathematical objectives, and we don’t even know what we’ve done. We don’t even know how consequential that decision was.
And likewise, I think in the reverse, we don’t necessarily translate what we’ve done to the community and say, “Just to let you know, you said help the most number of people. The way I wrote it down is like this. Does that sound reasonable to you?” And so, there’s this gap that exists between policy and the sort of algorithmic optimization community. And many things could fall through the cracks. They did in this very simplified model, right?
Strogatz: Because you’re saying there are two reasonable objectives that we might have, and we end up helping totally different sets of people.
Abebe: Yeah, exactly.
Strogatz: So, what is the lesson of that? That we could have good intentions, but if we formulate it two different ways that might sound almost the same, they’re not. You end up with a totally different prescription of who should get the money.
Abebe: Yeah, exactly. So, I guess this goes back to the very first thing that we talked about, which is… You mentioned to me, you know, we can talk about your research, but we can also talk about politics and all that. And I told you, actually, I see all of these as one. Because the way I see it is that this decision about how you set your objective function or what information you take about families or what intervention you’re considering, that was a very political decision, and we don’t necessarily appreciate it to be that. And so, the way I see my role as a researcher, and also as a person, is to have enough familiarity about all the different angles of the problems that I’m working on, both the math side of it, of course, but also the data-driven side of it, but also the policy side of it, but also engaging with affected communities, so I can appreciate the gravity of each of these decisions that we’re making.
Strogatz: This is such a holistic look at some of the biggest problems facing our society and our world, is what I’m hearing. Instead of carving out just the math part of it or just the algorithm part of it, you are trying to think about the politics of it, the sociology, the data.
Abebe: Exactly. And also, just the personal side. Why am I studying poverty? It could be that there’s a universe where I would have grown up to be very well off, and I would have been interested in these problems. It’s entirely possible, but I know at least a little bit I’m really deeply invested in this because I grew up poor, and a lot of people important to me grew up poor. And so, getting it right feels extremely personal to me.
Strogatz: Yes, it does, I’m sure. Wow. But you’ve moved into these circles now where maybe nobody knows anyone poor.
Abebe: Yeah, I mean, it’s hard because you end up in these situations where — and this is why I’m so excited to be a professor — because I feel like, for many students, maybe they’ll be like, “That’s nice that she’s here, I guess.” But I think for a few students, I think it may mean a lot that I’m there, because I remember when I was in college and my friends would be like, “Oh, let’s go get lunch at whatever place,” and I’m like, “Okay, lunch there costs $20. I don’t have $20 to spend on lunch.”
And so, I think that it’s like these little things that aren’t meant to be alienating to people that actually can end up being alienating, and I don’t wanna focus on these kinds of minor stories. I think there are deeper structural problems that we have in higher ed that make that cut even deeper. Just about how we set up our education, how we set up our — even like our courses, our office hours, our — how we determine who can major in what. There are all these much deeper structural problems.
When I applied for faculty positions, I wrote a teaching statement, I wrote a research statement, and I wrote a diversity and equity and inclusion statement. That’s what’s asked of you. And to me, I’m like, honestly, they’re all the same. I would have rather just written one giant document for you that shows you that to me it’s all the same. And you know, it’s fine that…
Actually, I think people should do whatever feels good to them when it comes to research. It’s fine if, in some situations, you can actually separate out these different aspects of your job and say, “Okay, here’s the diversity component, here’s the teaching component, here’s the research component, and I can compartmentalize it.” But for me, I just can’t. It is not possible for me to compartmentalize it in that way, and I think we need to make space for people like myself who — for whom it’s just a holistic job. It’s one thing. I see it as one thing.
Strogatz: Wow. This is a wonderful summary. I feel like I wanna drop the microphone right there.
Abebe: The virtual microphone. Yeah. I’ll make a bang just so we simulate the microphone drop. Yeah.
Strogatz: Next time on The Joy of x, Trachette Jackson explains what a cancerous tumor looks like to a mathematical oncologist.
Strogatz: I realize it’s wrong, but just to have something in our head. Like picture a big roast beef or something like that.
Trachette Jackson: If I could change our picture just a little bit, maybe just think about — this is how we actually modeled it as concentric cylinders. So, a blood vessel is a cylinder and it’s got — and it feeds a certain radius of tissue around it, and so you would inject these molecules into the blood vessel, the inner cylinder, and you would watch it diffuse out into the surrounding tissue that that blood vessel feeds.
Strogatz: So, there’s the cylinder and there’s the stuff just oozing out radially from the cylinder. Okay, well I wanted to have a picture.
Jackson: I like the roast beef too, but —
Strogatz: The Joy of x is a podcast project of Quanta Magazine. We’re produced by Story Mechanics. Our producers are Dana Bialek and Camille Peterson. Our music is composed by Yuri Weber and Charles Michelet. Ellen Horne is our executive producer. From Quanta, our editorial advisors are Thomas Lin and John Rennie. Our sound engineers are Charles Michelet and at the Cornell University Broadcast Studio, Glen Palmer and Bertrand Odom-Reed, who I like to call Bert.
[END OF AUDIO]