On Being with Krista Tippett

Latanya Sweeney

On Shaping Technology to Human Purpose

Original Air Date

October 26, 2023

You may not know Latanya Sweeney’s name, but as much as any other single person — and with good humor and grace as well as brilliance — she has led on the frontier of our gradual understanding of how far from anonymous you and I are in almost any database we inhabit, and how far from neutral all the algorithms by which we increasingly navigate our lives.

In this conversation with Krista, she brings a helpful big-picture view to our lives with technology, seeing how far we’ve come — and not — since the advent of the internet, and setting that in the context of history both industrial and digital. She insists that we don’t have to accept the harms of digital technology in order to reap its benefits — and she sees very clearly the work that will take. From where she sits, the new generative AI is in equal measure an exciting and alarming evolution. And she shares with us the questions she is asking, and how she and her students and the emerging field of Public Interest Technology might help us all make sense.

This is the second in an ongoing, occasional series of On Being episodes to delve into and accompany our lives with this new technological revolution — training clear eyes on downsides and dangers while cultivating an attention to how we might elevate the new frontier of AI — and how, in fact, it might invite us more deeply into our humanity.


Guest


Latanya Sweeney is the Daniel Paul Professor of the Practice of Government and Technology at the Harvard Kennedy School, among her many other credentials. She’s founder and director of Harvard’s Public Interest Tech Lab and its Data Privacy Lab, and she’s the former Chief Technology Officer at the U.S. Federal Trade Commission.

Transcript

Transcription by Alletta Cooper

Krista Tippett: This is the second in an ongoing, occasional series of On Being episodes to delve into and accompany our lives with technology — training clear eyes on downsides and dangers while cultivating an attention to how we might elevate the new frontier of AI — how, in fact, it might invite us more deeply into our humanity.

We started with Reid Hoffman, a philosophical mind in Silicon Valley. Today I’m so happy to introduce you to Latanya Sweeney of Harvard. She might be the person who’s touched your life online more than anyone whose name you don’t know. Latanya is one of those people I love talking to — someone who’s been present at the genesis of her field. Like neuroscience, computer science has only emerged in the last handful of decades, and Latanya Sweeney has been there, attending its birth pangs and adolescent crises. As much as any other single person — and with good humor and grace as well as brilliance — she has led on the frontier of our gradual understanding of how far from anonymous you and I are in almost any database we inhabit, and how far from neutral all the algorithms by which we increasingly navigate our lives.

So Latanya Sweeney brings a really helpful big-picture view, seeing how far we’ve come —and not — since the advent of the internet, and setting that in the context of history both industrial and digital. She insists that we don’t have to accept the harms of digital technology in order to reap its benefits, and she sees very clearly the work that will take. From where she sits, the new generative AI is in equal measure an exciting and alarming evolution. And she shares with us the questions she is asking, and how she and her students and the emerging field of Public Interest Technology might help us all make sense.

[music: “Seven League Boots” by Zoë Keating]

I’m Krista Tippett, and this is On Being. 

Among her many credentials, Latanya Sweeney is the Daniel Paul Professor of the Practice of Government and Technology at the Harvard Kennedy School. She’s founder and director of Harvard’s Public Interest Tech Lab and its Data Privacy Lab, and she’s the former Chief Technology Officer at the U.S. Federal Trade Commission.

Tippett: Hi, Latanya?

Latanya Sweeney: Hi, how are you?

Tippett: Oh, so glad you’re here.

Sweeney: It’s wonderful to be here and have the opportunity to talk with you.

Tippett: I’ve really been looking forward to it. Do you have any questions for me before we start? Anything at all?

Sweeney: No, I’m just excited to have a conversation with you.

Tippett: Okay. Zack? Okay, I got my thumbs up. So one of the things I’m always interested in is where the seeds of the passions and the questions that drive someone root in a person’s earliest life. And so I kind of want to circle around that as we begin. I know you grew up in Nashville. Were you born in Nashville?

Sweeney: I was born in Nashville in 1959, so you have to go back in time.

Tippett: [laughs] Yeah. So I’ve been reading other interviews you’ve done and the ubiquitous YouTube videos of our time. I’ve heard you saying that even as a young girl you loved mathematics. You knew at some point you wanted to be a computer scientist. Is that right?

Sweeney: Yeah, that’s absolutely correct, which was really odd, because no one I knew in my neighborhood wanted to do math. [laughs] Kids were dreaming of being policemen and firemen and so forth, but I was the only one who just had this crazy passion for math. And I think that also predisposed me when I first did encounter, in high school, a computer science course — or computer programming course, to be more exact — and it really was transformative in my life.

Tippett: I feel like a lot of what we’re going to talk about is this thread that runs through, this insistence, that you bring to your life and to your work about pursuing the practical and the moral good that is possible in our lives with technology, even as you have wide-open eyes and are applying a fierce intelligence to attending to what goes wrong and what can go wrong. And so I’m also just really curious about if you think about how this moral compass was planted in you. Was there in that background of your childhood a moral or spiritual or religious formation — however you would define that now?

Sweeney: Yeah, I also think it may have been part of what drew me to math in the first place. I was raised by my great-grandparents. They were born in 1899 and 1900.

Tippett: Oh my gosh.

Sweeney: And I was, of course, the only person I’ve ever known who was raised by their great-grandparents. Everyone else in my neighborhood had a mother, a father, and siblings. And here I was. It was just such an unusual arrangement at the time. It just seemed like everything in my life was messy like that. And I really liked the certainty of math — the idea that, at least back then, in that level of mathematics, there was a right answer. [laughter] I think it really brought certainty to my life, which seemed really uncertain. I think that’s part of what drew me to math.

And the other piece, of course, is that there was a tremendous arc of history that my great-grandparents were able to share with me. And sometimes we would muse about their 1899 origin and maybe my 2050 ending, and what the big lessons learned were, and what the arcs were. Where did the arc of history leave them? You have to realize they survived. They spent their young adulthood in the South under Jim Crow laws, and yet they were positive people.

Somehow, their better angels just really showed all the time, which I thought was pretty amazing. But they had learned lessons in life, and I just think that added to this idea of a kind of black-and-white, do-the-right-thing, guiding oneself and belief in oneself.

Tippett: That’s extraordinary. It’s also true — just to keep going here — that in 2001, so actually more than 100 years after your great-grandparents were born, you became the first African American woman to earn a PhD in computer science at MIT.

Sweeney: Yeah. It’s a sad state of affairs for MIT. [laughter] That’s not a good thing, right? That’s absolutely not a good thing. I don’t think the numbers have improved much since, either. And what’s really interesting is that that also has played a big part in shaping the work that I do. Being often the only Black person or the only woman in the room, and recognizing for the world that the technology has lots of values baked into its design — but those values pretty much come from 20-year-old white guys who don’t have families. [laughter] And everyone else in society is struggling when their use case wasn’t really a part of the base design, because nobody else was in the room.

Tippett: I was thinking as I was getting ready to talk to you about how — So I started this show 20 years ago, but actually started piloting before that. So I like to say, “At the turn of the century.” [laughter] Right around the same time that you were breaking that embarrassing milestone. And the conversation that I was having 20 years ago or across these decades — and it’s so interesting to remember even that’s pre-social media — so the conversation was about the internet.

Sweeney: Yes, that’s right.

Tippett: And so the technological revolution was the internet. And I’ve always been seeking out people who bring wisdom to this, and who are thoughtful, and who were thinking in moral terms and about social implications. One of the ideas that came through that really has shaped the way I’ve approached both my life with technology and this conversation is the idea that as all-encompassing and dramatic as these technologies have been in how they’ve landed in our lives — and even for those of us who were kind of in the middle of our lives, and then suddenly we’re in a new country — these technologies, the internet as we would say 10, 15 years ago, was and still is in its infancy, and that we remain, even though it doesn’t feel like this, the grownups in the room, and that it has been ours to shape these technologies to human purpose.

I feel like this is precisely the lens that you took on, maybe for all the reasons that you and I just went over. That really feels to me like it runs through all these various things you’ve done, what you teach, but also founding the Public Interest Tech Lab and working for the U.S. Federal Trade Commission, a professor of the practice of government and technology, and the work you’re doing with technology in the civic space. Does that sound to you like an accurate way to talk about your lens of approach on all of this?

Sweeney: Yeah, it’s pretty consistent. In many ways, just as my passion for math led me to computer science, my love of computer science led me to realize that the world was changing. It was really clear by the time I was a graduate student that there was a revolution coming, and it was going to change everything. But in my naiveté at the time, and in my excitement as a graduate student, I said, “Yeah, but it’s going to right all of the wrongs of society. It’s going to make everything right. After all, technology doesn’t see race, it doesn’t see gender, it’s cheap. It can be easily reproduced.”

I just thought it was going to lead us all to a better democracy, a more perfect world. And so in many ways, I think, now that the decades have rolled by, my pursuit from the graduate student years on has really been the same. And that is how to make technology deliver on that promise, on that vision. Just as math gave me certainty and comfort, I want society to have technology without all the harms. And that’s absolutely possible, because most of the harms that we experience are arbitrary or added on, and they don’t have to be that way.

Tippett: And I think that idealism, and optimism, and those rose-colored glasses that you talked about — I think that that was true of society as a whole. We went into this very optimistically.

Sweeney: Yeah, but I think the difference was I felt like, “Wait a second, this guy’s messing up.” And meanwhile, like you’re saying, society says, “This is the best thing since apple pie. Don’t tell me about these problems, Latanya, we want to just keep using this shiny new thing.” And I’m like, “Yeah, but you can use the shiny new thing, but the shiny new thing needs to behave itself. It needs to be doing this right.” [laughs]

Tippett: Right. You’re the grownup.

Sweeney: Exactly. And you know what’s really funny: my first professorship was at Carnegie Mellon, and I would teach a class called Data Privacy, and students would take my class and I would wonder why they even bothered to sign up, because they didn’t believe in privacy. And in particular, what that meant was it wasn’t that they didn’t believe in privacy; they didn’t believe that the technology would violate their privacy. And they were arguing, “But if privacy on Facebook were a problem, we’d all be in it together. It would be a bigger deal,” they’d say. But it was literally society just in love with it.

Now let’s fast forward to today. Today I teach classes, the students call them the “Save the World” classes. And now the students — before I can even bring up privacy as one of the kinds of clashes that we talk about with technology and society — the students will bring up privacy very quickly and they’ll even use examples from Facebook and I’ll look like, “What just happened?”

Tippett: Really.

Sweeney: And then they’ll go on to talk about the stealth activities they have to undertake in order to have privacy on Facebook. And so when I then give to them what my earlier students gave to me as questions, and I ask them, “What happened?” They said, “Oh, that’s because that was my parents’ generation.” And then I look and I say, “Oh my God, that’s true.” [laughs]

Tippett: You actually had a kind of big turning point of your own — a before and an after moment — in 1997, which I guess was your version of this pivot, which involved — where did I read that? You said you met an ethicist who told of a grim future where technology rules our lives.

Sweeney: Yeah, it was pretty amazing. I was a graduate student at the time at MIT, and I was sort of walking through the lounge on my way to get some coffee, and I hear this ethicist say, “Computers are evil.” Now, you have to remember, as a graduate student, I’m thinking, “Oh my God, this amazing world is about to unfold.” And now I hear this person say, “Computers are evil.” I’m like, “I’ve got to stop and fix her thinking. Doesn’t she understand what technology’s about?”

And so she and I get engaged in this conversation, and it’s, like, 10,000 feet up. And eventually, we started coming down to some concrete examples, and she names one in particular. She says, “Well, look at this. There’s health insurance that was given out to state employees, their families, and retirees. All of their hospital records were included in this. And a copy has already been given to researchers and another copy sold to industry.”

And I said, “Yeah, but look, oh my God, if that’s done at scale, we could save lives sooner. We could find better ways to cut costs. We could come up with hypotheses related to illness and disease and treatment.” And she says, “Yeah, that’s all true. If the data’s anonymous, that would be great, but if the data’s not anonymous, then people could identify our judges and they could blackmail them.” And she went on to talk about all the ways that the data could be used to undercut our expectations in society. And she literally foretold the future, about not just that technology but other technologies breaking our social contracts.

So now in my eagerness, I’m like, “Well, let me explain. I’m sure that data is just fine,” I tell her. [laughs] So I look at the data, and in the demographics it has month, day, and year of birth, gender, and five-digit zip code. And so I do this quick calculation in my head. There are 365 days in a year, let’s say people live 100 years. And there were two genders in the database. If you multiply that out, that’s 73,000 possible unique combinations. But I happen to know that the typical five-digit zip code in Massachusetts only had about 25,000 people. And so that meant that that data would probably be unique for most everyone that was there in the data. So now my hopes are fading.
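
To follow that arithmetic concretely, here is a minimal sketch in Python, using only the figures quoted in the conversation:

    days_per_year = 365
    years = 100        # assume lifespans of up to 100 years
    genders = 2        # the database recorded two genders

    # Possible (date of birth, gender) combinations within one zip code
    combinations = days_per_year * years * genders
    print(combinations)                    # 73000

    # Typical five-digit Massachusetts zip code population at the time
    people_per_zip = 25_000

    # With ~73,000 pigeonholes for ~25,000 people, most residents occupy
    # a (date of birth, gender, zip) slot alone, i.e., the triple is
    # unique for most everyone in the data.
    print(people_per_zip < combinations)   # True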

Tippett: So this is a way — because I had trouble just kind of comprehending. So here’s another way I think you said: 87% of the population of the United States can be uniquely identified by only their date of birth, gender, and five-digit zip code. Is that right?

Sweeney: Yeah, exactly.

Tippett: Which is stunning to say it that way.

Sweeney: Yeah, and it was an amazing situation. In that particular data set, I used as an example William Weld. When he was the governor of Massachusetts, only six people had his date of birth. Only three of them were men, and he was the only one in his five-digit zip code. So by linking the voter data with the health data on those same fields, you could put his name uniquely to his record. And then, like you said, using 1990 census data, we estimated that 87% of the population were kind of like Governor Weld.

What was pretty amazing, though, is that about a month later, I was testifying down in D.C. And about three to six months later, laws around the world were changing, citing that example. It’s often called the “Weld Experiment.” But it was about how society wasn’t aware of the ways that these changes in technology would undercut our expectations for all kinds of values. And now, of course, even democracy itself.
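
For illustration, here is a hypothetical sketch in Python (with pandas) of the kind of join she describes. The records, names, and column names are all invented:

    import pandas as pd

    # "Anonymous" hospital records: no names, just demographics + diagnosis.
    health = pd.DataFrame([
        {"dob": "1950-06-01", "gender": "M", "zip": "02138", "diagnosis": "flu"},
        {"dob": "1962-03-02", "gender": "F", "zip": "02139", "diagnosis": "asthma"},
    ])

    # A public voter roll carrying the same three demographic fields.
    voters = pd.DataFrame([
        {"name": "J. Example", "dob": "1950-06-01", "gender": "M", "zip": "02138"},
        {"name": "A. Sample", "dob": "1971-11-20", "gender": "F", "zip": "02139"},
    ])

    # The merge attaches a voter's name to any health record sharing the
    # same (dob, gender, zip) triple; no explicit identifiers are needed.
    reidentified = health.merge(voters, on=["dob", "gender", "zip"])
    print(reidentified[["name", "diagnosis"]])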

[music: “Vik Fence Sahder” by Blue Dot Sessions]

Tippett: I want to read something actually just because it’s fun to read, [laughs] but as we keep going, it also just takes us a little deeper into this. This was in Scientific American. It was an article about you and I guess it was when you were at Carnegie Mellon. Do you remember this?

Sweeney: Yes, definitely. [laughter]

Tippett: There’s a visual. Radio’s the most visual medium, so get ready. “Latanya Sweeney attracts a lot of attention. It could be because of her deep affection for esoteric and cunning mathematics. Or maybe it is the black leather outfit she wears while riding her Honda VTX 1300 motorcycle around the sedate campus of Carnegie Mellon University, where she directs the Laboratory for International Data Privacy. Whatever the case, Sweeney suspects the attention helps to explain her fascination with protecting people’s privacy. Because at the heart of her work lies a nagging question: Is it possible to maintain privacy, freedom and safety in today’s security-centric, databased world where identities sit ripe for the plucking?”

[laughter]

Sweeney: Yes, I still ride the motorcycle.

Tippett: Do you?

Sweeney: But I’ve updated to an Indian Springfield. But yes.

Tippett: I’m glad to know that. You often bring a historical perspective into conversations: the context that this new technological state we’re in is a new industrial revolution. These companies, these digital technologies, are not like what we’ve had before. I find that really helpful in so many ways. Is that something that you became aware of gradually, or how did that start to dawn on you? How has it helped you put all of this into perspective?

Sweeney: Well, what made me start to think, “Surely society’s experienced something like this before” — when it first started, it was data privacy, starting with the Weld Experiment. But then we look up, and there are these discrimination and bias problems in algorithms. And I was first to do some work in that and shed light on how algorithms that are supposed to be doing statistical decision-making are actually not, and then we really have to question whether they’re giving us the appropriate answers. And I had so many graphic examples, and others came and showed even more. And so that’s been a real problem.

So by the time we got to the third wave around some of the democracy and election work and how technology was undercutting our democracy and so forth, you start to realize that, “Oh my God, the number of problems is just growing exponentially.”

And so the question was: where else has society experienced something like this? And so as I began to look historically, I started looking first technology by technology. And I’m like, “No, but this is bigger than that. It’s bigger than a television. It’s not just communication. It’s bigger than the printing press.” [laughs]

And so that’s when I came to realize, as I was reading about the history and the impact of the Second Industrial Revolution, that historians themselves had been calling the times we’re going through the Third Industrial Revolution. And they put the start date back at 1950, with the invention of the semiconductor. And then if you think about it, our iPhones and iPods, the Internet of Things, the internet — all of these things are sort of revolutions within this revolution. And now, as we are looking at generative AI, it’s yet another revolution. And it’s changed everything already: how we live, how we work, how we play, how we communicate with each other. And the end is nowhere in sight. We don’t know when this is going to end. And yet the earliest of the clashes, like privacy, are still not resolved.

Tippett: So, I want to talk about generative AI. Before we do that though, I just want to say something that I came to understand about this matter of what makes our revolution different from previous ones — and perhaps you were drawing on historians — but I heard you explaining this in a way that was really helpful. So when the car was invented, for example. [laughter] With previous technologies, there was a runway. There was time with the car or the camera or the telephone or even the printing press between the conception of something, the design of something, the distribution of something, and it becoming part of people’s lives. And that in that time there was deliberation on what could go wrong, that there was time and…

Sweeney: It was time and also it required a social contract. So for example, you had cars, but who owned the roads? The roads, they were — primarily horses were using them. They were made out of dirt. There was nothing but potholes, and to try to run cars over it — that meant we needed cities and others to invest in actually building roads. That was a negotiation between society and the manufacturers that had to unfold.

And what happened during that unfolding? Well, when you first got in a car and pushed the gas pedal, you didn’t know how fast it might go. [laughs] It might go slowly forward or it might just bolt as fast as it could go. And brakes — if you hit the brake, it might not work quite as you’d expect, certainly not to today’s standards. Well, these things caused harms to individuals and became major lawsuits and concerns. And so, as a result, if you wanted paved roads, you basically had to improve the safety of the automobile.

Tippett: So all of this was working in concert…

Sweeney: That’s right.

Tippett: …It was this complicated process, as you say, that brought everybody in before it was launched on the world.

Sweeney: Yeah. Or certainly as it was unfolding. But here, where commercial adoption is the only thing that’s needed, and where the cost is usually a free technology, or seemingly free — it’s not really free, but it seemingly doesn’t cost me out of my pocket to use Google search, for example — it makes you feel as if there’s no downside.

Tippett: Right. And so one of the things that you’ve also been really leading on is this matter of — okay, in this world we inhabit, with these technologies we have, what is the right way to intervene? Which really is a way of getting really pragmatic and granular about that question of how to shape these technologies to human purpose. And you talk about looking for the “sweet spot.” Would you explain that?

Sweeney: Yeah, exactly. Normally when some clash happens — it’s usually, by the way, when the technology’s gone through the lifecycle all the way into the marketplace — now the problem rears itself, and society finds itself in a take-it-or-leave-it situation: “If I have to have more privacy, I have to give up some utility. If I need more utility, I can’t have the privacy.”

And this kind of zero-sum argument is so far from the truth. The truth is, most of the time it’s a design issue. Either the commercialization of the product is causing the clash, or an arbitrary design decision is. And in those particular cases, there exist these sweet spots where we can actually have all of the benefits, or maximize the utility, and we can do it without the harms. And so that’s what we try to do in my work: figure out how you get society and the manufacturer to move to this sweet spot.

Tippett: And that means asking these questions and having the solutions in the design stage before it’s out in the marketplace, is that right?

Sweeney: Right. So we have solutions all the way through the lifecycle. But during design, you can do risk assessments, you can do various kinds of impact assessments to know where the problems are and how to go about fixing them. And usually that’s the cheapest, easiest way, because it’s usually just incidental.

But by the time a business package gets put on it — how it’s going to be sustained, how it’s going to make money — and it goes into the marketplace where it gets adopted, now it’s really hard. Those easier solutions are gone. That time is gone already. And so now we’re left in this take-it-or-leave-it, and the sweet spots are harder to get to, because either we’ve got to have some patch or some technology add-on, or we’re going to have to live with the harms, or something like that.

Tippett: It feels to me like this orientation that has developed in you to — as you say, starting with just all-out enthusiasm, “this is going to make everything better,” and then becoming really aware of unforeseen consequences and of the need — which actually, in my mind, I feel like it’s a move that culturally, it’s actually a move of adulthood, understanding that life is as much about things that go wrong or not as we plan, as things that go right. And that actually that’s how we learn and grow. And yet we don’t necessarily make those assumptions and behave that way, especially when it comes to the market. And so now I’m just really curious about how all of these things you’ve done, and this way you approach technology in general, and our lives with technology, how has this prepared you and shaped you now to greet this new world of generative AI?

Sweeney: I still have the same excitement with all the technology. I definitely feel still the energy that I felt as a graduate student. Of course, we’re far more worried now because we haven’t righted any of these clashes. We haven’t resolved any of the big ones. So for example, social media, we don’t have the slightest idea how to do content moderation at scale. We haven’t resolved how you build trust at scale. Journalism has gone through major transformations, disinformation, and so forth. What do you trust?

And generative AI plays right into those fault lines, right into it. It’s going to exacerbate them, like, times 10 or times 100. And so we don’t really have time quite on our side either. As a society, we’re not quite ready. But on the other hand, generative AI is very exciting.

So I think if anybody was to hear me talk and they walk away feeling gloom and doom, I don’t have the gloom and doom. But at the same time, it is not the great panacea either.

[music: “Pop Vibration” by Blue Dot Sessions]

Tippett: Well, let me just ask you, as somebody in this field, you did use the word exciting a minute ago. What excites you, and what surprises you about generative AI?

Sweeney: Well, in some ways, like I said, going back to my own arc of history: I remember when spellcheckers came out, and people were like, “Oh my God, children are going to never know how to spell again. We’re all going to lose that.” Or when word processing came along, and people were like, “Well, what about handwriting? Nobody’s going to teach it anymore.” And in that way, something like a ChatGPT is sort of in that same evolution. That is, I give it a prompt and it gives me a first draft. And it can be a first draft of anything: a poem, a chapter in a book, a song. I don’t know, it could never do a Krista interview.

So I’m excited in the sense that it’s pushing us into another tool. I’ll just stick to ChatGPT now, because I just think it’s — Generative AI is much broader than only ChatGPT, but ChatGPT has become the poster child of it. And it’s a great poster child because you can go to the website, you can try it yourself, and you can produce anything out of it. And it has all of the benefits, and features, and concerns that we see in general with generative AI. I think those are all the good things.

But then I could flip around and say, “Yeah, so we’re just moving the evolution from writing, to typing on a computer, to spell checking on a computer, to grammar checking, to first drafts.” So one could argue that’s the arc, and we shouldn’t worry too much. But what also makes this one different on the concern side is — there are lots of concerns. One of which is: the internet in five years — maybe even in as short as three years — won’t be the internet that we know today. Right now, most of the content on the internet, good or bad, right or wrong, full of disinformation or whatever, is pretty much human-generated. But in three years, most of the content is going to be bot-generated.

Tippett: Yeah, I’ve heard you say that.

Sweeney: And it’s going to be a huge echo chamber. So if we don’t know how to do content moderation on social media, how do you do content moderation when it seems like every original piece of writing is saying the same thing and it’s all really from the same source?

Tippett: So one of the questions that this just prompts in me is: we should be less trusting now than we are, right? Maybe we just become more reasonably untrusting of what’s on the internet.

Sweeney: So if it were 1995 and you said, “I’m going to be distrustful of what’s on the internet,” that’s one thing. Because your notion of truth, and your notion of news, and your notion of what’s right, and your notion of what people believe were coming from all of these other sources — and they had their own problems, but for the most part, we could argue they were probably more reliable than being in an echo chamber with ChatGPT.

And so with the internet of today, or even the internet of tomorrow, we increasingly don’t know how to know what’s right. So let me give you a couple of examples. If you ask ChatGPT just medical questions, which my students and I did this spring, one of the things that kind of popped out was: if you ask medical questions around common diseases, sometimes you get the right answer and sometimes you get “drink bleach.”

And if you ask it about more obscure medical problems, you get reliable content. So why is that happening? It’s happening because ChatGPT learned all it knows on the internet — and it actually doesn’t know anything; it’s just statistical correlations around words and how you put these words together. And so it has all the biases of the internet. So it’s one of the most racist, sexist things you could imagine. It’s so racist and sexist they actually had to write a program to interface between ChatGPT and the world. [laughs]

Tippett: The companies write that interface?

Sweeney: Yeah, so that certain things won’t come out. But it’s not perfect because we don’t know how to do moderation at scale. [laughs]

Tippett: As you said a minute ago.

Sweeney: Right. So we don’t even know how to moderate ChatGPT. So when we wanted it to reveal itself to have these biases, the students had so much fun finding great prompts that would make it reveal them.

Tippett: Right. It seems to me also, though, that by the same token, in terms of our agency — that we have agency to lower or raise the quality of what we get back from it by — I hate this word “prompt” — I want to say the questions, the quality of the questions we ask of it, which we’re talking about in terms of prompts. You’re using it in the classroom, right?

Sweeney: I use it in the classroom. I use it all the time, because I want to know what it’s good for, and what it’s not good for, and how to understand it. I’ll give you an example. A student of mine typed in — this is literally the only thing that the student typed in — “Write a research paper by Latanya Sweeney.” And what came out was a beautifully formatted research paper. It had an abstract, an introduction, a background, a methods section, statistically relevant results, and beautiful bibliography references at the end. The only thing is, none of it’s true. I never did that survey, I never did that study. As far as I can tell, that study never happened. It was all about privacy, too. It was all about data privacy. [laughs] And the results were significant, and none of the references were real. That’s pretty amazing.

Tippett: So, what do you make of that? This language — and now I’m speaking as a kind of consumer out here, a non-specialist, not in the field, which most of us aren’t — this language of “hallucination.” It’s such a fanciful word; it’s so interesting that that’s the word being used, even technically, right? Because it’s making things up.

Sweeney: Right. If you turn that in — in fact, there was a lawyer who turned in a brief that ChatGPT made, and of course, none of the cases were real. [laughs] That lawyer has to personally suffer the consequences, because ultimately he’s responsible for what he submitted. But ChatGPT’s not responsible.

Tippett: It’s a weird kind of creativity that it has. I’m just asking you as a computer scientist, what do you think of this weird creativity? Or is that even not the right word?

Sweeney: We’ve been using autocomplete for a while. So if you go on your phone and you’re typing a text message, or you’re typing in a word processor, and it wants to finish the sentence for you —

Tippett: Finish the sentence, yeah.

Sweeney: Right. So all it’s learned is: what do you normally type after this? “This is my best guess at what comes after this word,” or “This is what I think comes next.” That’s all ChatGPT is doing: “This is my best guess at what would go here, because…”

Tippett: The best guess is that she would’ve done this study, and it would’ve had these results.

[laughter]

Sweeney: Right. And that they would be statistically significant. That’s the funniest part. [laughs] And the references would kind of look like this. She’d have about 30 of them.
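
What she is describing is, in effect, a next-word frequency table. Here is a toy sketch in Python of that “best guess” mechanism, a simple bigram counter; it illustrates the statistical idea only, nothing like the scale or sophistication of real models:

    from collections import Counter, defaultdict

    # Count which word follows which in some text, then "predict" the
    # most commonly observed successor.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    successors = defaultdict(Counter)
    for word, next_word in zip(corpus, corpus[1:]):
        successors[word][next_word] += 1

    def best_guess(word):
        """Return the most frequently observed word after `word`."""
        return successors[word].most_common(1)[0][0]

    print(best_guess("the"))   # "cat" (ties broken by first occurrence)
    print(best_guess("cat"))   # "sat"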

Tippett: I don’t even know what questions to ask about this. It just feels like such a new territory in some ways. I know what you’re saying. It follows on territory we’ve been on. So what I’m trying to think through now is what are the human condition implications of generative AI? And something that you also alluded to a minute ago — which I feel like we haven’t stated just clearly enough as what is the elemental thing that happened here — is that this technology is a student of us, us on the internet.

Sweeney: That’s right.

Tippett: It’s us on the internet, which is a huge qualification. So if I just say that phrase to you, “the human condition implications of generative AI or ChatGPT,” tell me where that takes your mind.

Sweeney: Well, it makes me a little afraid. It makes me afraid because, like I was saying, if ChatGPT had come out in the 1990s, we had other trusted sources, and people would’ve just been able to discount it. But our other trusted sources are gone. This conversation has been very focused on ChatGPT, but we’re coming up on the 2024 election, and fake videos, fake images that you can’t distinguish are also another mind-boggling issue. It’s just a matter of writing a prompt. Think about that. That’s so crazy. And literally within 60 seconds there is this beautiful image, or there is this video of that person doing that.

Tippett: Okay, but I’m talking to Latanya Sweeney, who says, “We don’t have to sacrifice privacy to have the benefits of our technology.” I’m just curious about how are you framing this for yourself and your students? If you continue on this path you’ve been on doggedly these years, what do we do about that? Or what is it giving us to learn, which I think is another interesting question?

Sweeney: Well, I think the first thing — and I think that’s the right way to think about it too, Krista — is: what are the big things to learn here? What are the big takeaways, or what is the space of harms going to look like? Where are the harms apt to be? The level of disinformation is going to go up dramatically. And our instincts about what is true — that’s the big challenge generative AI gives us as a society.

Tippett: I feel like we’ve spent a lot of time arguing about whose facts are right. And in that, there was a flawed assumption that facts alone were ever what conveyed truth, or landed in human bodies as truth. I realize I’m being a little bit of a devil’s advocate here, but let’s say one of the things this gives us to learn is to be talking about the nature of truth.

Sweeney: [laughs] That would be an accomplishment, right?

Tippett: That would be an accomplishment.

Sweeney: And I think that’s a really great accomplishment — that is one of the most important things for us to get our heads around now. That’s the biggest challenge we’re facing. What do I think of as truth? What am I using, now that we’ve become skeptics? The biggest problem is we don’t have a replacement yet for it. We don’t have a solution for it. But that’s where I think the overarching challenge comes from.

The second challenge is a U.S. one. The internet — most of the data it learned on — is from an American perspective, which means our stuff is the stuff…

Tippett: Our subjectivity, our bias, our pathologies.

Sweeney: Exactly. Yeah.

[music: “Building the Sled” by Blue Dot Sessions]

Tippett: Is one implication of this that it weans many of us off the internet?

Sweeney: No.

Tippett: No?

Sweeney: Or let’s say it differently. Maybe the internet is just going to be a new kind — maybe we’re going to find a new thing. Because our need for a north star around truth is just fundamental to democracy. We can’t really survive if all of us come to the table with completely different belief systems and can’t even find a common fact that we agree on. So we’re going to have to navigate our society through to that, and that’s going to take some unpacking, to figure out what this means and how we get there. But I think that’s the biggest challenge of generative AI: how do we build trust at scale?

Tippett: But if you think about those previous worlds we talked about, yes, they had time, the printing press or the car or the telephone or whatever, it was a much longer span of time getting disseminated. But it was transformative. There was incredible upheaval. And that is also true for our age.

Sweeney: Well, you know what I would also liken it to? Electricity. Electricity, like cars, had to be negotiated. That is, somebody had to run these power lines, somebody had to have generators, and that required some negotiation. But people’s houses burned down, and people would plug things in and it would blow up in their hand. All of this stuff had to be…

Tippett: Okay. You’re not making things better, but okay. [laughter]

Sweeney: Mechanisms came to exist in society, like Underwriters Laboratories and things like that, to help us navigate through that. I think your question, though, is: will we have enough time, at the speed generative AI is moving, to find those problems and find those kinds of solutions before we’re so transformed we can’t find our way back?

Tippett: Well, let me ask you this. I do want to ask you again, what is the upside you see? What does excite you? Because you also are closer than the rest of us in seeing that.

Sweeney: I think, on the other hand, the ability to express ourselves is certainly enhanced. I have a 15-year-old son. He has an idea for a card game. I don’t know if you know about Magic: The Gathering, but one of the things about Magic: The Gathering is that the cards are kind of trading cards, but you also play them, and there’s a lot of strategy in how you play them. And on the cards, they have these beautiful images that artists have generated. So he decided that he had an alternative game he wanted to design around a chess theme, but he can’t do that art, and he doesn’t know artists who would give him art that he could use — a blazing queen, or a bishop threatening a knight, and stuff like that.

So he finds generative AI, he types in a few prompts, and voilà, these amazing images come out. [laughs] And then, using standard word processing with different colored fonts and so forth, he writes, or draws, really impressive-looking cards, and he does this 80 times. And he did that within a month. That’s amazing. All of a sudden we have a way of expressing our creativity that we never had before. I had a colleague who literally wrote a book in seven days.

Tippett: Oh my gosh.

Sweeney: Now I’ve been tortured and tortured myself and tortured my family for years trying to get this book done. And he wrote a book in seven days. That’s crazy.

Tippett: With ChatGPT as a companion, as a helper?

Sweeney: Yeah, with ChatGPT. That’s right.

Tippett: What’s clear is — especially, again, I really like this historical perspective — we are the generation in the middle of the mess. We are the generation right at the outset where we can see so clearly what is being undone, and we can see the dangers because this thing is accelerated in its development.

Sweeney: And we’ve seen a few cycles. We’ve seen a few cycles.

Tippett: We’re wary in a way we weren’t wary with Facebook in the beginning.

Sweeney: But until we can really get our heads around how to deploy an army of public interest technologists, we are not going to be able to get completely in front of it.

Tippett: No. You do say that there is a field emerging of “public interest technologists.” That’s not a phrase most people have heard.

Sweeney: And it doesn’t really flow off the tongue that easily. [laughs]

Tippett: No, no. But it’s comforting when it does. [laughs]

Sweeney: And most importantly, though, it is really needed. It’s really needed for someone to represent society’s interest and actually move us from what I would call a technocracy back to a democracy. Because right now, all of the rules that we live by are literally written by how the technology works, what the technology can do, whatever arbitrary design decisions it has. And if you’re using decision-making algorithms, it doesn’t even matter what our laws are — you can’t enforce them if the technology goes contrary to them.

So we need someone who’s going to say, “Wait, I’ve got to represent societal interests” — meaning, what are our values, and what are the things that we hold dear? In particular, what are our laws, and how do we make sure they get protected in today’s society? And that’s at the heart of public interest technology: helping society navigate its way back to itself today.

But think of it this way, what if we succeed? Think of the world then. Our democracy would be restored, our quality of life would be restored, and we would have all these benefits.

Tippett: I wanted to touch down on the fact that you also write poetry, which might…

Sweeney: Who told you? Oh boy, you really do your homework.

Tippett: Come on. Come on. It’s on the internet. All right? What are we talking about?

Sweeney: Yes. 1990-something. [laughs]

Tippett: Okay, well just as you said, everything has eternal life online. All right. But it might sound like a non-sequitur, but I think one of my questions, again, if I think about the human condition angle on this is what do we learn about what makes us human in having to grapple with these technologies? And, of course, we know that ChatGPT can write poetry, but I would also say that where poetry comes from in us and the full range of what it expresses about the human experience is very much also embodied. And that embodied human experience is not present on the internet in its fullness. Right?

Sweeney: Absolutely.

Tippett: I don’t know. Does that…

Sweeney: Yeah. We were talking about a mutual friend of ours earlier, and whenever — I haven’t gotten very many emails from her — but whenever I get them, they are stunning in her use of words. And ChatGPT will never be able to do that. There’s something just amazing about the human ability that is not completely captured. And so maybe what we’re saying is: ChatGPT, and all the other things that we do in our lives where “I just need to send a note” — I’m sure she doesn’t send every email like that, and I’m sure there are some emails where she just needs to get something out right away — maybe those are the things that we leave to ChatGPT. And these amazing pieces, where we reach into our hearts, and into our souls, and try to convert that feeling and emotion into words, or into writing, or into a drawing, will still be there. And we’ll learn to distinguish the two.

Tippett: Yeah, right. I guess that’s the muscle we have to grow. There’s a poem called “Blood Passage.” Yes, from 1996, online.

Sweeney: [laughs] I’m going to delete those.

Tippett: [laughs] No. But you say at the bottom that it was written for a family reunion, and there’s a reference in the middle of it: “I am old and you are young. / We span two hundred years. / I know the past five scores, / you hold five more still.” Is that a reference to the 200-year present idea of Elise Boulding, or were you just really counting?

Sweeney: No, it was really a reference back to my great-grandparents.

Tippett: Yeah, I was going to say — that’s what I — your great-grandparents. You truly spanned 200 years.

Sweeney: No, really when they would talk about their parents or their grandparents, it’s mind-boggling to go back to think about somebody talking about their parents in the 1880s. It’s like —

Tippett: And I guess I just wonder, do we start to see this kind of experience and perspective we have in a new light, because of what the technologies take away from us that they can do better? I don’t know.

Sweeney: Let’s flesh it out more, Krista. Are you saying that the fact that everything is so literally preserved — or do you mean —

Tippett: No, I just mean more — again, having this embodied experience of 200 years. You can write about that, but you can’t feel it. We have this in our bodies and you put some words on a page that express it.

Sweeney: Now, hallucinate as it might, it’s not likely to bring those connections together. And I think that’s what you’re after: the things that are said 90% of the time, it’s going to say. It’s going to hallucinate to connect one 90% to another 90%. But the unusual pieces that hold the idea — that’s not going to really happen. It might happen once in a while, but it’s not going to happen regularly.

Tippett: If you think about the notion of intelligence — one thing I think, I’m still remembering the days when I was working in a big media organization and the internet came along and there was the “New Media” department, and then at some point that became a completely ridiculous phrase because the new media was media and all the old media had to convert to it.

Sweeney: [laughs] It’s gone.

Tippett: And so I’m so curious about — this language of “artificial intelligence” is clunky, and I’m sure it’s a placeholder now — but intelligence. And you’ve been working with intelligence, and with computing intelligence. Human intelligence is so much more than thinking. There is knowledge intelligence, but there’s also the intelligence of love or care or parenting or — You gave this beautiful talk — thank you, internet, again — at the Arlington Church in Boston. Arlington Street Church, right?

Sweeney: Oh my gosh. That’s amazing. [laughs]

Tippett: … about your philosophy of giving. But that’s a kind of intelligence that is different from thinking intelligence or civic. You and I both love that language of the civic, and civic intelligence is also different from private, individual smarts.

Sweeney: Well, I don’t think anyone is going — maybe there are a few people out there who might try to claim something like a ChatGPT has intelligence — but if we go back over the arc of time and the work of artificial intelligence, the “artificial” is bigger than the word “intelligence.”

Tippett: That’s so interesting. That’s an interesting thing to think about. What do you think now? How would you just start to answer that question now, “What is artificial intelligence?”

Sweeney: The truth is, back in the day, what AI really was, was the pursuit of building a human. It was literally no different from the artists of Grecian urns, where you’re just trying to represent an image of man, or an image of your time — humans have always tried to find ways to express their intelligence and to make likenesses of themselves.

And in that way, that’s really what was at the heart of AI when I was a graduate student — what drove us. But today, “what is AI” is none of that, right? It’s just like you said: it’s just the statistical correlation of the internet, or the statistical correlation of images, with some fine-tuning algorithms. I don’t want to take away from the great work of a lot of computer scientists recently — I don’t want to take that away — but at a 10,000-foot view, this is what it looks like.

Tippett: But I think it’s really helpful to hear somebody just define that, because I feel like it’s hard for ordinary people out here, for us, to just see some of the things that are actually — not simple, but straightforward — about what it is.

I think I want to ask, just following on that, just as we wind down, so let me just ask the question this way: With this life you’ve led with this intelligence and knowledge base that is yours, your engagement with our technologies, what do you keep learning? What are you learning now about what it means to be human?

Sweeney: Oh my God, that would definitely be — I take so much hope for tomorrow. I have the luxury of living and working with these 20-year-olds who are just amazing. They’re just amazing. And the society that we’re passing over to them, I have to apologize to them on a regular basis. I’m sorry about this, but it is what it is. And they’re eager and their eyes are open wide to see the world as it is, and what they’re inheriting and what they need to take on. So I do have a lot of hope in the future. I think that’s also particularly a human trait.

Tippett: Yeah. That hope is what you possess, and that itself is an expression — is a manifestation of the thing I’m asking you to give some definition to.

Sweeney: Yeah.

[music: “Eventide” by Gautam Srikishan]

Tippett: Latanya Sweeney is the Daniel Paul Professor of the Practice of Government and Technology at the Harvard University Kennedy School and in the Harvard Faculty of Arts and Sciences. She is founder and director of the Public Interest Tech Lab at Harvard and also founder and director of the Data Privacy Lab there. And she is former Chief Technology Officer of the U.S. Federal Trade Commission.

The On Being Project is: Chris Heagle, Laurén Drommerhausen, Eddie Gonzalez, Lilian Vo, Lucas Johnson, Suzette Burley, Zack Rose, Colleen Scheck, Julie Siple, Gretchen Honnold, Pádraig Ó Tuama, Gautam Srikishan, April Adamson, Ashley Her, Amy Chatelaine, Cameron Mussar, Kayla Edwards, Tiffany Champion, Juliette Dallas-Feeney, Annisa Hale, and Andrea Prevost.

On Being is an independent nonprofit production of The On Being Project. We are located on Dakota land. Our lovely theme music is provided and composed by Zoë Keating. Our closing music was composed by Gautam Srikishan. And the last voice that you hear singing at the end of our show is Cameron Kinghorn.

Our funding partners include:

The Hearthland Foundation. Helping to build a more just, equitable and connected America — one creative act at a time.

The Fetzer Institute, supporting a movement of organizations applying spiritual solutions to society’s toughest problems. Find them at fetzer.org.

Kalliopeia Foundation. Dedicated to cultivating the connections between ecology, culture, and spirituality. Supporting initiatives and organizations that uphold sacred relationships with the living Earth. Learn more at kalliopeia.org.

The Osprey Foundation — a catalyst for empowered, healthy, and fulfilled lives.

And the Lilly Endowment, an Indianapolis-based, private family foundation dedicated to its founders’ interests in religion, community development, and education.
