# TIME100 Talks: AI and a More Equitable Society

Auto-transcribed by https://aliceapp.ai on Thursday, 19 Sep 2024. Synced media and text playback available on this page: https://aliceapp.ai/recordings/m94ry6F2NLckOouQ7JDeCvPajTXlRAdw

* Words: 5,615
* Duration: 00:28:47
* Recorded on: Unknown date
* Uploaded on: 2024-09-19 02:46:03 UTC
* At: Unknown location
* Using: Uploaded to aliceapp.ai

## Speakers:

* Speaker A - 18.34%
* Speaker B - 44.84%
* Speaker C - 36.81%

----------------------------

Speaker A [00:00:00] Well, good morning. How's everyone feeling?

Speaker B [00:00:02] Good.

Speaker A [00:00:03] Great Dreamforce so far? Well, welcome to our TIME100 Talks: AI and a More Equitable Society. I'm Jessica Sibley, the CEO of TIME, and I have two incredible panelists here today. Fred, I'm going to start with you: just introduce yourself and give a very brief bio.

Speaker C [00:00:28] It's great to be here. My name is Fred Swaniker, and I'm the founder of Sand Technologies. I'm originally from Ghana, but I grew up all across the African continent, so I'm very passionate about Africa and seeing how we can transform the continent. Today, Sand Technologies is an enterprise AI company that deploys data and AI solutions for governments and companies around the world. So that's what I do.

Speaker A [00:00:54] You're also a TIME100 honoree: in 2019, TIME named you one of the 100 most important, influential people in the world. Congratulations.

Speaker C [00:01:05] Thank you very much.

Speaker A [00:01:06] And last year, in 2023, you received a TIME Impact Award. So we're really proud of you and all the hard work that you're doing, and to have you as part of the TIME family and community.

Speaker C [00:01:21] Thank you so much.

Speaker A [00:01:22] Sasha?

Speaker B [00:01:23] Yeah, I'm Sasha. I'm the AI and climate lead at Hugging Face, which some of you may know; it's a platform for sharing AI models and datasets, so it's kind of the backstage of a lot of the AI world. I'm part of the ethics team, and what we try to do is evaluate models and datasets, and try to make sure that they are as unbiased as possible, that they represent what they're supposed to represent. My particular interest is evaluating the sustainability of AI models: essentially, how much energy they use, how many greenhouse gases were emitted, and trying to make sure that we limit those impacts as much as possible. I'm also affiliated with two other organizations that are really important to me. One is Climate Change AI, a community that brings together people who want to use AI for climate-positive applications. The other is Women in Machine Learning. The gender diversity in machine learning and AI could be better; I think the latest statistic was that it's 80% men and 20% women. So Women in Machine Learning tries to create a safe environment, and offers grants and mentorship to help keep women in this field.

Speaker A [00:02:34] Well, just last week, TIME released our TIME100 AI, the 100 most important, influential people in AI today. Thirty percent were women.

Speaker B [00:02:46] Yeah. So it's better. Better than the average.

Speaker A [00:02:51] More work to do, but better than the average. And you were named one of the most important, influential people in AI today.

Speaker B [00:02:58] A leader. Yes.
Speaker A [00:03:00] In the leader category. Congratulations to you. Let's start with you, Sasha, and talk about your TED talk.

Speaker C [00:03:09] (Venue announcement) Trailblazers, head to the AI landing for the TIME100 Talk: AI and a More Equitable Society.

Speaker B [00:03:21] All right, here we are.

Speaker A [00:03:23] So, your TED talk: nearly 2 million views. I'm assuming some in the crowd have watched it already, but if not, you must tell me about that process, and what's the most important takeaway from what you're passionate about, what you spoke about.

Speaker B [00:03:40] That TED talk was really born from a desire to shift the focus of the discussion away from catastrophic risks, away from killer robots. I've been working in AI for over ten years, and initially people didn't really know what it was, and then people were really excited about it, and then people got very dark about it. Every time I'd say, oh, I work in AI, someone would ask: so, are robots coming to kill us? Is there going to be an apocalypse? Should I be afraid? And still now, just last night, someone asked me, should I be afraid? So I made that TED talk as a way of putting something out there that says we're not worrying about the right things, essentially. We shouldn't be worried about killer robots. We should be worried about how our data is used, about the environmental impacts of AI. We should be worried about the here and now, the really present-day impacts, and not these future, hypothetical ones that may or may not happen.

Speaker A [00:04:28] Right. Fred, switching topics a little bit: you've been focused on training 3 million Africans to be AI and technology leaders. You're focused on closing the digital divide and the democratization of access to AI. So, looking at this room of executives and leaders, how can we work together to close this gap? What can we be doing? Give us some practical advice and actions that we can take.

Speaker C [00:05:02] Thank you, Jessica. Well, we're talking today about how AI can bring about a more equitable society. One of the beliefs we have is that for AI to really benefit society, we need to think about who's building the AI, right? Who's training the trainers of our AI models. Today, as Sasha just mentioned, most of the folks who are building our AI models are sitting in Silicon Valley, and they're predominantly men. We believe that we need to bring a more diverse group of people into the AI conversations. So our goal is actually 5 million: we are looking to train 5 million people from Africa in AI skills in the next decade.

Speaker B [00:05:51] Right.

Speaker C [00:05:51] Today we are training about 300,000 people across Africa. We're probably the largest trainer of software engineers and data scientists in the world now. And about 30 to 35% of the people we're training are women. So we think that we need to do three things as we think about who trains the trainers, who trains our models. One is we need more diversity, which we talked about. Second, we need to infuse ethics and values into the training program.
So the folks that we train, before we train them in software engineering, data science, or cloud computing, first go through six months of training in leadership, ethics, and values, so that they can understand some of the trade-offs they might need to make as they build software. And the final thing we need to do is expose them to the problems of the world: healthcare, education, climate change, gender issues. So that when they start coding, they have a purpose: I'm driven to build AI to improve healthcare; I'm driven to build AI for climate change. You're not just coding for coding's sake. So this is one of the things that we think is really important, and I'd like to encourage all of us in the room to think about how we influence the education system and the people who are actually going to be building these models that will shape the future of humanity.

Speaker A [00:07:11] So when we talk about AI for peril or progress, it's really AI for good, and starting at the very beginning with those that are passionate about this: AI for health and climate, and all of those things, to solve the largest global societal challenges. I really like that. That's really important. Sasha, speaking about climate: you've been working on having AI solve climate change, and then you realized AI was part of the climate problem. What do you mean by that? Explain that.

Speaker B [00:07:50] Yeah. So, about ten years ago now, I had what I call my quarter-life crisis. I was working at Morgan Stanley, and I realized that climate change is real, and I was going to quit my job and go plant trees; or, I think my thing at the time was to teach kids how to compost, because it was really hard. Anyway, my partner said, well, you have a PhD in AI; maybe you can use that, and you don't need to start from scratch. And then I realized that there was no community in AI and climate, no central place where people could come together and work on better climate modeling, biodiversity monitoring, prediction, all these things that AI is really good at and that we actually have data for. So I started meeting people who wanted to work on this, and we created this community. Then, a couple of years in, someone came up to me and said, well, aren't you part of the problem? And I said, what do you mean? We're solving climate change. And they said, well, how about the energy, the water, the natural resources that AI is using? At the time, people didn't really know about it. I didn't really know about it. And I realized that it was part of my responsibility as a scientist to work on this and to understand it better. So it's been six years now that I've been working to understand how we can quantify the amount of energy used for training AI models, how we can make informed choices when it comes to deployment and training, and how we give people a better idea of which model is less harmful for the climate, and how they can make these choices. For example, a project I'm working on right now is developing Energy Star ratings for AI models, so when people want to do object detection, translation, what have you, they can choose the most efficient model that does the task they want to do. Essentially, a lot of people are thinking about this right now. Legislators are thinking about it; there was just a bill that came through the United States Congress in February. So this is really a crucial time, when we're trying to get ahead of the issue so that AI doesn't become more of a problem than a solution in terms of climate change.
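A minimal sketch of the kind of measurement Sasha describes, using the open-source codecarbon library, one real tool for estimating training energy and emissions; the project name and training function here are hypothetical placeholders, not her actual setup:

```python
# Hypothetical sketch: metering the energy and emissions of a training run.
# codecarbon samples CPU/GPU power draw during the run and converts the
# energy used into kg of CO2-equivalent using regional grid-intensity data.
from codecarbon import EmissionsTracker

def train_one_epoch():
    """Placeholder for your actual model-training code (hypothetical)."""
    pass

tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    train_one_epoch()
finally:
    emissions_kg = tracker.stop()  # estimated emissions, in kg CO2eq

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Numbers like these are what make side-by-side efficiency comparisons between models, and hence ratings, possible.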
Speaker A [00:09:45] So you said six years ago; now, six years later, here we are at Dreamforce, the largest AI conference in the world. Is this a topic that's being spoken about this week? Are people more aware of it? Also, Fred, help answer this question: when you talk about ethics and training, is that part of what this is all about, or is that a conversation that has been left out for now?

Speaker C [00:10:14] Well, I think it's something that this should be about, right? Today, I believe there's too much conversation around the technology and not about the problems that the technology can solve. There's a lot of excitement about large language models and generative AI, and today it almost feels like a big hammer looking for a nail, like a big tool that we're trying to figure...

Speaker B [00:10:37] Out. Or even a microscope people are using to hammer nails.

Speaker C [00:10:40] To hammer nails, yeah.

Speaker B [00:10:41] Like, that's not what it's about.

Speaker C [00:10:42] That's not what it's about.

Speaker B [00:10:42] Right, exactly.

Speaker C [00:10:43] So I think, first, let's start with the problems. What are the big, important, complex problems that the world is facing that we need to solve? And there are many, many significant problems. One of the things we talk about at the African Leadership Group is that those with privilege, who are healthy, who are alive, who are influential, shouldn't be doing easy things. They should be doing hard things. We should be solving really big problems. That's the only way to justify privilege. And I think that AI is occupying a position of privilege. What do I mean by that? We're spending a lot of money, right? Billions of dollars going to this thing. So there's a huge financial cost; there's an environmental cost, in terms of the energy we use for training models and so forth; and there's a human cost, in terms of the job losses it might incur and everything.

Speaker B [00:11:32] And also the labor for labeling data and for creating these datasets. Right.

Speaker C [00:11:36] Exactly. So we need to then ask: are we getting enough return on this investment that we're making in AI? I think that today, a lot of the conversation and excitement is around AI for writing haikus and for solving homework and creating art and chatbots. We need to move beyond that and ask: how do you actually use AI to solve healthcare and education and infrastructure challenges, access to water? There are some really big problems that...

Speaker B [00:12:07] We need to solve that don't necessarily need generative AI.

Speaker C [00:12:11] That's right. AI's been around as a field for 40 years, and there are so many techniques. If you start with a problem first, then you find the right technology, the right AI techniques, for that problem. Not saying, oh, I've got generative AI, let me try and find a way to use it; then you're actually finding frivolous use cases for it sometimes.
Speaker A [00:12:29] How far away are we from that? Is it five years? Is it a year? And then the second part of my question: it just feels like this is happening so fast, and moving so fast, with the acceleration of this technology. I work with a lot of the large global tech companies, where it seems like money is no issue; it's just a race, and it's all about time. Is that part of this issue, that we're just moving so fast, and that we need to slow down a little bit to make sure that this is fair and responsible?

Speaker B [00:13:06] So what's interesting about AI is that it actually originated in the 1950s, and it's gone through these AI summers and AI winters. Right now, I would argue that we're in a generative AI summer, but this is not the first summer AI has had. Even in the sixties, AI was going to, I don't know, play chess better than people, and that took 30 years: Deep Blue was in the nineties. So I think it's part of the natural momentum of the field to get very excited about a new technique and an application. Then, as we collectively realize that it's not going to solve all the issues that we want it to solve, maybe the excitement dies down a little bit, but there's still going to be a core set of problems for which, for example, generative AI can be very useful. Maybe we're going to realize that it's not going to solve health or climate change, but there are some interesting problems, like customer service, that it will solve. And it's interesting, because Yoshua Bengio is one of my supervisors, and he's seen a couple of these waves come and go. He's very much of the view that the hype is going to die down in a year or two, but that some very interesting problems will remain, like, for example, discovering new antibiotics. We don't have enough antibiotics, there's a lot of bacterial resistance, and generative AI can actually come up with new antibiotics that we can test for new diseases. So essentially, he says, these applications are going to stay, but maybe the superfluous ones are going to die out a little bit. And that's what I'm waiting for: for the wave to die down, and then we're going to see what is really left.

Speaker A [00:14:34] I hadn't heard that. Like the crypto winter right now: former President Trump launching crypto companies is all over the press. So maybe this is summer, winter; I hadn't thought of that. It's a really interesting point. Let's get a little dark here.

Speaker C [00:14:50] But Jessica, this is happening now. You asked whether this is in the future; it's happening now. I mean, think of Google's AlphaFold, a technology that can be used for discovering new proteins for medicine and so forth. Or the work that we do at Sand Technologies. The way to think about it is that we're like a non-defense Palantir: Palantir leverages data and AI, but very much around working with the CIA, Homeland Security, and the Department of Defense, and we're driven by a different ethical system. We say, let's use AI to have real impact. So I'll give you a few examples of some of the work that we're doing today.
This isn't in the future. In water: 66% of people in the world are going to face water scarcity by 2025, and about 7 trillion liters of water are lost every year due to leakage. In a city like London, and in many cities around the world, a third of the water that's produced is just lost to leakage. So in the UK, our AI systems manage the water supply for the entire city of London, 16 million homes. We're predicting when a pipe is going to burst, we're preventing pollution from going into the oceans, we're helping to reduce carbon emissions from the waste treatment plants, and we're helping them plan where to invest in water infrastructure, to really make sure that people have access to water. We put sensors in the rivers so we can track the nitrates, the phosphates, the turbidity in the river, so that before you drink water, you can put in your home address and we can tell you: the water you're about to drink comes from that river, and here's the health of that river. So you can actually improve the health of the rivers that feed our homes. This is a real application that exists today; we've been doing this for five years.

The second example I'll give is in telecommunications. The president of Rwanda, a country in Africa, wanted to figure out: how do I bring access to all my citizens? We had done some work in the US for AT&T, to help them figure out where to roll out fiber to about 4 million homes, and we created the algorithms to map out the homes it made sense to connect. But here, they were trying to maximize profit. In Africa, they were trying to maximize impact. They asked: how do I give access to people who live in areas where they don't have access to broadband? So we were able to bring those same algorithms to that country, and helped them improve access from 50% to 96%, just by more intelligently placing their towers. And now those people can access telehealth and education and so forth. We used that same technology of where to roll out towers and fiber when they wanted to figure out where to put health facilities. It's the same problem: I've got population here, I've got roads, I've got hilliness, and algorithms can work out where to put health facilities to optimize access.

And the final example I'll give: as we transition from fossil fuels to electric vehicles, one of the things we have to figure out is where to put charging stations in our cities. Today, most people who have access to charging stations are wealthy people who can have one in their house, drive a Tesla, and so forth. But if we really want to see the energy transition, you need to make sure that there's no range anxiety, that drivers can get out and there are charging stations all over the place. AI can help you figure out, and we've created the algorithms to figure out, where to put charging stations in a city.

Speaker B [00:18:11] That's an optimization problem.

Speaker C [00:18:14] So I can go on and on and on. But this isn't something that we have to hope for. We can actually just shift our attention to focus on more important problems that can really justify all the investment we're making in AI, and then I think we'll end up with a much more equitable and better society.
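As Speaker B notes, placing towers, clinics, or chargers is an optimization problem; specifically, a facility-location / maximum-coverage problem. Below is a minimal illustrative sketch of the standard greedy approach; the data, function names, and simple straight-line coverage model are hypothetical, not Sand Technologies' actual algorithms.

```python
# Toy maximum-coverage sketch: pick k facility sites so that as many
# people as possible live within a coverage radius of some chosen site.
# Real planners would add roads, terrain, demand, and cost constraints.
from math import dist

def greedy_facility_placement(homes, candidate_sites, k, radius):
    """homes: list of ((x, y), population); candidate_sites: list of (x, y)."""
    chosen, covered = [], set()
    for _ in range(k):
        best_site, best_gain = None, 0
        for site in candidate_sites:
            # Population newly covered if we build at this site.
            gain = sum(pop for i, (loc, pop) in enumerate(homes)
                       if i not in covered and dist(site, loc) <= radius)
            if gain > best_gain:
                best_site, best_gain = site, gain
        if best_site is None:  # nothing left to cover
            break
        chosen.append(best_site)
        covered |= {i for i, (loc, pop) in enumerate(homes)
                    if dist(best_site, loc) <= radius}
    return chosen

# Tiny example: three clusters of homes, two towers to place.
homes = [((0, 0), 120), ((1, 0), 80), ((10, 10), 200), ((11, 10), 150), ((20, 0), 60)]
sites = [(0, 0), (10, 10), (20, 0)]
print(greedy_facility_placement(homes, sites, k=2, radius=2.0))  # [(10, 10), (0, 0)]
```

Greedy selection is the usual baseline here because maximum coverage is NP-hard, and the greedy choice is guaranteed to reach at least a (1 - 1/e) fraction, about 63%, of the optimal coverage.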
Speaker A [00:18:33] As I said before, let's get a little dark for a moment. Sasha, the existential threat of AI: what are you thinking about that? How are you working on that? You said somebody came over to you just yesterday and asked, should I be terrified? Should we be terrified? And how do we make sure that we do this right?

Speaker B [00:18:53] So maybe this is a bit of a hot take, but I sincerely believe that the existential threat debate is a distraction from the concrete problems we should be talking about. I think a prime example of this was the Bletchley Park summit last year, which brought together 100 countries, world leaders, etcetera. And instead of talking about labor, climate, health, what have you, they were debating existential risks. The Bletchley Park declaration, at the end of the day, was really: we're going to do all that we can in order to stop, essentially, in a nutshell, killer robots. And I was like, come on. You have the most powerful people in the world, and what you talk about isn't labor issues, isn't consent and data, isn't privacy, isn't energy and climate; it's this kind of future, potential, possible risk. I guess I'm a bit of a realist, in the sense that I've always wanted to focus on today's problems, because there are issues that need our attention today. And the way I see it is that if we focus on these problems, tomorrow's issues will be less drastic, will be less big. So as an example: if we have more stringent data protection laws, if we have better accountability when it comes to AI systems... For me, it's actually mind-blowing that currently an AI system can make a prediction, for example in the justice system, or in facial identification, where there have been so many misidentifications, especially of people of color, and there's no accountability. There are people, there are companies that make the software, and at the end of the day it's like, oh, well, I guess it does a little bit badly on people of color. Oops. And then these systems still get used in policing and justice. We need laws that establish accountability: if your system screws up, you have liability for it. And instead of having those conversations and those debates in policy, we're having debates about how we make sure the killer robots don't come wipe us out. I think that on a visceral level, our reptilian brain kicks in: when you talk about killer robots, people get very emotional about it, and it makes it hard to talk about anything else. This has happened to me a lot. I come to a technical AI conference and people are debating existential risk, and I say, well, how about climate change? It's like the frog in the boiling pot of water; it's so much more gradual that people are like, no, no, no, climate change, yeah, whatever. But how about existential risks? And it's gotten so frustrating. It's very tough.

Speaker A [00:21:24] Do you think you need to be a victim in order to become an activist? I say that because on our TIME100 AI list, the youngest individual on the list, at 15, is Francesca Mani. And she was a victim of pornographic deepfake photos of herself and her friend group.
And now she's become an advocate and an activist, speaking out on this issue and this topic so that this does not happen to anyone else, despite the issues and the challenges that she's facing. She's here at this conference with her mother, and she's going to Washington and doing all these incredible things. So I think that's really important.

Speaker B [00:22:01] I think that being a victim can definitely be one of the motivators. But also, when you start talking to people who are creating these systems, like Fred was saying, you realize that they're not magic. You realize that they're not omniscient or omnipotent, and they actually do screw up, in ways that are potentially very harmful. For example, I've talked to artists who have essentially had their complete life's work stolen by these AI systems. Now they can't get a job, because you can just type in their name and have their work be generated automatically. When you talk to them, you realize to what extent the focus should be there, instead of on the broader existential, catastrophic risk debate.

Speaker A [00:22:43] We're here at Dreamforce, and I know that we all saw Marc Benioff's keynote. I want to change topics, because I want to spend time on this one; I think it's really important: Agentforce. You're both working in the area of labor, and workforce enhancement versus replacement. How are you thinking about this issue, now that we've got agents that can really accelerate what humans weren't able to do, or can do it better, and just create more jobs and better jobs? And then, if you have another point to make on that other topic, Fred, go back to it.

Speaker C [00:23:17] I'm very happy to talk about that. The way we look at it: a friend of mine who used to work at Google Brain was telling me that we can think of AI as adding an extra 30 basis points to every human being's IQ. Imagine if all of a sudden we're more intelligent. So we think there are opportunities to augment every single human, and this can drastically help those who have historically not had access to education. For example, one of the ways we're doing this in Africa: we have a huge shortage of doctors. The doctor-patient ratio in the West is 300 patients to a doctor. In Africa, it's 8,000 patients to a doctor. So what we're doing is connecting rural clinics with high-speed bandwidth using SpaceX satellites. With that, you can now equip a nurse or community healthcare worker with AI-enabled devices. Google's Gemini just passed the US medical licensing exam with a score of 96%. Now a nurse or community health worker can do very sophisticated diagnosis, like a doctor or a specialist, in rural Africa. And that need, by the way, exists in rural America too, where they don't have doctors. A similar solution applies: you can augment someone so they can bring much more capability, and it also allows you to have intelligence at the national level, to see what's going on across the whole healthcare system. This is an example of how we can augment our capabilities and suddenly supercharge every single human being on this planet.
Speaker B [00:24:49] I think what struck me most about Marc's keynote was this idea of low-code and no-code solutions. I have the privilege of being technically savvy in terms of AI, but I've realized to what extent that's not the case for a lot of people. Having AI tools that you can use without understanding how to code, without understanding the nuts and bolts of it, is actually really powerful. When he was showing the agents, I thought, well, that's like SQL. And I'm sure there are people who know SQL, but there are also so many people who can't write an SQL query. Having an agent that can query databases for you, without your having to spend months or years learning how to code, is actually huge. That's one of the main reasons I came to work at Hugging Face: I saw the potential of this platform, where you click a button and you have an AI model deployed that can answer questions or whatever. I think that's really the key, because instead of having a small group of very well-paid, very well-educated technical experts, we have more democratic access to AI. And that's so much more powerful, so much more wide-ranging and wide-sweeping in terms of benefits. So when I was watching the talk yesterday, I thought, yeah, it's going to give more people access to this technology who don't have access to it now.
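For a concrete sense of that click-a-button simplicity, here is a minimal sketch using the `pipeline` API from Hugging Face's transformers library; the task and example text are illustrative.

```python
# Minimal sketch: a ready-made question-answering model from the Hugging
# Face Hub via the transformers `pipeline` API. No training and no model
# internals required; the library downloads a default pretrained model.
from transformers import pipeline

qa = pipeline("question-answering")
result = qa(
    question="Where was the panel recorded?",
    context="The TIME100 Talk on AI and a more equitable society "
            "was recorded at Dreamforce in San Francisco.",
)
print(result["answer"])  # e.g. "Dreamforce in San Francisco"
```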
Speaker C [00:26:03] And even for education, right? Every single child can now have a personalized tutor, someone who can give them feedback on assignments and things like that. So we're using this to radically improve access to education in underserved communities around the world.

Speaker B [00:26:18] That's what I did my PhD on, like ten years ago: AI and education. And back then, it wasn't deep neural networks; it was really more rule-based things, a little bit of pattern matching. Now I see the technology that exists today, and if I were doing my PhD now, I'd be using this to actually detect errors, understand the cause of the error, and give exercises or feedback that's really tailored to the exact thing that the learner needs. That's super powerful.

Speaker A [00:26:46] Well, I always like to end on some words of optimism and positivity. So, lightning round, last question. Fred, start with you. What are you most excited about here at Dreamforce?

Speaker C [00:26:59] I'm most excited about the potential for AI to be the greatest equalizer in human history. I think that if we actually point our attention to the right problems and are problem-driven, all of this phenomenal technology that we're learning about could lead to the greatest progress that humanity has ever seen.

Speaker B [00:27:21] I think I'm most excited about having people who don't know much about AI learn what it can and can't do, and really connect with it as a technology and not some abstract concept. For a lot of people, AI is so ephemeral that it's really hard to even understand how they can use it in their business. Yesterday, I was in the line for the bathroom, and at some point I was talking to a woman who works at a nonprofit, and we were talking about AI. I said, you can use it in this way. And she said, I never really thought of that. These kinds of connections, connecting people who have absolutely no prior experience with AI to this potentially life-changing technology: I find that really exciting.

Speaker A [00:27:58] Well, you mentioned the line you were in, and you mentioned connections. What I'm most excited about is being here at Dreamforce in San Francisco. We're talking about agents, we're talking about technology, but look at how many people are here: 45,000, and so many more streaming in virtually. The human-to-human connectivity is so meaningful and matters so much, and nothing, no technology, including AI, will replace that. So I'm really excited about that. Thank you to Salesforce. Thank you to Dreamforce.

Speaker B [00:28:32] Thank you to TIME.

Speaker A [00:28:33] Thank you. Congratulations to our TIME100 honorees. We'll have more TIME Talks later today. So thank you for being here this morning, so early.

Speaker C [00:28:42] Thank you so much.

Speaker B [00:28:46] Dreamforce!