# TIME100 Talks: Harnessing AI for Humanity's Benefit

Auto-transcribed by https://aliceapp.ai on Wednesday, 18 Sep 2024. Synced media and text playback available on this page: https://aliceapp.ai/recordings/3jl2uVx04mBZf_PDtdS6gTnJaKyYvHXe

* Words : 4,108
* Duration : 00:27:43
* Recorded on : Unknown date
* Uploaded on : 2024-09-18 23:30:19 UTC
* At : Unknown location
* Using : Uploaded to aliceapp.ai

## Speakers:

* Speaker A - 18.84%
* Speaker B - 81.16%

----------------------------

Speaker A [00:00:00] Uh, it is my pleasure to welcome you to the TIME100 Talk, Harnessing AI for Humanity's Benefit. I'm joined by Amandeep Gill, the United Nations Secretary-General's Envoy on Technology. Uh, Time is, uh, owned or co-owned by Marc Benioff, the founder and CEO of Salesforce, as you all know. Uh, we're very pleased to bring our TIME100 Talks series to Dreamforce, and also to be celebrating here the TIME100 AI, the 100 most influential people in AI in 2024, and this gentleman is one of them. Um, so, good afternoon. My question for you to start off this conversation: um, what role does the UN hope to play when it comes to making sure AI does the most good?

Speaker B [00:00:50] Thank you, Sam, for having me, and it's a pleasure to be here at Dreamforce. Uh, the UN's role in the governance of artificial intelligence is largely going to be facilitative, so facilitating the efforts of governments and of the private sector, uh, in making sure that we seize the opportunities, uh, of AI and also minimize its risks and harms. Um, uh, so it's not going to be the central actor in terms of regulation, which is the domain of national governments, the United States, the European Union. But I think it's going to be an essential actor, uh, in making sure that governments understand how this technology is developing.
What are its implications, in making sure that governments and the private sector across the world dialogue, work together, so that there is interoperability and, in general, the quality of AI governance is high. So that's the kind of facilitation that we are speaking of when it comes to the UN's role.

Speaker A [00:02:07] What would it look like if we get AI governance wrong?

Speaker B [00:02:12] I think we will see massive societal impact that our existing public institutions would be hard-pressed to respond to. We, uh, could see misuse in many domains: violation of human rights through, uh, surveillance, AI-driven surveillance, uh, misuse in the international security arena, uh, in violation of international humanitarian law, so conflicts becoming nastier, less amenable to human control, and also, I would say, missed opportunities in the sustainable development space. So if we don't harness AI well, if we don't steer it, guide it in the kind of traditional sense of governance in the right direction, uh, then, you know, we would miss the opportunity to leverage it to solve our pressing problems: food insecurity, lack of universal health coverage, climate change, plastics in the oceans, all of those things.

Speaker A [00:03:16] And not to start on a darker note, but what worries you most?

Speaker B [00:03:21] I think what worries me most is the societal impact, uh, the, um, the impact on relationships with each other, human relationships. So there is a degree of delusion associated with AI. So if we go down that slippery path where we start to cede more and more agency, uh, to machines and agents, uh, then at some stage, you know, human laziness, cognitive laziness comes in, and we delegate more and more of our essential human functions, and that includes connecting with other people, uh, relating to them, to machines and to agents. And that could have implications for our society, you know, the way we treat family, uh, as a core of our societal institutions.
So these cultural, societal implications, which are hard to guess, these second-, third-order, uh, consequences of AI, that worries me the most.

Speaker A [00:04:22] I'd love to pull back a little and talk about your biography and what has led you to this moment. You've worked in arms control and disarmament work, and I'm really fascinated to understand how that work informs what you're doing today when it comes to artificial intelligence.

Speaker B [00:04:39] Yes. So in the area of disarmament, non-proliferation and arms control, you are also dealing with powerful technologies, uh, and you have a certain toolkit: you have treaties, you have confidence-building measures, you have regular dialogues, uh, among, let's say, the nuclear weapons possessors, to try and contain the risk, and also to channel some of these technologies to peaceful uses. In the case of AI, many of our, uh, known tools in that toolkit are irrelevant or are, um, not impactful enough. The technology moves fast, it sits mostly with the private sector, and there are, uh, multiple... there's a multiplicity of actors involved. So you cannot just apply the old tools. So what really fascinates me as a former practitioner of arms control is how we can update our toolkit and how we can play with this kind of interface of soft norms and hard law, some of which is centered on the UN, international human rights treaties, commitments on gender and the environment, and make sure that we can govern AI in an agile, um, responsive manner, without kind of, you know, writing down the law, in a sense, uh, and just capturing a slice in time and not being impactful with a very fast-moving technology.

Speaker A [00:06:08] When we interviewed you for our TIME100 AI issue, you spoke to one of my colleagues about some of the lessons of the Cold War. I'm curious how you think about those lessons today and how they might apply to the current questions around governance.
Speaker B [00:06:22] So during the Cold War, uh, trust was very low, uh, and you needed to find ways, uh, to sit together, uh, and you needed to develop a common vocabulary. Uh, so there was a moment when, uh, Bob McNamara spent, uh, a considerable amount of time talking to the Soviets about his theories about nuclear deterrence. And the Soviets were very skeptical. But through that kind of a dialogue, a common vocabulary developed. Uh, and there was a moment again, uh, at the heart of the Cold War, when the US deliberately leaked technology that kept nuclear weapons safe to the Soviets, because keeping weapons safe was in the mutual interest of the two sides, who were at loggerheads on a number of issues. I don't think we are in a dramatic situation like that, uh, when it comes to AI. But I think broadly, we should take those lessons: that dialogue is important, that correct understanding, a common vocabulary, is important. And that's why, you know, what we've been working on at the UN over the past one and a half years through the advisory body on AI is to kind of, you know, have states, governments, be informed by that kind of sophisticated understanding, have the space and the, uh, avenues for them to dialogue, because it was not easy to construct those avenues for dialogue during the Cold War. So why wait for a crisis like the Cuban missile crisis for us to kind of come up with something like the hotline, which was, uh, put in place at that time between, uh, Kennedy, uh, and Khrushchev? So we need to be more anticipatory today. And that's what we've been doing at the UN: put in place some mechanisms so that, you know, as we navigate this period of low trust, geopolitical tensions, etcetera, we don't kind of misgovern this technology, which, uh, as I said, has enormous potential for good.

Speaker A [00:08:31] Uh, I want to talk a little bit more about the work that you're doing today at the UN. You have a report coming out tomorrow. I'd like to dig into that.
But one of the things that I found most fascinating in that report was the identification that there are 118 countries that are not party to any AI, uh, governance initiative.

Speaker B [00:08:52] Today.

Speaker A [00:08:53] Um, right now we're talking about how do we manage these great powers. But there are so many countries that are not even at the table. Uh, and I'm curious, what are the consequences of their absence from the governance conversation?

Speaker B [00:09:06] This is a very, very important deficit today. It's a representation deficit. There's also a coordination deficit. There's an effectiveness deficit. But I think it's important that when you talk about a technology that can impact all humanity, you have to find some way to involve all of humanity in its governance. Uh, I think it's only fair. We are in San Francisco. This is where, in 1945, the, uh, Charter of the United Nations, uh, was, uh, adopted. So how do we make sure that those 118 countries, uh, find a meaningful venue for participation in AI governance? And that's been one of the focus areas. Um, and tomorrow, when the high-level advisory body on AI launches its report, you will find a set of recommendations, uh, that reinforce inclusion, inclusive governance, but distributed governance. So it's not centralized in the UN. So we are not saying, you know, set up an international agency in the UN dedicated to AI governance, but it's more like, you know, how can we help governments, the private sector, civil society, the tech community get the governance right? How can we use the UN's convening power to bring those excluded countries into, uh, those, you know, as you mentioned, those tables, those, uh, discussions? At the end of the day, the benefits are unevenly distributed. The risks are also going to fall disproportionately on the weakest. So, you know, as they say with taxation, you know, no taxation without representation.
So if we are talking of these benefits and these risks, we better involve the rest of humanity in putting down some guardrails, some facilitators, some enablers, uh, that make it work for all humanity.

Speaker A [00:11:03] I know we're ahead of the report's release tomorrow, but I'm curious, what do you hope that people will take away from it?

Speaker B [00:11:12] I hope people take away, um, the need for anticipatory, wise governance that, (a), involves common scientific understandings; (b), regular dialogues, so that we have interoperability, we have mutual learning; um, (c), the ability, uh, the imperative of focusing on enablers. We hear a lot about AI use, and this festival, uh, um, Dreamforce, um, uh, underlines the importance of, uh, the growing AI use. But how do we put in place the enablers? So I hope people get, uh, an understanding of the need to focus on enablers: talent, data, compute, uh, incentives, the enablers that are required to take AI to the rest of the world, beyond these two and a half geographies of, you, uh, know, the US, China and parts of Europe, to the global majority.

Speaker A [00:12:27] How optimistic are you that that will happen?

Speaker B [00:12:30] I'm very optimistic. We are going to work very hard to make sure that the attention stays on the enablers, uh, and we have a unique opportunity at the Summit of the Future, uh, next week, uh, to take some decisions, uh, so that, you know, uh, public and private sector resources can flow to a, uh, global capacity development effort, so that we can get more people in, democratize the AI, um, innovation, AI development, uh, effort.

Speaker A [00:13:05] So we've talked about who's being left outside the conversation today, this emphasis on enablers. I think there were two other things that I came away from the report with that I wanted to put in front of this audience: one, some, um, of your thinking around funding, and some of your thinking around data.
And so, on the funding front, what role might the UN, or what role would you hope that global bodies would play, when it comes to changing our current model for funding?

Speaker B [00:13:31] Right. So there are the, uh, private sector funding efforts and, you know, uh, they will continue. And I hope more and more countries around the world focus, uh, strategically on these kinds of, uh, uh, funding possibilities for startups, and then scaling up when it comes to moving up the value chain. But I think at the global level, we need a consolidated funding capacity, uh, pooling public and private resources. First, to build capacity, capacity of government officials for governance, smart governance, not just, you know, uh, knee-jerk reactions. Second, access for researchers and innovators, uh, from, uh, you know, underserved countries to data, compute, um, uh, and models. Uh, third, adapting models that are developed elsewhere, uh, to the context that, uh, obtains in certain countries: agriculture, health, etcetera. So those are the areas that can be facilitated by this global funding capacity. And also, frankly, we need more research into the governance itself, the, uh, govtech of AI, uh, ethics, uh, ethical benchmarks, uh, procurement playbooks. So, you know, this kind of pooled funding can contribute, uh, to that. I think we also need to knit together the capacity development efforts that are, uh, popping up across the globe. Uh, so the UN, uh, advisory body has recommended the creation of a capacity development network that knits together these centers, provides them with some kind of a facilitation, uh, knits together their limited compute capacity so that it grows into a kind of a time-share capacity that's more impactful, that is capable of training more powerful models, and that also catalyzes, coming to data, an effort to put together data commons: gold-standard, flagship data sets in cutting-edge areas like agriculture, health, the green transition.
Because frankly, at the end of the day, you know, in certain sectors you have easy access to data. It may be costly, but it's highly curated, uh, it's flowing. But in other areas, you don't have the data sets today. Eight to ten years from now, if you want AI to be really impactful, to make a difference, then you need to start the data effort today. And the advisory body is suggesting, through a global data framework, the creation of these marketplaces for data, where data can be accessed by SMEs, startups; uh, the creation of these liability, you know, these liability arrangements, uh, like templates, which can be used by companies so that they can have cross-border training on, uh, data. Uh, not every SME is able to, you know, write fat checks for access to data. So how can we facilitate the data marketplace for, uh, those companies? So those are the areas that we need to start working on now. We are in the early days of the AI, uh, revolution, so we need to prepare the enablers, the data piece, the talent piece, so that eight to ten years from now, we can really take it to the next level.

Speaker A [00:16:55] One of the things, as someone who loves history, that I like to think about at these moments when we talk about things being unprecedented is, in fact, what are the precedents that can help us navigate through them. Um, and you talked about the Cold War as a moment in time. I'm curious if there are other moments in history or other technological revolutions that you think could give us a better context for how to approach governance today.

Speaker B [00:17:17] Yes. So nuclear is one example. Um, but in the biological, uh, domain, in the chemical domain, we've had, uh, these moments, uh, where, you know, we've learned how to work with the private sector, for instance, at the UN. And there's a learning for the UN. We've also learned how to use hard law treaties.
You know, in the chemical domain there is a very, uh, powerful treaty in place; uh, in the biological domain, less so, but then, you know, how to supplement that with other arrangements like confidence-building measures in the biological domain, because the sector is very complex. So there are many, many lessons we can take away from these three areas. Also from space, uh, which is developing very rapidly. We have some hard law from the past, but then, you know, some arrangements in terms of, uh, collaboration today. So I'm very optimistic that leaders around the world, in the private sector, in governments, in international organizations, would be up to the task. And without copy-pasting from old models, they would be able to use, for instance, our recent, um, experience with climate change, the Intergovernmental Panel on Climate Change. What is the equivalent that works in the AI space, so that we have a common understanding of what's happening with the technology? It's not what someone is telling you, that AI is capable of doing this or that, but it's kind of like the collective wisdom of experts from around the world, which helps us to come out with better policy responses.

Speaker A [00:18:56] Uh, you mentioned climate change. I'm curious how AI can be an enabler, to use the word that you're using, for our sustainable development goals. I think some would be scared that, let's say, when it comes to energy consumption, AI might actually set us backwards.

Speaker B [00:19:11] I think that is an incredibly important issue. We have to make sure that the net impact is positive. Uh, so I hear that, you know, more renewable energy is being used, uh, for AI development, but that's like, you know, load shifting.
Uh, we need to think about the energy footprint, the water footprint, the materials footprint, and be more strategic about what we are going to use large AI models for. And why do we need to develop, train, uh, the same model a million times over around the world? How can we be more collaborative, so that we minimize training runs? How can we be more collaborative across industry sectors, but also across our borders, like this common data effort, common model development? And then, most importantly, how can we use AI to reduce energy consumption in other sectors? Because, you know, I'm afraid that the energy used, uh, for not just training but also inference is going to rise exponentially. So we need to find, uh, an offset in other places. So using AI to optimize air conditioning in buildings, you, uh, know, how trains run, how, uh, power plants, uh, can deliver more, uh, efficiency. So there are a number of areas where we can work on the energy, uh, piece, and I could go on with water and with other aspects. This again, uh, requires cross-domain collaboration, often cross-border collaboration. This requires kind of new incentives, because often there's no money to be made in these areas. So how can we create those incentives? I think that's going to be our work for the next few years.

Speaker A [00:21:00] Are there other areas of the sustainable development goals where you feel like AI is particularly relevant, outside of climate?

Speaker B [00:21:08] So as part of the work of the advisory body, we did a risk scan and we also did an opportunity scan with global experts. So for now, the expert opinion around the world is, uh, that the most impactful areas are going to be, one, accelerating scientific discovery; second, um, general expansion of the digital economy, where the foundations are there; and third, some areas in the SDGs. So people are more pessimistic about the sustainable development goals. Uh, health, education, agriculture, food security, environment.
These are obviously, uh, the priority areas. But, you know, what can happen in the next three to five years seems to be limited unless we really double down on the, uh, enablers. Frankly, uh, I worked for three years before joining the UN in the health space, and I'm very excited about the possibilities of AI use in health. But I also know it's hard work, uh, so we need to get down to that hard work. Uh, and a lot of it is, you know, down to simple things like workflows. What does a nurse do and what does a doctor do in the operating theatre? And how can we create the right use, so that trust is there, not only of the patients, but also of those who are deploying those tools, and also so that, uh, the results are not, like, you know, just in silico. Because you often see beautiful results in theory, in papers, but when you take it down to a hospital... Like, you know, we took one tool to a hospital where the, you know, the signal was very noisy, and it broke down completely. So really making sure that we're not coming out with snake oil, but impactful use cases, so that the trust and the uptake is faster.

Speaker A [00:23:07] I'm curious how, um, an organization like the UN, not, um, known for its agility, is able to keep up with the pace of change with something like AI.

Speaker B [00:23:18] Yes, absolutely. I mean, that's a legitimate question to ask. And my answer to that is: the advisory body, in nine months, produced two impactful reports, and that made a unique contribution.
So, 2,000-plus engagements, uh, with individuals, you know, three in-person meetings, 18 deep dives, two scans, opportunity and risk, you know. So we've set a pace, we've set, uh, an example, and I think, uh, my boss, the Secretary-General, is convinced that, with the Summit of the Future, we have a way to demonstrate that the UN is fit for purpose, and it's ready to work with member states, with other stakeholders, on the challenges that the planet faces, technology governance being one of them. We can do it.

Speaker A [00:24:12] I'm curious how you think about this audience, those at Dreamforce, those who are here, uh, as a constituency in that effort. How do you bring these stakeholders in? And then, as a follow-up, what were you listening for this week to help you do your work?

Speaker B [00:24:28] I think this constituency is the most powerful constituency, uh, that we have in the AI space, not only because it has the talent and the technology, but also because it has the understanding, and it can help governments and other stakeholders understand what the technology can and cannot do. So I think that's the first contribution that this community can make: help governments and others understand what this is. Second, help them collaborate with each other, because you work across borders. So there is a, ah, kind of mistrust, um, today; there are difficulties in terms of geopolitics. So the private sector can provide a safety net in terms of collaboration across borders. So help governments come together and get governance right, so that, you know, their work is facilitated and your work is, uh, facilitated. And the third, and most important: please contribute to this global capacity development effort. Uh, like, you know, Marc Benioff was saying at the keynote yesterday, uh, the Salesforce philosophy of 1-1-1, contributing a certain part of the revenue, employee time, to these kinds of societal challenges.
I think we need to take that to the global level and have the private sector contribute talent, contribute some compute and some models, by open-sourcing those models, to solving these, uh, grand, uh, challenges. And I think this will be the key to the success of the UN's agenda. So my takeaway from this week is, you know, I'm optimistic. This is the message that I brought to this conference and the message that I'll be taking to other conferences. And I, um, am encouraged by the response of the private sector. We need to bring this together.

Speaker A [00:26:23] Is there something that's giving you... um, you said it's hard work, what you're doing. What is it that keeps sustaining that hard work for you? What keeps you in the fight for better governance?

Speaker B [00:26:35] So I'm an engineer. I trained as an engineer, you know, back in the day, wrote code. Uh, AI was more research at that time; it was not, you know, out in applications. Uh, so I've seen the technology's power grow, uh, and I've also seen the kind of gap in terms of the understanding, uh, uh, in the policy-making community. Ministers, prime ministers, presidents, leaders of international organizations: they simply don't understand the technology enough. So this is what really worries me a lot. How can I bridge this gap, uh, by using simple language, by using that shared vocabulary that we talked about earlier, uh, to have policymakers understand this in a nuanced way, so that their actions, the quality of their actions, is better?

Speaker A [00:27:27] Fantastic. Well, thank you very much for this conversation. Thank you all for joining us. Please, I see some of you have them: grab a copy of the TIME100 AI issue. Uh, and I hope you have a wonderful time at Dreamforce.