# TIME100 Talks: Distributing the Power of AI

Auto-transcribed by https://aliceapp.ai on Tuesday, 17 Sep 2024. Synced media and text playback available on this page: https://aliceapp.ai/recordings/YPhebCfRg0wE7tBQW83NjJXpODaVs_k2

* Words: 4,477
* Duration: 00:25:51
* Recorded on: Unknown date
* Uploaded on: 2024-09-17 23:27:22 UTC
* At: Unknown location
* Using: Uploaded to aliceapp.ai

## Speakers:

* Speaker A - 83.02%
* Speaker B - 16.98%

----------------------------

Speaker A [00:00:03] This is wild. The silent disco situation. Wow.

Speaker B [00:00:09] Well, hi, everyone. It's so good to see you. Jessica Sibley, I'm the CEO of TIME, and I'm here with Amba Kak. I'm so excited for this conversation. Thank you so much for joining for a TIME100 Talk. First, I want to congratulate you for making the TIME 2024 AI 100 list.

Speaker A [00:00:31] Thank you.

Speaker B [00:00:31] The most influential, important people in AI today. And we have a lot to talk about. Congratulations.

Speaker A [00:00:42] Thank you. Thank you.

Speaker B [00:00:43] And thank you for joining us last night, with over 50% of our honorees there celebrating. Let's talk about the incredible advancements in AI. This technology is so powerful. We've also seen that it's not always equitable, and that the advancements have not necessarily been equally distributed to individuals around the world. And this is what you've been focused on. This is your work. How do we increase access to AI? How do we change that?

Speaker A [00:01:18] Yes.

Speaker B [00:01:19] What are you doing to propel that forward?

Speaker A [00:01:22] All good questions. I'm going to first flip the script a little bit, in that I wouldn't necessarily describe the work we do at AI Now as being about distributing the advancements or the benefits. I think the thing that keeps us up at night and motivates our work is much more the question of value distribution. If we're in the middle of a hype wave, if AI is, as everyone here will surely tell you, the social and digital infrastructure of the future, then the question we should all be thinking about is: who is going to benefit from it? Who is going to lose out? And how are we going to distribute value from this engine of innovation in a more equitable way? So, again, not just who participates as passive consumers, but who gets to build, who gets to profit, and who gets to be part of shaping the horizon for innovation.

And I'm going to also propel us back a step, because I've been walking around here, and the narrative, if I might call it that, the hype around AI, is screaming out at me. So I can't help but take a step back and say: as exciting as the narrative of the eccentric genius who comes up with these technological advancements is, there's an alternative history of what led us to this moment. It's a history of 70-plus years of what AI even means. But let's go back to 2012. Any machine learning engineer will attest that 2012 was a pivotal, landmark moment in the history of AI. And it was landmark because AlexNet, the winning entry in that year's ImageNet competition, could identify a cat a step change better than previous models.
But I'm bringing this up because one of the most interesting things they said in that paper was that the models they used weren't actually new or innovative. In fact, they were using models that were a couple of decades old. What was new was data and compute, and exponentially larger amounts of both. So I think we should see the current moment as the next version of that bigger-is-better paradigm. And the question, again, that needs to keep us up at night is: if the inputs, the parameters that determine performance in this new technological paradigm, are controlled by very few companies, do we have a say in that? Is that a technological paradigm we should all buy into? It's certainly a question policymakers should be asking themselves.

Speaker B [00:04:07] Well, let's hold on that thought, policymakers asking that question. You have been advocating for an FDA-style body to regulate new AI models, talking about not letting anyone get left behind, and releasing policy frameworks for governments on how to tackle the social risks of AI, including understanding the impact of data, and especially of data centers, on climate change. When we think about this topic and this issue, members of the tech industry say that regulation stifles innovation. What is your response to that?

Speaker A [00:04:58] Well, my first response is that if I had a penny or a dollar for every time I heard that regulation will kill innovation, I would be a very rich woman. And I am not, heads up, for anyone who is wondering. The other one I hear almost as often, maybe next most popular on the bingo card of policy lobbying points, is: oh, regulation is going to hurt the small guy. And big tech is almost certainly paying for that person's bill, right? So I want to talk about both of these. The first is to say that I think innovation has become a bit of an empty buzzword. If you say "innovation," and you say "China" and the threat of Chinese competition, then all of us talking about reasonable risks that need to be regulated are supposed to shut up and sit down. So it's being used a little bit as a gag order, even though, in my mind, and I know I'm not alone in this, for all of us who work in the space of policymaking, regulation is about shaping the right kind of innovation: making sure innovation isn't just driven by the incentive structures of a handful of companies in the Bay Area, but responds to a broader public interest. These are tools that impact every aspect of our lives. You mentioned the environment. They're affecting the prospects of future generations. So these are not questions we can leave solely to the private companies driven by their bottom lines. They are absolutely questions of public interest, and that's the work we do. So, to summarize, I absolutely reject the premise that regulation is antithetical to innovation. I think it's essential to get us to the innovation that we need.

Speaker B [00:06:50] You talk a lot about action, action over ideas.
Give this audience some practical information, practical tactics and actions, to make sure that no one gets left behind, that we have equity, we have safety, and all of those reliability issues around AI that are going to be so core to this next future we're heading into.

Speaker A [00:07:19] Yeah, really good question. Even before I get there, it's worth mentioning both poles of the debate we've seen, including the debate on social harms. On one hand, there's the extreme booster version of the argument: AI is going to cure cancer, so it shouldn't be regulated. On the other hand, we've heard that AI is going to lead to catastrophic risks, existential threats; maybe it's going to create bioweapons, tomorrow or in 50 years, we don't know. But what both these poles of the debate miss is a much more grounded conversation about the harms AI is already causing and the ways policymakers can use the tools they have to prevent those harms.

So, to go back to your question, what is our laundry list of what we want to see happen, practically? We divide it into three buckets. The first is: enforce. Enforce the laws that exist on the books. There is no AI exception, no AI-shaped hole in the laws on the books, whether those are fraud-related laws, data privacy, or competition. We have been advocating for much stricter merger review, because we're seeing an uptick not just in killer acquisitions, but also in these unconventional business arrangements that obscure the power dynamics at play. So there's all of that: enforce with all your might, use the tools you have to remind Silicon Valley and this handful of companies that they are not beyond the law.

The second bucket is: regulate. There are actual gaps in the law. You mentioned the FDA. We've been asking a very fundamental question, which is: why are products being released on the market to a broad consumer audience that haven't gone through basic privacy and security testing? There are numerous examples of LLMs routinely leaking personal and sensitive information, and we're just supposed to say, oops, we didn't fix that. Or we have Sam Altman, for example, announcing a text-to-video tool in the middle of a year when we're going to see elections happening all over the world. And you stop and wonder: is there no room for friction before these products go onto the market? So a big area for intervention is making sure the burden is put on companies to prove that their products are safe for wide use. The metaphor we try to use is from the financial sector: these are systemically risky technologies, because they form the foundation that innovation is going to be built on, and errors that creep in at that foundation layer carry the risk of causing a contagion effect. The chair of the SEC, Gary Gensler, said this better than anyone else: if these LLMs are going to be the infrastructure of our financial systems, and there's just a handful of players in the mix, that is a recipe for a financial sector collapse if there are security or other vulnerabilities in these systems. So that's on regulate; lots to do there. And the final one is: build.
So, in the last year, we've seen governments all over the world, with differing motivations, essentially say: look, we can't rely on just private capital. We need to invest public capital here. We need to make sure we are creating an AI ecosystem that is incentivized toward the public interest. I say differing motivations because some of them just want to create their own national champion. Others are more invested because they think there should be a diversity of players who get to innovate, the cost of compute is too high, and so the government has a role to play. But to summarize, that last category is really exciting for us, because it offers the possibility for the public to really be involved in shaping an innovation trajectory that is animated by the question: where will the market not go? What is the kind of innovation the market is not incentivized to produce, and how can we produce it under different conditions?

Speaker B [00:11:27] So, product launches, new launches, without the kind of rigor, testing, and learning. If we think about lessons learned from the social media boom, and what we might call the lack of guardrails that were put in place: this is something you have spoken publicly about. You testified at the Senate hearing on AI and privacy just this past July, and in your testimony you stated: this is the moment for action. Should these actions be regulations or other policy action? And how do we make that work? How do we make that happen? Because it's all moving so fast. How do we at least slow it down and get the right guardrails and the right controls in place, but not stifle the experiments and the wonderful things that AI can do?

Speaker A [00:12:32] No, I love that. And I also really appreciate the embrace of slowness as not necessarily bad. In the policy space, one thing we're always talking with policymakers about is that there is a need for healthy friction. Right? Friction is probably what kept Google from releasing its AI system sooner than it was eventually forced to, in the middle of a hype cycle.

Speaker B [00:12:57] Friction and safety.

Speaker A [00:12:58] Exactly. Because what is being referred to as friction, or as unnecessarily slowing us down, is essentially action against what we feel is a dangerous hype cycle, where the incentives are to go to market as soon as possible and to make your company or your product or your model the foundation of the innovation ecosystem, with a kind of vendor lock-in effect, because these same companies are also the cloud and infrastructure providers. That is to say, we can understand why these companies are in a rush to cannibalize the AI market. For policymakers, or anyone else who isn't motivated by those interests, the call to action right now is multifold. Like I said: enforce existing laws; regulate areas where there is a regulatory vacuum; and build alternative spaces for innovation, so that we can have an imagination for AI R&D that exists outside the aegis of industry labs.
And I'll say one thing, because I think there are a lot of builders and startup folks in the room: one of the most exciting spaces in the policy world right now is actually competition and antitrust. What we're seeing in this sphere is a recognition from competition authorities all over the world that it's a problem that three of the largest cloud providers are also playing at every single point in the AI stack. They have the ability to capitalize on the fact that they are the infrastructure providers, and potentially either shut other people out or shape the winners and losers in the AI market. So the FTC, where I worked, recently put out a study, a Section 6(b) study is the wonky term for it, where they said something smells off with these unconventional business arrangements going on between big tech companies and these so-called AI startups, because it's giving the illusion of competition and disruption, but it seems like we need to pierce the veil. For me, one of the most exciting policy developments I've seen in my career is policymakers saying: look, the law as it stands maybe doesn't consider all of this illegal, but something smells off. And if we don't act now, if we don't raise alarms now, then we'll be looking back at this sector ten years from now and asking: wait, how did it happen that AI became yet another big-tech-controlled ecosystem? How did we get here? So, yeah, I think there's a lot of promise in this moment, and also a lot of willingness to act from policymakers, especially here in the US.

Speaker B [00:15:46] We want to get it right; we need to get it right. And there's collaboration, there's a lot of conversation and collaboration. Yes, we need to get it right. We understand the perils and the promise and the progress, but it's a balance.

Speaker A [00:15:59] Yes.

Speaker B [00:15:59] And I know that your work also really leans on and is informed by research. So can you talk a little bit about why research is so important to guide what's happening right now?

Speaker A [00:16:11] Yes. I think research is really an antidote to what we've seen in the last 18 months to two years in the AI market, which is a lot of unsubstantiated claims with very, very little rigor, on all sides. Right? I know Silicon Valley is no stranger to grandiose narratives without any business viability, but I do think we've reached a new peak in the last two years. We've essentially had a grand narrative replace any conversation about an actual business model. How did we even get here? At the AI Now Institute, we ask questions that, if you took a step back, or if you were an alien looking down at the planet, you'd recognize as very reasonable. For example: this industry is very capital-intensive, particularly because of its compute costs, so where is it going to make money? And are we going to see business models collapse into familiar places, as we've seen in the last decade: business models based on personal data, and more of the same? All of our work is evidence-based, and in some ways that's what makes it disruptive in a moment when mythology has taken the place of fact.
And I say this about both sides of the field. We just heard from Sam Altman this week that the new model might be even more likely to propel these biosecurity risks. And in the last six months, we've had everyone from the House science committee, to the Atomic Scientists association, to a very esteemed panel of biomedical and biochemical engineers come out and say this whole bioweapon, biosecurity stuff is inflated. Because, yes, do LLMs make it easier to access information that could maybe lead to the creation of a bioweapon? Sure. Do they do much more than someone with reasonable, average Google search skills could? Not really. And the reason I bring it up is that this focus on the horizon, on future, existential, speculative risk, really takes the air out of the room from conversations focused on harm that's happening right now. So: yes, killer robots, sure, we can find time to talk about that. But how about we listen to the unions that have been asking for fair wages and a fair value distribution? How about we speak to the nurses' union that feels their work is being devalued in clinical settings? How about we talk to the startups who feel their relationships with cloud providers and with big tech companies are becoming ever more dependent? I bring this up to say, again, that at both ends of the spectrum of this debate there's a lot of narrative, and the way to combat it is by getting extremely concrete.

Speaker B [00:19:06] So how does this all tie to the issue of making sure we don't leave out underserved communities? Explain how that all fits together.

Speaker A [00:19:19] I think the question to ask is: are companies, under the current paradigm, incentivized to cater to underserved communities? And I think the answer tells itself. One example I like to give in this context is not from the generative AI moment, and I think everyone in this space knows it, but AI wasn't invented two years ago. If we look at facial recognition systems over the last decade, we've seen a paradigmatic example of a criterion like performance or efficiency crowding out the more important question: performance, or accuracy, for whom? What we've seen, through research that mostly women-of-color researchers uncovered over the last decade, is that many of these state-of-the-art facial recognition and computer vision models don't work as well on women, the elderly, and people of color, leading to false positives. That can sound like a neutral, objective fact, okay, there are some false positives, until you think of the countless people, literally at this point countless, who because of these false positives have been arrested on wrongful grounds. And you realize there's a real cost to the AI hype cycle. There are some harmless externalities, like your boss demanding that you have an AI strategy or that you find a way to use AI in your workflow. All of that, fine; everyone's excited. I think where it gets really pernicious is when we see AI get integrated into contexts where it matters, where it's life or death, where it's your life chances: housing, education, employment, health.
And then I think we see the dangers of hype filtering down into these very sensitive contexts, where there are real people and real lives on the line. Again, I'm somewhat optimistic that the last ten years of seeing AI in practice have at least gotten policymakers to the point where they say: okay, bias, discrimination, privacy, security. We believe you. We've seen the evidence, and we're ready to act. They're not acting fast enough, but I feel like they're ready to accept the premise, whereas a decade ago they probably were not.

Speaker B [00:21:40] All right. You said optimistic, you're optimistic. We're here at this incredible Dreamforce. I heard from the keynote, from Marc Benioff: one of the most important, if not the most important, Dreamforce ever. A beautiful day in San Francisco. Talk about what you're optimistic about, with AI and with all of the work that you're doing.

Speaker A [00:22:00] Okay, well, first I want to put in a small word in favor of not seeing regulation as a necessarily pessimistic conversation. I love what I do because I think it's about holding companies accountable, so that we do see innovation that works for all of us and not just for a few of us. And I think that is absolutely an optimistic project, even as people try to marginalize our work by saying, oh, you're the critics. No, actually, we're trying to build innovation that serves us all. But to answer your question, what is the thing that most excites me? I'm going to crib from my TIME dinner speech yesterday, because it was the exact same question. Really, my optimism this year came from being at a press conference of the nurses' union and hearing nurses who have devoted their entire lives to caregiving talk about how the most important question on their minds is not "yes AI or no AI." It's: can we have a say in shaping how AI is deployed in the clinical context? Because guess who has expertise on care, and guess who has expertise on how it can best serve the patient? It is the nurses. Right? So I thought it was profoundly exciting to see a community of people reclaim their power and expertise and say: this isn't just a conversation for techies. This is a conversation for all of us.

Speaker B [00:23:23] That's amazing. I love that. And I love that story. Just to tie it into humanity and real people, and not the replacement but the enhancement, and the ability to solve the biggest societal, global challenges.

Speaker A [00:23:38] Yeah.

Speaker B [00:23:39] Well, what are you focused on? This is just the last question. What are you focused on this week? What do you want to see? Who do you want to meet? Let's get a few tactical...

Speaker A [00:23:50] So, I was telling someone, for various reasons, including that I was working with a regulator for almost two years, I haven't been to an industry conference in a long while. And so what I'm most interested in is getting out of my usual echo chamber, which is policymakers and policy people talking about regulation, and speaking to people who are in the process of building. I'm especially interested if you have thoughts on competition and market concentration, and if you think this is affecting your work in not-so-great ways.
Maybe you can't speak to me on the record, but I'd love to hear from you, because part of what we're trying to understand is: what is the lived experience of innovators right now, in the AI ecosystem but also in adjacent sectors being touched by AI? Are they feeling the pinch of concentrated power of the kind we're seeing now, or not? So, yeah, that's my pitch. If you have thoughts, I would love to hear from you. I don't get to meet a crowd like this very often, so...

Speaker B [00:24:50] Lots of entrepreneurs and technology trailblazers and market blazers and all kinds of exciting people. A lot of people here to meet. Amba, congratulations on making the TIME100 AI list of the most important and influential people in AI today. Keep doing all the work that you're doing, and thank you so much for sitting down with me today for this conversation.

Speaker A [00:25:14] This was incredible, and thank you for the honor.

Speaker B [00:25:17] Thank you.

Speaker A [00:25:21] Dreamforce attendees can receive a complimentary special edition of this year's TIME100 AI issue using the QR code on screen. TIME's annual list of the 100 most influential individuals in AI recognizes the people driving the adoption of artificial intelligence forward, asking the hard questions about what comes next, and reshaping the world as we know it.