County Superintendent Of Schools And Your Moderator For Today's Forum

Words: 8,465
Duration: 00:56:42
Recorded on: Fri, 21 Nov 2025 18:33 UTC
Uploaded on: Fri, 21 Nov 2025 18:33 UTC
Transcribed on: Fri, 21 Nov 2025 19:31 UTC
Last edited on: Not yet updated
Speakers: Speaker A - 13.7%
Speaker B - 30.71%
Speaker C - 20.82%
Speaker D - 34.77%
Location: Sierra College - Nevada County Campus, Grass Valley
Using: Alice App
Language: English (US)
Channels: 1
Sample rate: 22050 Hz

Speaker A
[00:00:00] County Superintendent of Schools and your moderator for today's forum. And you must look at the group here and say, where are the 20 somethings here? They should be running this. No, we went to, we went for leadership. We went for knowledge base and experience. So I want to introduce my very good friend who I've known for a long time, Steve Monahan, who's the retired Chief Information Officer for the county of Nevada, who's been awarded CIO of the year.
[00:00:31] Oh, he's got, he's, that's why I put Tech Guru on the, on the announcement. And then next to him is Eric Little. Eric's an attorney who's worked with lots of startup AI companies and is currently actually working with a local AI firm called Ladras that's here in Nevada County. And then finally is Sasha Sidorkin.
[00:00:56] And Sasha is a professor of education at Sac State and known as the Tech Guru. Actually, that's his title at Sacramento State, for AI research and all. So welcome to you all. And let's begin with you, Mr. Monahan. Let's get right into this. So what is AI, what's the workplace going to look like here in the next decade?
[00:01:21] What's going to happen in the American workplace? You keep hearing these things like, oh, it's going to be tremendous job losses. Then I heard last night, oh, there's going to be job increases. What's the workplace going to look like, Steve?
Speaker B
[00:01:33] Great question, Terry. And we don't really know yet, but we know it's going to be profoundly different than it is today. Right. So I teach IT leadership classes and, over the last six months, AI classes to leaders of organizations about what they need to do now to prepare for what's coming and what's already here.
[00:01:56] So I like to put context to it. People are talking about AI. It's all new. And I'm sure Sasha will talk about this as well. AI has been around for 50 years. It's what's been out for the last three years, this generative AI that came out with ChatGPT and those technologies that has made it available to the masses, consumerized AI so that it's available to your average organization in all sorts of products.
[00:02:26] That's what's really changed. And it got really hyped up and now we're starting to see some disillusionment with it. You know, people are like, oh, it's not that good. I put it into Google, I do my search, it's wrong half the time. That's kind of that consumer version of AI, and it's really just the tip of the iceberg of all the AI that's in place and coming down the pike right now.
[00:02:50] So I have this slide I put up at the beginning of my classes, and it's the Internet in 1994. That's where we're at with AI right now. If you think about the Internet in 1994, web pages were like PDF documents, right? There wasn't a lot of functionality there. There was no Amazon, there was no Netflix, there was no streaming, no Spotify, there was no Zoom, there was no e-commerce, there was no banking online, there was no telemedicine, there was no online education.
[00:03:25] And if you look at the last 30 years of what the Internet has done to every aspect of our lives and our society and our commerce and our education and healthcare, that's the impact that AI is going to have on our organizations. But it's not going to take 30 years. It's going to be in five years, 10 years.
[00:03:48] And it's happening at an accelerated rate compared to what the Internet did. Internet took some time, but AI is going to happen quicker. So I don't know if that answered your question.
Speaker A
[00:03:58] Well, do you see it these days as a threat to the average workplace, or are you, as an employer, excited about this opportunity?
Speaker B
[00:04:11] Well, I'm excited about the opportunity and what it can do in augmenting people and enhancing employees' performance and capability. So that's what generative AI is doing. It's helping people. Like, I'm not artistic, but I can create art now, right? I can create a song. It may not be a good song, but I could create it.
[00:04:33] I couldn't do it before at all. So it's enabling people to do things they otherwise couldn't do, and it's delivering general productivity enhancements. You know, manage your calendar better, manage meetings better, write memos better, correct all my spelling errors and my grammar errors for me, those kinds of things.
[00:04:55] AI as a thought partner, critical thinking, challenging your assumptions. So it's helping in those areas and it's just starting now to get into organizations to help with products and services, business processes, how we do workflows. Every major software provider is building AI into their applications. It's in our phones, it's in your Windows computers that you have at home.
[00:05:28] It's going to be in your TV, it's going to be in every piece of software.
Speaker A
[00:05:33] So back to the workplace. We had the conversation when we were chatting about coming to the forum, and you noted you have one son who's going to be an electrician and another who's an undergraduate in computer studies. And you said, gee, I'm really excited about my son going to be an electrician, but, finish the sentence.
Speaker B
[00:05:54] Yeah, but my son who's, who's going into information science is going to have a little more challenging job market. So what we're seeing in big organizations, if you look at Google, if you look at Meta, Microsoft, they're all doing huge layoffs in that programmer development, that expert kind of layer. You know, 2,000 here, 3,000 there.
[00:06:25] We don't have a lot of organizations, especially in Nevada County, that have employees who only really do one function, program development. Right. Nor do we have hundreds of them in that same role that can be replaced by AI, because AI can take over expert work via an expert. So it's very, very effective for AI to take over and let a thousand developers do the work of what 1,500 used to do through its productivity enhancements.
[00:06:58] We don't have a lot of opportunities like that. I don't know of any, because most employees are fragmented. You know, 0.1 of their time is doing this, 0.5 of their time is doing that, 0.2 of their time is doing something else. So I don't see that mass transformation in the average business, especially small business.
Speaker A
[00:07:19] Great. Yeah, thanks. Eric, your thoughts? You deal with companies starting up and pick up the mic there if you don't mind and give us some thoughts about this changing workplace.
Speaker C
[00:07:30] Well, so my profession is being changed dramatically by AI, and I think it will look very different in 15 to 20 years.
Speaker A
[00:07:44] The legal profession.
Speaker C
[00:07:45] The legal profession. There's a study by a Stanford economist that came out a couple of months ago that said entry level jobs in the professions, the fields most affected by AI, are down 13%. There is a writer about AI who's begun a conversation about a post labor economy. I think that what I read suggests that it won't happen quite as quickly as the technologists suggest.
[00:08:28] There seems to be a split of opinion. The technologists say this is going to happen in five years and the economists say no. There are various institutional barriers, human barriers that are going to slow the rate of adoption. But I completely agree with Steve. It's going to be faster than the Internet and way faster than the industrial revolution.
[00:08:57] There are, I think, some people that feel that AI is in a bubble, and we can unpack what might be meant by that. But one of the things that I believe is true, because I see it greatly in my own work, is that AI has become much more useful over the past couple of years.
[00:09:23] And I really don't believe that we've exhausted the sorts of new technologies related to AI that are going to have an impact. For example, on the horizon is
[00:09:39] not just AI, but robotics. And not to raise a controversial name, but Elon Musk is going to make a trillion dollars if he sells enough robots over the next 10 years. And he's pretty good at that sort of thing. And when you marry a robot with AI, you're going to get another technological change which is going to have dramatic effect.
Speaker A
[00:10:11] So let's just talk about your profession for a bit, because we're not all lawyers. So tell us, in your field, what are you going to see change, just relative to the law profession?
Speaker C
[00:10:23] Well, so let me talk first about myself and then about what I think I'm seeing in the larger firms. So I have a background in a couple of areas. I have done a lot of transactional work for technology companies. I've done some financing, and I actually have a background, over 30 years ago, in building expert systems for lawyers.
[00:10:57] What I found about AI in my own practice is that it expands the areas where I feel comfortable practicing, maybe more than just at the margin. So while I had a fairly narrow lane that I normally like to keep in, now I can move more outside of that lane. And in general that makes two things happen.
[00:11:28] One is I can spend less time on a matter, which means that my cost to a client is lower, so I become somewhat more affordable. And secondly, the practice of law is becoming more competitive. There are now a huge number of well funded legal software companies that will automate a variety of different aspects of your practice.
[00:12:03] And they are selling to everybody from small firms to the very large firms. A lot of the large firms, I believe, have millions of documents in their databases, and they are training AI on those documents, so their need for young lawyers who do research work is declining. They will hire less.
[00:12:36] I would not want to be a law school graduate today unless I had serious expertise in some additional domain that would support perhaps a more specialized practice.
Speaker A
[00:12:50] Steve?
Speaker B
[00:12:51] Yeah, that brings up, Eric had mentioned the Stanford study, and that was a Stanford-ADP study, ADP being the payroll company. It came out a month and a half ago, and what it showed was a 13% decline in new hires in the demographic of 22 year olds to 28 year olds. But it wasn't an overall decline in hiring.
[00:13:12] It shifted to an older demographic, the 29 year olds to the 40 year olds. So companies are not hiring the younger people. That goes back to my son. I'm telling him: differentiate. You know, he's picking up a minor in project management, because you can't just have the tech skills. Right. What's that edge going to be?
[00:13:32] So what we're seeing is AI can replace, or replace the need for, a lot of the younger people, but they're hiring older, more well established people, because you still need judgment. AI has no judgment. AI has no empathy. So you still need the older workers. You need a little wisdom, Terry. I'll get back to the workplace.
[00:13:56] So we're seeing that shift, which is going to make it harder. That's why we're seeing that, you know, people, you know, the low hiring rates out of college, you know, the new graduates not being able to find jobs in the workplace. And that's a real problem.
Speaker D
[00:14:14] Yeah.
Speaker A
[00:14:14] Well, let's talk about academia. So Sasha, what's the workplace going to look like in college and high school education and all? Then we'll focus a little bit more on the classroom. So talk to us about the changing workplace, and the changing workplace in academia.
Speaker D
[00:14:37] So AI affects different industries in different ways. That's, I think, important to understand. When we try to actually implement it in business and other organizations, what turns out to be the case is that if you have relatively short workflows, where you can see the final result and can judge whether it's good or not, those are the easiest ones to implement.
[00:15:00] When you have long and complicated workflows, where everybody has to trust the previous person in the chain, then it's a lot more difficult to implement. So a beginner lawyer, or a paralegal, or a beginning coder, those are the vulnerable professions. If you look at customer service, right, you have one phone call, one interaction, and then you fix the problem, then you go to the next one.
[00:15:30] It's the same thing, very easy to replace. So customer service as an industry is really dying, being wiped out, because AI can actually do a better job than most humans at it. Now, in education it's very different, because we are affected directly more than any other industry, because in education we've been moving away from multiple choice tests or oral exam assessments for decades.
[00:15:58] What we use now in education is what we call performance based assessment. So you wrote an essay, you must know something; or you created a project, or you developed something. So you create the product and you show it to us as a student, and then we can judge whether you know anything
[00:16:13] or can do things. So the problem for us is that AI kind of wipes out our most essential tool. You know, it can turn out a college essay pretty easily, and it's going to be better than any student's. And of course, it's not that we're affected because there's some competition somewhere there.
[00:16:35] No, we're affected because that technology takes away some of our most essential tools. And, you know, just to be completely honest, we're a little bit in disarray in higher ed. We didn't expect that to happen. Nobody wanted it to happen. People don't know what to do. We will figure it out. You know, there are solutions to that.
[00:16:56] The solution is that you have to actually ask students to do more complicated things and allow them to use AI. But the complicated things are where you can actually measure the gap between what AI can do and what the actual human input is doing. So the solution is to teach students to supervise AI, to be an executive.
[00:17:20] You know, there is a difference between, like, a manager and an executive. The executive is the one that has a strategy and tells people what to do. So in a way, we're all turning into executives. But you still need to have an overall idea of what needs to be done, what is worth doing, what's the difference between the good outcome and the crappy outcome, and judge it and make a difference.
[00:17:41] So those are more advanced skills that we need to teach students. Skills like where, or whether, you put the Oxford comma somewhere are really becoming completely irrelevant. I would argue that it was always irrelevant. Or like, can you put together a perfectly grammatical English sentence? I mean, some of us spent years trying to master that skill, and suddenly it becomes completely irrelevant.
[00:18:11] The robot can do that. Right. So it's very painful also for people, like, emotionally; there's this reaction, oh, my God, I studied this. And when I tell my colleagues we'll stop assigning college essays, they are ready to kind of crucify me for that. What do you mean, the college essay is learning? Well, no, there's actually a difference between thinking and writing.
[00:18:33] There's a big difference between thinking and writing. We kind of lost that edge and lost that difference. But anyway, I don't want to go into all the details. I just want to say that our industry is affected. We're hurting. We really need help. And the governments don't understand that, because it is very difficult to explain.
[00:18:54] Why is it, why don't you just teach AI? Well, we can teach AI. It's actually easier than teaching writing or English or history, philosophy or social work or education or something like that. That's our problem: not teaching AI, but how do we teach all the subjects that we have? But again,
[00:19:14] we've kind of started to see glimpses of a solution. And I think in a few years we'll figure it out. The colleges are not going to die. People are not going to college to interact with AI. They come to college to have, you know, a nice building, to meet their future, you know, partner in life. AI just makes it a little more obvious that very few people come
[00:19:37] just to be given information. The information is available. You don't need college for information. You need experience, life experience. Education is really relational in its essence. It's not informational, it's relational. So colleges are not dying. They're going to look a little different, not dramatically different, but we'll have different kinds of assignments.
[00:19:58] We have to revise the skills that we actually teach and how we teach them. It will take us a while to work that out.
Speaker A
[00:20:04] So, Sasha, we heard the legal profession's going to be weeded out a bit, and computer studies and all. What other departments do you see being directly affected by AI in higher ed?
Speaker D
[00:20:18] Well, that's not what Eric said. He said that the junior lawyers are going to be wiped out.
Speaker A
[00:20:24] Right, Right.
Speaker D
[00:20:25] But the advanced expertise in law is absolutely irreplaceable. There's no robot even on the horizon, but.
Speaker A
[00:20:33] There's going to be fewer law school students.
Speaker D
[00:20:37] Maybe temporarily.
Speaker A
[00:20:38] Okay.
Speaker D
[00:20:39] But once we figure out how to teach the more advanced skills, once we teach them how to beat the robot, then they'll come back to us.
Speaker C
[00:20:48] I gotta jump in here.
Speaker A
[00:20:49] Go ahead.
Speaker C
[00:20:51] So I only talked briefly about the large law firms.
Speaker A
[00:20:55] But use that mic a little more. No, bring the mic a little closer to you.
Speaker C
[00:21:02] So I think that there are a lot of people out there making judgments about AI who have used the free version or the $20 version of ChatGPT or Claude or some of the other foundational models. But there are two things I want to say about that. One is there's a big difference between the free version and the $200 a month version.
[00:21:33] And the $200 a month version is very impressive. But this is not going to stop. I think if we try to project the future on the basis of, well, it's just going to be more of what we have today, we are selling the future short. These models, these capabilities, are going to get stronger and better.
[00:21:59] And I don't think just because you're an executive means that you're immune to being replaced. I saw a video of Sam Altman saying he's out to have a foundational model take over his job. So I think we have to project into the future as to what the capabilities might be and assess what the impact on different professions will be then.
Speaker A
[00:22:35] Okay, Steve?
Speaker B
[00:22:36] Yeah, I think, you know, we're looking at the short term to 10 year horizon as we lose a number of entry level positions. What I talk about with leaders is: how are you going to do workforce development if you still need wisdom and judgment at the top and your feeder is being decimated? How many of those people are actually going to make it through to the top to be the wisdom and judgment ten years from now? So how are you going to restructure your classifications, your organizational structure, such that
[00:23:15] you can develop people into the leaders we need in the future? Otherwise, you know, they're hollowing out the middle. That's one of the terms they're using: the middle is being hollowed out, because that's where AI right now can do the most replacement.
Speaker A
[00:23:29] Okay, great. Sasha, you want to start?
Speaker D
[00:23:32] Yeah, I just want to comment on Eric's point here. So first of all, don't listen to Sam Altman. He's trying to raise money. Of course he's going to sell you the most hyped up version of the future. So there's a conflict of interest there. Don't listen to them. In fact, if you're watching YouTube or something, the people who come up on top are either saying AI is total crap, it's not going to make any difference, or they're the people who say, oh, it's going to change the world.
[00:24:01] Well, it's neither. I'm sorry, but, you know, life is not a YouTube algorithm. I agree it's a revolutionary technology, but I can clearly see that they've hit a certain ceiling with the large language models. So they're improving. Like, just a couple days ago, Gemini 3 came out. It's awesome, it's wonderful.
[00:24:22] It's not qualitatively different from ChatGPT 4. It's not that different. I mean, it's a little better. So when they promise you a leap, what they forget to tell you is that the plan is: if you just pour more compute into it and have bigger context windows, somehow the models will become amazingly smart.
[00:24:42] And I did try a $200 model as well. I don't think it's that great. I'm just saying, you know, when you try to write, like, a book with it, it will not write a book for you. You kind of have to have the architecture of that book, and you have to pour in ideas before it can.
[00:25:01] But even if the technology doesn't change at all from now on, we still have 20 years of innovation, because how to use it and apply it is really hard to figure out.
Speaker A
[00:25:14] I can tell I don't have to ask many questions.
Speaker B
[00:25:17] I was just saying, remember, we're in 1994 with the Internet right now. So the models are great, they're pretty darn good, and they're just going to get better. And, you know, we haven't talked about compute power, which is driving all of this. The more powerful our computers get, the better AI runs.
[00:25:40] The better AI runs, the more powerful our computers get. You know, so it's going at an exponential rate. So we're in 1994. You know, our current ChatGPT is like a PDF document, right? It's pretty powerful, but it's nothing like it's going to be five years from now. So what I tell leaders is: you don't make plans for today, you make plans for tomorrow.
[00:26:07] So what if ChatGPT was 100% right? In five years it could do all these things that are hyped up right now. How are you going to start changing your organization today to be ready for that? Change your classifications, upskill your workforce, change your organizational design, because you're not going to need the same top down structure five years from now, ten years from now, that you have today.
[00:26:37] But you need to start doing it, because it takes a long time to change an organization. That goes back to what Eric was saying: that's going to be kind of the governor slowing down AI. It takes 5, 10 years to meaningfully change a big organization.
Speaker A
[00:26:54] So let's talk about one aspect of it that everybody talks about: that we're going to lose creative thinking. We will turn to the computer and it will do our creative thinking. Steve?
Speaker B
[00:27:06] I don't think it's gonna. AI is just math, right? It doesn't have creativity built into it. It needs a human for that. What it can do is be a thought partner with you: challenge you, help you think about things, spur that conversation. And that's what we're seeing.
[00:27:31] The real value when we upskill employees in using these tools is: don't just use it like a Google search, use it like a conversation. Right? Go back and forth. I like Claude, I use Claude a lot. I'll write something, and then I'll say: Claude, ask me three more questions about what I just wrote.
[00:27:54] Tell me three things that I got wrong. Ask me four more clarifying questions to make me think bigger about this topic. So you go back and forth with it, and it remembers that context. That's the power of using AI as a partner, not a replacement. That's the term that we're hearing a lot: augment. AI is augmenting our employees to be more productive.
[00:28:19] I can see Sasha wants to say something.
Speaker D
[00:28:21] No.
Speaker A
[00:28:22] Okay, good, good. So that leads us into the conversation, Eric: a lot of the people here are retired, they're out of the workforce. What is AI going to mean for those of us who are not in the workforce? Pick up that one.
Speaker C
[00:28:42] I did not prepare for that question. Or, Sasha?
Speaker A
[00:28:45] Please don't, don't.
Speaker C
[00:28:46] I think it's going to be very entertaining. It's going to be a bumpy ride for those in the workforce over the next 20 years. So, I was an avid sci-fi reader when I was in my teens and twenties. I never thought I'd live to see the day where there was a robot that would pass a Turing test.
[00:29:17] I feel privileged to be able to be a spectator in what's going on. And so, as I said, I think it's going to be enormously interesting. It's going to have profound impacts on society, both what we see directly in terms of workforce changes, and, if a post labor economy is what comes into being, that is going to change many, many other things about society, about our culture. And it is, I think, enjoyable to sit out there and try to think through what those will be.
Speaker A
[00:30:17] Let's turn to Sasha.
Speaker C
[00:30:18] Yeah.
Speaker A
[00:30:19] What's it going to look like, Sasha, for many of us in this audience? I think your audio's on.
Speaker D
[00:30:25] The big question is what's the overall impact on the economy? We haven't seen the productivity growth yet from AI, but it will definitely happen. And the big question is how much. So if we increase our productivity economy wise, even by 1%, that means that your Social Security and your pension are probably going to be okay and the dollar is still going to be more or less a dollar.
[00:30:49] So if we fail to do that, then of course you know where the long term trend is going. The population is aging; there are fewer people supporting the economy. So we have trouble down the road. So that might be kind of a saving grace. And actually, productivity hasn't been growing very fast in the last 30 years.
[00:31:10] I don't know if you know that or not. We're not producing a lot more with the same people. So AI has a chance maybe to actually give a little boost to the economy. Now, the post labor economy is an interesting concept, and it's been studied for a long time.
[00:31:28] We don't know yet. It may or may not happen, but it would really have a huge effect on us if, say, 50% of people cannot find work because everything is done by robots. That would mean a dramatic cataclysm for us. We don't know how to live without working.
[00:31:46] We don't respect people who don't work; we suppress them, we put them in miserable conditions. Anyway, it's going to be a huge cultural change, which I'm not sure we can actually go through. The best hope is that it will happen slowly and gradually, and then eventually, you know, somehow people will have more leisure time, less work.
[00:32:08] But none of us are equipped yet for that. So if a sudden post labor economy happens tomorrow, we're in for a social catastrophe, more or less.
Speaker A
[00:32:17] Steve?
Speaker B
[00:32:18] Yeah. I think short term, and you and I had a conversation about this: my 88 year old father moved in with us, and I think about the impacts of AI on him. Technology always has a dual purpose, right? You have the good, you have the bad. So I worry more about that demographic, with the sophistication AI is bringing to scams.
[00:32:44] You know, phishing emails are no longer broken English. It's no longer a prince from Nigeria. It's your nephew, with an audio or video clip that sounds and looks just like him, saying, hey, I'm trapped, I'm broken down, send me some money. And the sophistication of the "this is your bank" messages trying to get information from you.
[00:33:08] I mean, it's just gone up a thousand percent. So that is my biggest concern with that demographic. So we really have to think, within organizations, about how we do checks and balances. No longer can you just rely on somebody calling you on the phone and it's like, hey Terry, how you doing?
[00:33:29] This is Steve. Is it really Steve? How are you going to know that? I could Zoom you. How would you really know? And that is going to be a big impact. It's that negative side, that dual purpose. Every technology since the dawn of time has had a dual purpose, and AI certainly has that, because all the bad guys are pouring billions into using AI to do their line of work.
Speaker A
[00:33:57] Wow, Steve, I mean, go ahead, Eric, with your mic.
Speaker C
[00:34:00] Sorry, to go back to that earlier question, how does it affect you as a retiree? I'd focus on what Steve said and just mention that the scams are going to get a lot better, and retirees are always a target for scams. So in terms of immediate impact, that might change things. I would be very, very cautious about your interactions through the Internet, because I see, you know, some pretty good scams already, many of them coming from other law firms or spoofing other law firms.
[00:34:47] So, yes, be cautious.
Speaker A
[00:34:51] Sasha, you agree?
Speaker D
[00:34:53] Yeah, totally agree. I think it's all fair. I'll just go to the bright side: AI is also an accommodation. Like, you know, 20% of the population is dyslexic. So if you use AI, it kind of makes that problem irrelevant. It doesn't solve it. I mean, they're still dyslexic, as they've always been. But like your eyeglasses will fix your, you know, poor eyesight, and functionally you're the same.
[00:35:18] Right. So if you wear them. It's the same with dyslexics. There's also very interesting experimentation with people with cognitive decline, like if you have any relatives with Alzheimer's. People lose the ability to write, or sometimes to speak clearly. There are very interesting, promising applications there:
[00:35:37] how you can turn written speech into oral and then back again. So AI as accommodation, I think, is underappreciated, although the disability rights community is working actively on that as well.
Speaker A
[00:35:51] So in 1994, Steve, you and I were trying to figure out how to put computers into schools and things like that, and government didn't know how to react whatsoever. And here we are 30 years later with this new technology, and government right now is sort of totally hands off. Government seems to be always behind the eight ball relative to regulation.
[00:36:17] How much regulation is needed here? How much does government need to get involved, or does it need a hands off approach to this?
Speaker B
[00:36:25] Well, so my friend Eric and I had this conversation over coffee a couple of weeks ago. Yeah. If you read a lot of the literature about AI, some of the advocacy groups for disadvantaged communities are really concerned with AI, because they think AI is going to lead to redlining and enhance the disadvantages those populations face, because it's going to be accessible to the more affluent, the more educated, and it's going to amplify some of these social issues that we're already dealing with.
[00:37:10] So from that perspective, they want to see some intervention, some regulation. We're seeing a lot in California. I think there were four laws or legislative bills proposed; they're still in the legislature, and they're being really pushed by the labor unions, because they don't want AI to make decisions on people's employment and livelihood by itself.
[00:37:35] There was a law that Governor Newsom vetoed a month and a half ago that would require employers to give a 30 day notice to an employee if AI made a decision that impacted them. A lot of this is really new and you know, it's the unintended consequences and I think that's why the governor vetoed that one bill.
[00:37:56] But, you know, it's that dual purpose. What was the one we talked about? ChatGPT being sued, not by a lot of people, but by families of kids that did self harm, because they had this interactive relationship and the AI led to them harming themselves.
[00:38:23] And there was legislation passed around that. So it's, again, that dual purpose.
Speaker D
[00:38:29] There can be good regulation.
Speaker A
[00:38:30] We have the federal government, Eric, sort of saying total hands off nationwide on AI. Where are we? You're in the legal business. What's needed in terms of being proactive in regulating AI? With the mic, Eric, with the mic. Sorry.
Speaker C
[00:38:54] So I have a slightly different take on all of this. I heard somebody recently compare what's happening here with AI to the industrial revolution on steroids. The interesting thing about the end of the industrial revolution is that there was a lot of government regulation. There were railroad trusts, there were oil trusts, and the government came in with the Sherman Act and the Clayton Antitrust Act.
[00:39:37] And out of a concern about the acquisition of economic and political power, they broke the trusts up. My biggest concern about AI is that we wind up in a situation where there is one AI rather than a diversity of AI. I think that's potentially the very worst outcome that could occur here. And we already have the tools to deal with that.
[00:40:09] We don't need new regulation. In fact, my concern here is that if we have 50 different sets of regulation from 50 different groups of politicians, all with slightly different concerns from their constituents, what we will do is raise the cost of AI to a degree where even these companies that are getting enormous funding can't really stay competitive.
[00:40:41] There will be huge barriers to entry, and we will wind up with a handful of AIs. If there are issues here, our solution is going to be a diversity of AI, not over regulation. It's a knee jerk reaction of politicians, when their constituents are concerned about something, to adopt a regulation. But in many, many cases, and this goes far beyond AI, there are in fact 200, 250 years of regulation in the United States, and there are well trod paths for how you handle those problems.
[00:41:27] If ChatGPT is giving bad advice to teenagers and there's harm resulting from that, then there are ways to sue OpenAI or Anthropic. There are regulations in place on discrimination. A multitude of new regulation is going to lead to worse outcomes rather than better.
Speaker D
[00:42:00] I just want to support what Eric just said. I agree with that. An additional point is that, you know, I got a call from the state senate, or one of the staffers. They said, well, we're considering this bill where companies will have to provide us the protocols of their safety testing every year.
[00:42:18] And I asked them: who is going to read them? You don't have anyone who has that level of expertise in the entire state government of California. So I think the attempt to regulate something you really don't understand is a terrible idea. They tried to do that with the Internet.
[00:42:35] If you're old enough to remember, look at the Congress of the United States: most of these people didn't have a Facebook account or anything. They were trying to regulate something they had no idea about. With AI, it's even more difficult, because even people who work in that very industry don't know what's going to happen.
[00:42:52] Like, say, 10 years ago we had a lot of concern about, you know, the rise of the machines that would replace us. I don't know if you've noticed, but those concerns kind of died down in the last couple of years, because there's no empirical evidence of anything like that happening.
[00:43:07] So we may have some disaster coming from AI, but we don't know where it's coming from. Like with deepfakes: we kind of discussed that people were very concerned about deepfakes. I have yet to see one really good deepfake. Show it to me if you've seen it. Usually you can kind of tell, you know, six fingers or something, or unnatural movement.
[00:43:29] So if you're not a child, you will tell the difference. So, I mean, is it a big risk or not? The problem is that we can't predict the future, and we don't even have a methodology of risk assessment for those technologies yet. For example, if quantum computing comes on board,
[00:43:49] none of us have any idea what it's going to do with AI on it. So there's a lot of speculation. And again, if you want to get a lot of points on YouTube, you will either exaggerate the risks or completely dismiss them. I'm sorry to say it, but the truth is we don't know.
[00:44:07] And the last small point, about the kids who commit suicide because they talk to AI: I'm sorry, it's not the popular opinion, but suicide rates among teenagers have always been high. Without AI, they would have been obsessed with something else. They would have had unhealthy relationships with peers, with imaginary friends, with stars or celebrities, and all of that.
[00:44:29] So just because the kid had a conversation with AI and committed suicide does not make it the cause of that event. There was probably mental illness, probably instability before that. There are many factors that could cause it. And what people forget to mention is that one of the most common uses of AI is for psychotherapy.
[00:44:49] People who couldn't afford it, or were too ashamed to go to a therapist, now get some sort of advice. We don't know whether it's really great advice, but we know that people use it and at least find some value in it. And there are some studies that show it's actually not that bad. I mean,
[00:45:06] it's never as good as a good human therapist, but it's better than a stupid human therapist.
Speaker A
[00:45:15] Nothing like being frank about it, Sasha. So, Steve, you mentioned one of your AI favorites. What I'm asking all three of you is: what are the things that you do with AI, and what sites do you use? Since we're all neophytes out here and you think about this all the time, what sites do you use? Steve?
Speaker B
[00:45:37] My favorite site, the only one I pay for, is Claude.
Speaker A
[00:45:42] And how's that spelled?
Speaker B
[00:45:44] C-L-A-U-D-E. Claude, from Anthropic. And it's good.
Speaker A
[00:45:50] How much do you pay for that?
Speaker B
[00:45:52] I don't know, 20 bucks.
Speaker A
[00:45:54] 20 bucks a month?
Speaker C
[00:45:55] Yeah.
Speaker A
[00:45:55] Okay.
Speaker B
[00:45:56] And I use it a lot for research, for figuring out, you know, like when I've got to do a course, what are the latest, greatest thinking and research and papers and references. I use it a lot for that. I'm a horrible speller, I'm a little dyslexic, so
[00:46:16] the editing helps me a lot. Proofreading helps me a lot. I use it as a thought partner: I'm thinking about this, what did I miss? You know, what are the three weak points in this article I just wrote? Stuff like that.
Speaker A
[00:46:34] Great. Eric, again with the microphone there.
Speaker C
[00:46:37] Sorry. So I spend over $600 a month on AI. I have a $200 subscription to the most advanced version of ChatGPT. I pay a hundred dollars a month to Anthropic, I pay $20 a month for Grok, and I pay $300 to a legal software AI company. I overpay. But what do I use it for?
[00:47:11] I could not agree more with Steve that the interaction with AI is often a conversation. You get more out of it. The AIs have started to follow up on the conversation: I ask about something, and then it flags three different things, places I might want to go in terms of another question.
[00:47:40] And I remember when ChatGPT 5 first came out and I got an advanced version of it, I had like a 45 minute conversation with it about starting a new business, and it was extraordinarily good. But I don't use it just for legal stuff. I use it for cooking recipes. I use it for many, many different things.
Speaker A
[00:48:09] So fabulous, fabulous stuff.
Speaker C
[00:48:12] It's great.
Speaker A
[00:48:13] Eric. Sasha?
Speaker D
[00:48:15] Well, yeah. So, what do I pay for? I pay for Claude. Claude is essential for writing. I mean, it's a way better writer than any of the other AIs. But I also have ChatGPT, actually with two different paid accounts, because my school pays for one, and it can do more things than Claude.
[00:48:36] So, like, if you want to build a custom bot, Claude is not as convenient for that. But the differences are not huge, I would say. If you really want to have a serious conversation, I strongly recommend Claude. It's just a more human-like response; the rest of them are pretty alien, kind of non-human. But, as Eric said, I use it for short answers, for everything, every step of the day, every day.
[00:49:07] I use it many times a day on things: writing a long email to somebody, turning, say, a recording of this conversation into maybe a blog post, that kind of stuff. There are probably a few hundred uses that I could list.
Speaker A
[00:49:27] Great. Well, we have a few minutes for some questions, so let's make it specific. Yes, the trustee here from Sierra College.
Speaker B
[00:49:37] Hi. I appreciate all of your comments. One topic you haven't touched on, that I've been asked about multiple times, is the impact on the environment.
Speaker A
[00:49:47] Impact on the environment? Yes. Electric rates are going up, much of it due to PG&E's demand from AI. So any thoughts on environmental issues relative to AI?
Speaker B
[00:50:02] Yeah, it's big right now, and I think it's big because we're still using yesterday's technology, chips, and data center models to drive this new need of AI. So you're seeing these huge data centers and billions and billions invested by these big companies to build them, you know, even to the point where they're starting to reopen Three Mile Island.
[00:50:25] Right. And they're talking about other portable, smaller scale nuclear reactors. It's huge. And I think back to the professor's comment on quantum. Until we get to quantum and we have something more efficient, that's going to be a big impact.
Speaker A
[00:50:43] Anything else? How many of you?
Speaker C
[00:50:46] Well, I would add that a lot of this money on developing new power sources and the new data centers is being spent directly by the foundational model companies. So it's not all coming out of the taxpayer's pocketbook. And of course, being a step ahead of everybody else, Elon Musk really has the answer and the capability, which is to put all this out in outer space, where there is free energy and it doesn't have a significant environmental impact.
Speaker A
[00:51:23] Wow.
Speaker C
[00:51:24] He's talked about doing it.
Speaker A
[00:51:26] Sasha.
Speaker D
[00:51:26] Well, for fairness' sake, you need to compare it to other things. Like, if you skipped a shower this morning, you saved more energy and water than maybe your entire week of AI use. So per minute of use, it's a lot less. It's like 10% of what you spend by watching color TV.
[00:51:45] So if you want to cut somewhere, why start with this one? Maybe start with, you know, if you drove here, you used a lot of energy and water and probably polluted a lot more than whatever AI does. So if you calculate per minute, per user, it's really not that high.
[00:52:04] And actually the number is going down constantly, because, you know, two years ago it was, I don't know, 200 milliliters of water, and now it's 30 milliliters of water. So all of these numbers are going down, and when you weigh it, you have to ask: what are the benefits, and what does the cost analysis look like?
[00:52:22] However you do it, just don't forget the benefits. Okay.
Speaker A
[00:52:25] Other questions? Yes, sir.
Speaker C
[00:52:28] What impact do you think AI will have on the entire medical profession and medical care?
Speaker A
[00:52:33] Impact on the medical profession? Anyone?
Speaker D
[00:52:37] It's already having an impact. I heard from a Kaiser executive: you don't know this, but we save hundreds of lives already every year. So they've actually been early to the game, before this whole hoopla. In certain areas, like X-ray diagnostics, they're way better than humans.
[00:52:56] So they're very active, and the effects have been very positive so far.
Speaker A
[00:53:00] Jim, ask your question about bias, if you don't mind. I think that's an interesting one.
Speaker C
[00:53:04] Yeah. The experience I had was in testing it. I asked ChatGPT about myself and what I did during a certain decade of my life, and by the time it was done, I thought I should have gotten a Nobel Prize. I've noticed that in almost all the things I review about companies, schools, whatever, there's a positive bias, probably to motivate me to use it more.
Speaker A
[00:53:33] Is that a fair statement that there's a positive bias on AI?
Speaker D
[00:53:39] It's sycophantic, yeah. If it says you're great, don't ever believe it. But I have to say that Claude actually has less of a problem than ChatGPT. Claude is a lot more critical. If you ask it for a fair opinion, it will give it. Yeah.
Speaker B
[00:53:52] And data is the foundation of AI, right? AI only acts on the data that's available. So what's in the data? I mean, mostly positive type information on these companies.
Speaker A
[00:54:04] Sir, back in the green, back there.
Speaker C
[00:54:09] One of my greatest concerns is that AI is going to be taken over by one, two or three mega corporations. What safeguards do you think could be taken to protect people from that happening? Because if it follows the Internet model, that's what it's bound for.
Speaker A
[00:54:24] So it seems like there's a race: there's a race in the stock market, there's a race out there in terms of being the only one. I think you sort of talked about this a bit, and that's one of your great concerns, all of you: that we need multiple AI platforms other than, you know, Apple's or Microsoft's, etc.
[00:54:45] Is that a fair statement?
Speaker B
[00:54:46] Yeah. And I, you know, support open source AI as well, which kind of democratizes the technology so more players can be in that. But again, there's a pro and a con to that. The only real company that's really pushing that is DeepSeek, right, from China, which has concerns in being a Chinese product.
[00:55:11] But they're the only ones really pushing an open source platform. Open source means that it's free for other companies to go ahead and take that technology and use it and modify it themselves. So I think, back to Eric, we need a very robust, competitive marketplace with multiple players in it, and we don't have that.
Speaker A
[00:55:29] So let me ask a final question, sorry, Eric, because we are out of time. Is the United States leading this? And where is our chief competitor? Is it China? And what does that competition look like? Sasha, you're probably out there more than the rest of us.
Speaker D
[00:55:45] The United States is leading in AI and should keep leading it, and will keep
Speaker A
[00:55:50] Leading it, in your eyes.
Speaker D
[00:55:51] Should keep leading. I totally agree with Eric. I think over regulation may kill it. But if we stop doing it, if California stops doing it, Texas won't. If the whole country stops or slows down, China won't. And China is still catching up; they still have a lot of things to do before they catch up with us.
Speaker A
[00:56:12] Any final thoughts before we unleash the audience to come up to you and ask you their specific question? Anything else from all of you?
Speaker B
[00:56:22] It's an exciting time. I mean, for people involved in this, it's an exciting time. But again, I always try to put it in context: we're just at the beginning of this. Right. It's not how great it is today, it's how good it's going to be tomorrow.
Speaker A
[00:56:36] Well, this hour has been phenomenal and I'm sure the audience would like to tell you that.
Speaker A
[00:00:00] County Superintendent of Schools and your moderator for today's forum. And you must look at the group here and say, where are the 20 somethings here? They should be running this. No, we went to, we went for leadership. We went for knowledge base and experience. So I want to introduce my very good friend who I've known for a long time, Steve Monahan, who's the retired Chief Information Officer for the county of Nevada, who's been awarded CIO of the year.
[00:00:31] Oh, he's got, he's, that's why I put Tech Guru on the, on the announcement. And then next to him is Eric Little. Eric's an attorney who's worked with lots of startup AI companies and is currently actually working with a company locally called Ladras that's here in Nevada County. AI firm. And then finally is Sasha Sidorkin.
[00:00:56] And Sasha is a professor of education for SAC State and known as the Tech Guru. Actually that's his title for Sacramento State and AI research and all. So welcome to you all. And let's begin with you, Mr. Monahan. Let's get right into this. So what is AI, the workplace going to look like here in the, in the next decade?
[00:01:21] What's going to happen in the American workplace? You keep hearing these things like, oh, it's going to be tremendous job losses. Then I heard last night, oh, there's going to be job increases. What's the workplace going to look like, Steve?
Speaker B
[00:01:33] Great question, Terry. And we don't really know yet, but we know it's going to be profoundly different than it is today. Right. So I teach IT leadership classes and over the last six month, AI classes to leaders of organizations about what they need to do now to prepare for what's coming and what's already here.
[00:01:56] So I like to put context to it. People are talking about AI. It's all new. And I'm sure Sasha will talk about this as well. AI has been around for 50 years. It's what's been out for the last three years, this generative AI that came out with ChatGPT and those technologies that has made it available to the masses, consumerized AI so that it's available to your average organization in all sorts of products.
[00:02:26] That's what's really changed. And it got really hyped up and now we're starting to see some disillusionment with it. You know, people are like, oh, it's not that good. I put it into Google, I do my search, it's wrong half the time. That's kind of that consumer version of AI and It's really just the tip of the iceberg of all the AI that's in place and it's coming down the pike right now.
[00:02:50] So I have this slide I put up in the beginning of my classes and it's the Internet in 1994. That's where we're at with AI right now. If you think about the Internet in 1994, web pages were like PDF documents, right? There wasn't a lot of functionality there. There was no Amazon, there was no Netflix, there was no streaming Spotify, there was no Zoom, there was no E commerce, there was no banking online, there was no telemedicine, there was no online education.
[00:03:25] And if you look at the last 30 years of what the Internet has done to every aspect of our lives and our society and our commerce and our education and healthcare, that's the impact that AI is going to have on our organizations. But it's not going to take 30 years. It's going to be in five years, 10 years.
[00:03:48] And it's happening at an accelerated rate compared to what the Internet did. Internet took some time, but AI is going to happen quicker. So I don't know if that answered your question.
Speaker A
[00:03:58] Well, do you see it as a, in the average workplace, as a threat these days to the average workplace or you as an employer excited about this opportunity?
Speaker B
[00:04:11] Well, I'm excited about the opportunity and what it can do in augmenting people and enhancing employees performance and capability. So that's what generative AI is doing. It's helping people who like, I'm not artistic, but I can create art now, right? I can create a song. It may not be a good song, but I could create it.
[00:04:33] I couldn't do it before at all. So it's enabling people to do things they otherwise couldn't do and it's doing general productivity enhancements. You know, manage your calendar better, manage meetings better, manage, you know, write memos better, you know, correct all my spelling errors, my grammar errors for me, those kind of things.
[00:04:55] AI as a thought partner, critical thinking, challenging your assumptions. So it's helping in those areas and it's just starting now to get into organizations to help with products and services, business processes, how we do workflows. Every major software provider is building AI into their applications. It's in our phones, it's in your Windows computers that you have at home.
[00:05:28] It's going to be in your tv, it's going to be in every piece of software.
Speaker A
[00:05:33] So back to the workplace. We had the conversation when we were chatting about coming to the forum and you noted you have one son who's going to be an electrician and another who's an undergraduate in computer studies. And you said, gee, I'm really excited about my son in going to be an electrician, but finish the sentence.
Speaker B
[00:05:54] Yeah, but my son who's, who's going into information science is going to have a little more challenging job market. So what we're seeing in big organizations, if you look at Google, if you look at Meta, Microsoft, they're all doing huge layoffs in that programmer development, that expert kind of layer. You know, 2,000 here, 3,000 there.
[00:06:25] We don't have a lot of organizations, especially in Nevada county that have first employees that only really do one function, program development. Right. Or do we have hundreds of them in that same that can be replaced by AI because AI can take over expert via an expert. So it's very, very effective for AI to take over and let a thousand developers do the work of what 1500 used to do through its productivity enhancements.
[00:06:58] We don't have a lot of opportunities like that. I don't know of any because most employees are fragmented. You know, point one of their time is doing that is zero point five of their time is doing that is point two of their time is doing that. So you can't, I don't see that mass transformation in the average business, especially small business.
Speaker A
[00:07:19] Great. Yeah, thanks. Eric, your thoughts? You deal with companies starting up and pick up the mic there if you don't mind and give us some thoughts about this changing workplace.
Speaker C
[00:07:30] Well, so my profession is being changed dramatically by AI and I think it will be, it will look very different in 15 to 20 years.
Speaker A
[00:07:44] The legal profession.
Speaker C
[00:07:45] The legal profession. There's a study by a Stanford economist that came out a couple of months ago that said an entry level jobs in the professions, the fields most affected by AI are down 13%. There is a writer about AI who's begun a conversation about a post labor economy. I think that what I read suggests that that won't happen quite as quickly as the technologists suggest.
[00:08:28] There seems to be a split of opinion. The technologists say this is going to happen in five years, and the economists say no, there are various institutional barriers, human barriers, that are going to slow the rate of adoption. But I completely agree with Steve. It's going to be faster than the Internet and way faster than the industrial revolution.
[00:08:57] There's, I think, some people that feel that AI is in a bubble, and we can unpack what might be meant by that. But one of the things that I believe is true, because I see it clearly in my own work, is that AI has become much more useful over the past couple of years.
[00:09:23] And I don't believe that we've exhausted the sorts of new technologies related to AI that are going to have an impact. For example, on the horizon is
[00:09:39] not just AI, but robotics. And not to raise a controversial name, but Elon Musk is going to make a trillion dollars if he sells enough robots over the next 10 years. And he's pretty good at that sort of thing. And when you marry a robot with AI, you're going to get another technological change which is going to have a dramatic effect.
Speaker A
[00:10:11] So let's just talk about your profession for a bit, because we're not all lawyers. So tell us, in your field, what are you going to see change, just relative to the law profession?
Speaker C
[00:10:23] Well, so let me talk first about myself and then about what I think I'm seeing in the larger firms. So I have a background in a couple of areas. I have done a lot of transactional work for technology companies, I've done some financing, and I actually have a background, over 30 years ago, in building expert systems for lawyers.
[00:10:57] What I found about AI in my own practice is that it expands the areas where I feel comfortable practicing, maybe more than just at the margin. So while I had a fairly narrow lane that I normally liked to keep in, now I can move more outside of that lane. And in general that makes two things happen.
[00:11:28] One is I can spend less time on a matter, which means that my cost to a client is lower, so I become somewhat more affordable. And secondly, the practice of law is becoming more competitive. There are now a huge number of well-funded legal software companies that will automate a variety of different aspects of your practice.
[00:12:03] And they are selling to everybody from small firms to the very large firms. A lot of the large firms, I believe, have millions of documents in their databases, and they are training AI on those documents, so their need for young lawyers who do research is declining. They will hire fewer of them.
[00:12:36] I would not want to be a law school graduate today unless I had serious expertise in some additional domain that would support perhaps a more specialized practice.
Speaker A
[00:12:50] Steve?
Speaker B
[00:12:51] Yeah, that brings up. Eric had mentioned the Stanford study, and that was a Stanford-ADP study, with ADP the payroll company. It came out a month and a half ago, and what it showed was a 13% decline in new hires in the demographic of 22-year-olds to 28-year-olds. But it wasn't an overall decline in hiring.
[00:13:12] It shifted to an older demographic, the 29-year-olds to the 40-year-olds. So companies are not hiring the younger people. That goes back to my son. I'm telling them: differentiate. You know, he's picking up a minor in project management, because you can't just have the tech skills, right? You have to ask: what's that edge going to be?
[00:13:32] So what we're seeing is AI can replace, or replace the need for, a lot of the younger people, but companies are hiring older, more well-established people, because you still need judgment. AI has no judgment. AI has no empathy. So you still need the older workers. You need a little wisdom, Terry. I'll get back in the workplace.
[00:13:56] So we're seeing that shift, which is going to make it harder. That's why we're seeing the low hiring rates out of college, you know, the new graduates not being able to find jobs in the workplace. And that's a real problem.
Speaker D
[00:14:14] Yeah.
Speaker A
[00:14:14] Well, let's talk about academia. So Sasha, what's the workplace going to look like in college and high school education and all? Then we'll focus a little bit more on the classroom. So talk to us about the changing workplace, and the changing workplace in academia.
Speaker D
[00:14:37] So AI affects different industries in different ways, and that's important to understand. When we try to actually implement it in business and other organizations, what turns out to be the case is that relatively short workflows, where you can see the final result and can judge whether it's good or not, are the easiest ones to automate.
[00:15:00] When you have long and complicated workflows, where everybody has to trust the previous person in the chain, then it's a lot more difficult to implement. So a beginner lawyer, a paralegal, a beginning coder, those are the vulnerable professions. If you look at customer service, right, customer service, you have one phone call, one interaction, you fix the problem, then you go to the next one.
[00:15:30] It's the same thing every time, very easy to replace. So customer service as an industry is really dying, being wiped out, because AI can actually do a better job than most humans at it. Now, in education it's very different, because we are affected directly more than any other industry, because in education we've been moving away from multiple-choice tests and oral exam assessments for decades.
[00:15:58] What we use now in education is what we call performance-based assessment. So you wrote an essay, so you must know something; or you created a project or you developed something. You create the product and you show it to us as a student, and then we can judge whether you know anything
[00:16:13] or can do things. So the problem for us is that AI kind of wipes out our most essential tool. You know, it can turn out college essays pretty easily, and it's going to be better than any student. And it's not, of course, that we're affected because there's some competition somewhere out there.
[00:16:35] No, we're affected because that technology takes away some of our most essential tools. And, you know, just to be completely honest, we're a little bit in disarray in higher ed. We didn't expect that to happen. Nobody wanted it to happen. People don't know what to do. We will figure it out. You know, there are solutions to that.
[00:16:56] The solutions are that you have to actually ask students to do more complicated things and allow them to use AI, but complicated things where you can actually measure the gap between what AI can do and what the actual human input adds. So the solution is to teach students to supervise AI, to be an executive.
[00:17:20] You know, there is a difference between, like, a manager and an executive. An executive is the one who has a strategy and tells people what to do. So in a way, we're all turning into executives. But you still need to have an overall idea of what needs to be done, what is worth doing, what's the difference between the good outcome and the crappy outcome, and judge it and make a difference.
[00:17:41] So those are the more advanced skills that we need to teach students. Skills like where, or whether, you put in the Oxford comma somewhere are really becoming completely irrelevant. I would argue it was always irrelevant. Or like, can you put together a perfectly grammatical English sentence? I mean, some of us spent years trying to master that skill, and suddenly it becomes completely irrelevant.
[00:18:11] The robot can do that, right? So it's very painful for people, emotionally; there's this reaction: oh, my God, I studied that. And when I tell my colleagues we'll stop assigning college essays, they are ready to kind of crucify me for that. What do you mean? The college essay is learning. Well, no, there's actually a difference between thinking and writing.
[00:18:33] There's a big difference between thinking and writing. We kind of lost that edge, lost that difference. But anyway, I don't want to go into all the details. I just want to say that our industry is affected. We're hurting. We really need help. And the governments don't understand that, because it is very difficult to explain.
[00:18:54] Why don't you just teach AI? Well, we can teach AI. It's actually easier than teaching writing or English or history, philosophy or social work or education or something like that. That's not our problem, teaching AI; it's how do we teach all the other subjects that we have?
[00:19:14] But again, we've kind of started to see the glimpses of a solution. And I think colleges are not going to die in a few years. People are not going to college to interact with AI. They come to college to have, you know, a nice building, to meet their future partner in life. AI just makes it a little more obvious that very few people come saying,
[00:19:37] just give me information. The information is available; you don't need college for information. You need experience, life experience. Education is really relational in its essence. It's not informational, it's relational. So colleges are not dying. They're going to look a little different, not dramatically different, but we'll have different kinds of assignments.
[00:19:58] We have to revise the skills that we actually teach, and how we teach them, and that will take us a while to figure out.
Speaker A
[00:20:04] So, Sasha, we heard the legal profession's going to be weeded out a bit, and computer studies and all. What other departments do you see being directly affected by AI in higher ed?
Speaker D
[00:20:18] Well, that's not what Eric said. He said that the junior lawyers are going to be wiped out.
Speaker A
[00:20:24] Right, Right.
Speaker D
[00:20:25] But the advanced expertise in law is absolutely irreplaceable. There's no robot even on the horizon, but.
Speaker A
[00:20:33] There's going to be fewer law school students.
Speaker D
[00:20:37] Maybe temporarily.
Speaker A
[00:20:38] Okay.
Speaker D
[00:20:39] But once we figure out how to teach the more advanced skills, once we teach them how to beat the robot, then they'll come back to us.
Speaker C
[00:20:48] I gotta jump in here.
Speaker A
[00:20:49] Go ahead.
Speaker C
[00:20:51] So I didn't talk about. Well, I only talked briefly about the large law firms.
Speaker A
[00:20:55] But use that mic a little more. No, bring the mic a little closer to you.
Speaker C
[00:21:02] So I think that there are a lot of people out there making judgments about AI that have used the free version or the $20 version of ChatGPT or Claude or some of the other foundational models. There are two things I want to say about that. One is there's a big difference between the free version and the $200-a-month version.
[00:21:33] And the $200-a-month version is very impressive. But this is not going to stop. I think if we try to project the future on the basis of, well, it's just going to be more of what we have today, we are selling the future short. These models, these capabilities, are going to get stronger and better.
[00:21:59] And I don't think just because you're an executive means that you're immune to being replaced. I saw a video of Sam Altman saying he's out to have a foundational model take over his job. So I think we have to project into the future as to what the capabilities might be, and assess what the impact on different professions will be then.
Speaker A
[00:22:35] Okay, Steve?
Speaker B
[00:22:36] Yeah, I think, you know, if we're looking at the horizon, the short-term to ten-year horizon, we lose a number of entry-level positions. What I talk about with leaders is: how are you going to do workforce development if you still need wisdom and judgment at the top and your feeder is being decimated? How many of those people are actually going to make it through to be the wisdom and judgment ten years from now? So how are you going to restructure your classifications, your organizational structure, so that
[00:23:15] you can develop people into the leaders we need in the future? Otherwise, you know, they're hollowing out the middle. That's one of the terms they're using: the middle is being hollowed out, because that's where AI right now can do the most replacement.
Speaker A
[00:23:29] Okay, great. Sasha, you want to start?
Speaker D
[00:23:32] Yeah, I just want to comment on Eric's point here. So first of all, don't listen to Sam Altman. He's trying to raise money. Of course he's going to sell you the most hyped-up version of the future. So there's a conflict of interest there. Don't listen to them. In fact, if you're watching YouTube or something, the people who come up on top are either saying AI is total crap, it's not going to make any difference, or they're saying, oh, it's going to change the world.
[00:24:01] Well, it's neither. I'm sorry, but, you know, life is not a YouTube algorithm. I agree it's a revolutionary technology, but I can clearly see that they've hit a certain ceiling with the large language models. They're improving. Like, just a couple of days ago, Gemini 3 came out. It's awesome, it's wonderful.
[00:24:22] But it's not qualitatively different from ChatGPT 4. It's not that different. I mean, it's a little better. So when they promise you a leap, what they forget to tell you is that the plan is: if you just pour more compute into it and have bigger context windows, somehow the models will become amazingly smart.
[00:24:42] And I did try a $200 model as well. I don't think it's that great. I'm just saying, you know, when you try to write something like a book with it, it will not write the book for you. You have to have the architecture of that book, and you have to pour in ideas before it can.
[00:25:01] But even if the technology doesn't change at all from now on, we still have 20 years of innovation ahead, because figuring out how to use it and apply it is really hard.
Speaker A
[00:25:14] I can tell I don't have to ask many questions.
Speaker B
[00:25:17] I was just saying, remember, we're in 1994 with the Internet right now. So the models are great, they're pretty darn good, and they're just going to get better. And you know, we haven't talked about compute power, which is driving all of this. The more powerful our computers get, the better AI runs.
[00:25:40] The better AI runs, the more powerful our computers get. So it's going at an exponential rate. So we're in 1994; you know, our current ChatGPT is like that PDF document, right? It's pretty powerful, but it's nothing like it's going to be five years from now. So what I tell leaders is: you don't make plans for today, you make plans for tomorrow.
[00:26:07] So what if ChatGPT was 100% right? What if in five years it could do all these things that are hyped up right now? How are you going to start changing your organization today to be ready for that? Change your classifications, upskill your workforce, change your organizational design, because you're not going to need the same top-down structure five years from now, ten years from now, that you have today.
[00:26:37] But if you don't start doing it now, it takes a long time to change an organization. That goes back to what Eric was saying: that's going to be kind of the governor slowing down AI. It takes five, ten years to meaningfully change a big organization.
Speaker A
[00:26:54] So let's talk about one aspect of it that everybody talks about: that we're going to lose creative thinking. We will turn to the computer and it will do our creative thinking for us.
Speaker B
[00:27:06] I don't think it's gonna. AI is just math, right? It doesn't have creativity built into it. It needs a human for that. What it can do is be a thought partner with you: challenge you, help you think about things, spur that conversation. And that's what we're seeing.
[00:27:31] The real value when we upskill employees in using these tools is: don't just use it like a Google search, use it like a conversation, right? Go back and forth. Me, I like Claude; I use Claude a lot. I'll write something, and then I'll say, Claude, ask me three more questions about what I just wrote.
[00:27:54] Tell me three things that I got wrong. Ask me four more clarifying questions to make me think bigger about this topic. So you go back and forth with it, and it remembers that context. That's the power of using AI as a partner, not a replacement. That's the term we're hearing a lot: augment. AI is augmenting our employees to be more productive.
[00:28:19] I can see Sasha wants to say something.
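[Note: for readers who want to try the back-and-forth workflow Steve describes, here is a minimal sketch in Python against Anthropic's Messages API. The model id, the draft text, and the exact follow-up prompts are illustrative assumptions, not anything specified by the panel.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

draft = "Paste the memo or article you want pressure-tested here."  # hypothetical draft

follow_ups = [
    "Ask me three more questions about what I just wrote.",
    "Tell me three things that I got wrong.",
    "Ask me four clarifying questions to make me think bigger about this topic.",
]

messages = []  # the running conversation, kept so the model retains context
for i, prompt in enumerate(follow_ups):
    # Fold the draft into the first turn so user/assistant roles alternate,
    # as the Messages API requires.
    content = f"Here is my draft:\n\n{draft}\n\n{prompt}" if i == 0 else prompt
    messages.append({"role": "user", "content": content})
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; substitute a current one
        max_tokens=1024,
        messages=messages,
    )
    answer = reply.content[0].text
    print(f"--- {prompt}\n{answer}\n")
    # Feed the answer back in so the next question builds on the exchange.
    messages.append({"role": "assistant", "content": answer})

The point is the loop: each answer goes back into the message list, which is what turns a one-shot search into the conversation Steve is describing.]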
Speaker D
[00:28:21] No.
Speaker A
[00:28:22] Okay, good, good. So that leads us into the conversation, Eric. A lot of the people here are retired; they're out of the workforce. What is AI going to mean for those of us who are not in the workforce? Pick up that one.
Speaker C
[00:28:42] I did not prepare for that question. Or, Sasha?
Speaker A
[00:28:45] Please don't, don't.
Speaker C
[00:28:46] I think it's going to be very entertaining. It's going to be a bumpy ride for those in the workforce over the next 20 years. So, I was an avid sci-fi reader when I was in my teens and twenties. I never thought I'd live to see the day where there was a robot that would pass a Turing test.
[00:29:17] I feel privileged to be able to be a spectator in what's going on.
[00:29:37] As I said, I think it's going to be enormously interesting. It's going to have profound impacts on society, both what we see directly in terms of workforce changes, and, if a post-labor economy is what comes into being, it's going to change many, many other things about society, about our culture. And it is, I think, enjoyable to sit out there and try to think through what those will be.
Speaker A
[00:30:17] Let's turn to Sasha.
Speaker C
[00:30:18] Yeah.
Speaker A
[00:30:19] What's it going to look like, Sasha, for many of us in this audience? I think your audio's on.
Speaker D
[00:30:25] The big question is: what's the overall impact on the economy? We haven't seen the productivity growth from AI yet, but it will definitely happen, and the big question is how much. If we increase our productivity economy-wide, even by 1%, that means your Social Security and your pension are probably going to be okay, and the dollar is still going to be more or less a dollar.
[00:30:49] If we fail to do that, then of course you know where the long-term trend is going: the population is aging, there are fewer people supporting the economy, so we have trouble down the road. So that might be kind of a saving grace. And actually, productivity hasn't been growing very fast in the last 30 years.
[00:31:10] I don't know if you know that or not. We're not producing a lot more with the same people. So AI has a chance maybe to actually give a little boost to the economy. The post-labor economy is an interesting concept; I've studied that area for a long time.
[00:31:28] We don't know yet; it may or may not happen. But it will really have a huge effect on us if, say, 50% of people cannot find work because everything is done by robots. That would be a dramatic cataclysm for us. We don't know how to live without working.
[00:31:46] We don't respect people who don't work; we suppress them, we put them in miserable conditions. Anyway, it's going to be a huge cultural change, which I'm not sure we can actually go through. The best hope is that it will happen slowly and gradually, and then eventually, you know, somehow people will have more leisure time and less work.
[00:32:08] But none of us are equipped for that yet. So if a sudden post-labor economy happens tomorrow, we're in for a social catastrophe, more or less.
Speaker A
[00:32:17] Steve?
Speaker B
[00:32:18] Yeah. I think short term, and you and I had a conversation about this: my 88-year-old father moved in with us, and I think about the impacts of AI on him. Technology always has a dual purpose, right? You have the good, you have the bad. So I worry more about that demographic, with the sophistication AI is bringing to scams.
[00:32:44] You know, phishing emails are no longer broken English; it's no longer a prince from Nigeria. It's your nephew, with an audio or video clip that sounds and looks just like him, saying, hey, I'm trapped, I'm broken down, send me some money. And the sophistication of the "this is your bank trying to get information from you" messages.
[00:33:08] I mean, it's just gone up a thousand percent. So that is my biggest concern with that demographic. So we really have to think, within organizations, about how we do checks and balances. No longer can you just rely on somebody calling you on the phone and it's like, hey Terry, how you doing?
[00:33:29] This is Steve. Is it really Steve? How are you going to know that? I could Zoom you. How would you really know? That is going to be a big impact. It's that negative side, that dual purpose. Every technology since the dawn of time has had a dual purpose, and AI certainly has it, because all the bad guys are pouring billions into using AI to do their line of work.
Speaker A
[00:33:57] Wow, Steve, I mean, go ahead, Eric, with your mic.
Speaker C
[00:34:00] Sorry to go back to that earlier question: how does it affect you as a retiree? I'd focus on what Steve said and just mention that the scams are going to get a lot better, and retirees are always a target for scams. So in terms of immediate impact, that might change things. I would be very, very cautious about your interactions through the Internet, because I see, you know, some pretty good scams already, many of them coming from other law firms or spoofing other law firms.
[00:34:47] And so, yes, be cautious.
Speaker A
[00:34:51] Sasha, you agree?
Speaker D
[00:34:53] Yeah, totally agree. I think that's all fair. I'll just go to the bright side: AI is also an accommodation. You know, some 20% of the population are dyslexic. If you use AI, it kind of makes that problem less relevant. It doesn't solve it; they're still dyslexic, as they've always been. But your eyeglasses will fix your poor eyesight, and functionally you're the same.
[00:35:18] Right, so long as you wear them. It's the same with dyslexia. There's also very interesting experimentation with people with cognitive decline, if you have any relatives with Alzheimer's. People lose the ability to write or sometimes to speak clearly, and there are very interesting, promising applications there.
[00:35:37] How you can turn written speech into oral and then back again. So AI as accommodation is, I think, underappreciated, although the disability rights community is working actively on that as well.
Speaker A
[00:35:51] So in 1994, Steve, you and I were trying to figure out how to put computers into schools and things like that, and government didn't know how to react whatsoever. And here we are 30 years later with this new technology, and government right now is sort of totally hands-off. Government seems to be always behind the eight ball relative to regulation.
[00:36:17] How much regulation is needed? How much does government need to get involved, or does it need a hands-off approach to this?
Speaker B
[00:36:25] Well, so my friend Eric and I had this conversation over coffee a couple of weeks ago. Yeah. If you read a lot of the literature about AI, some of the advocacy groups for disadvantaged communities are really concerned about AI, because they think AI is going to lead to redlining and enhance the disadvantages those populations face, because it's going to be accessible to the more affluent, the more educated, and it's going to amplify some of the social issues that we're already dealing with.
[00:37:10] So from that perspective, they want to see some intervention, some regulation. We're seeing a lot in California. I think there were four laws or legislative bills that were proposed; they're still in the legislature, and they're being really pushed by the labor unions, because they don't want AI to make decisions on people's employment and livelihood by itself.
[00:37:35] There was a bill that Governor Newsom vetoed a month and a half ago that would have required employers to give 30 days' notice to an employee if AI made a decision that impacted them. A lot of this is really new, and you know, it's the unintended consequences, and I think that's why the governor vetoed that one bill.
[00:37:56] But you know, it's that dual purpose. What was the one we talked about? ChatGPT being sued, not by a lot of people, but by families of kids that did self-harm, because they had this interactive relationship and the AI led to them harming themselves.
[00:38:23] And there was legislation passed on that. So it's that dual purpose again.
Speaker D
[00:38:29] There can be good regulation.
Speaker A
[00:38:30] We have the federal government, Eric, sort of saying total hands-off nationwide on AI. Where are we? You're in the legal business. What's needed in terms of being proactive in regulating AI? With the mic, Eric, with the mic. Sorry.
Speaker C
[00:38:54] So I have a slightly different take on all of this. I heard somebody recently compare what's happening here with AI to the industrial revolution on steroids. The interesting thing about the end of the industrial revolution is that there was a lot of government regulation. There were railroad trusts, there were oil trusts, and the government came in with the Sherman Act and the Clayton Antitrust Act.
[00:39:37] And out of a concern about the acquisition of economic and political power, they broke the trusts up. My biggest concern about AI is that we wind up in a situation where there is one AI rather than a diversity of AIs. I think that's potentially the very worst outcome that could occur here. And we already have the tools to deal with that.
[00:40:09] We don't need new regulation. In fact, my concern here is that if we have 50 different sets of regulation from 50 different groups of politicians, all with slightly different concerns from their constituents, what we will do is raise the cost of AI to a degree where even these companies that are getting enormous funding can't really stay competitive.
[00:40:41] There will be huge barriers to entry, and we will wind up with a handful of AIs. If there are issues here, the solution is going to be a diversity of AIs, not overregulation. It's a knee-jerk reaction of politicians, when their constituents are concerned about something, to adopt a regulation. But in many, many cases, and this goes far beyond AI, there are in fact 200, 250 years of law in the United States, and there are well-trod paths for how you handle those problems.
[00:41:27] If ChatGPT is giving bad advice to teenagers and there's harm resulting from that, then there are ways to sue OpenAI or Anthropic. There are regulations in place on discrimination. A multitude of new regulation is going to lead to worse outcomes rather than better.
Speaker D
[00:42:00] I just want to support what Eric just said. I agree with that. An additional point: you know, I got a call from the state senate, or one of the staffers. They said, well, we're considering this bill where companies will have to provide us the protocols of their safety testing every year.
[00:42:18] And I asked them: who is going to read them? You don't have anyone with that level of expertise in the entire state government of California. So I think the attempt to regulate something you really don't understand is a terrible idea. They tried to do that with the Internet.
[00:42:35] If you're old enough to remember: look at the Congress of the United States back then. Most of these people didn't have a Facebook account or anything, and they were trying to regulate something they had no idea about. With AI it's even more difficult, because even people who work in that very industry don't know what's going to happen.
[00:42:52] Say, ten years ago we had a lot of concern about, you know, the rise of the machines. I don't know if you've noticed, but those concerns kind of died down in the last couple of years, because there is no empirical evidence of anything like that happening.
[00:43:07] So we may have some disaster coming from AI, but we don't know where it's coming from. Like with the deepfakes: we kind of discussed that. People were very concerned about deepfakes, and I have yet to see one really good deepfake. Show it to me if you've seen one. Usually you can kind of tell, you know, six fingers or something, or unnatural movement.
[00:43:29] So if you're not a child, you will tell the difference. So, I mean, is it a big risk or not? The problem is not just that we can't predict the future; we don't even have a methodology of risk assessment for these technologies yet. For example, if quantum computing comes on board,
[00:43:49] none of us have any idea what it's going to do to AI. So there's a lot of speculation. And again, if you want to get a lot of points on YouTube, you will either exaggerate the risks or completely dismiss them. I'm sorry to say it, but the truth is we don't know.
[00:44:07] And the last small point, about the kids who commit suicide because they talk to AI. I'm sorry, it's not the popular opinion, but suicide rates among teenagers have always been high. Without AI, they would have been obsessed with something else; they would have had unhealthy relationships with peers, with imaginary friends, with stars or celebrities, and all of that.
[00:44:29] So just because a kid had a conversation with AI and committed suicide does not make it the cause of that event. There was probably mental illness, probably instability, before that; there are many factors that could cause it. And what people forget to mention is that one of the most common uses of AI is for psychotherapy.
[00:44:49] People who couldn't afford it, or were too ashamed to go to a therapist, now get some sort of advice. We don't know whether it's really great advice, but we know that people use it and at least find some value in it. And there are some studies that show it's actually not that bad.
[00:45:06] I mean, it's never as good as a human therapist, but it's better than a stupid human therapist.
Speaker A
[00:45:15] Nothing like being frank about it, Sasha. So Steve, you mentioned one of your AI favorites. What I'm asking all three of you is: what are the things that you do with AI, and what sites do you use? Since we're all neophytes out here and you think about this all the time, what sites do you use, Steve?
Speaker B
[00:45:37] My favorite site, the only one I pay for, is Claude.
Speaker A
[00:45:42] And how's that spelled?
Speaker B
[00:45:44] C-L-A-U-D-E. Claude, from Anthropic. And it's a good.
Speaker A
[00:45:50] How much do you pay for that?
Speaker B
[00:45:52] I don't know, 20 bucks.
Speaker A
[00:45:54] 20 bucks a month?
Speaker C
[00:45:55] Yeah.
Speaker A
[00:45:55] Okay.
Speaker B
[00:45:56] And I use it a lot for research, for figuring out, you know, like when I've got to do a course: what are the latest and greatest thinking, research, papers, and references. I use it a lot for that. I'm a horrible speller. I'm a little dyslexic.
[00:46:16] So the editing helps me a lot. Proofreading helps me a lot. I use it as a thought partner: I'm thinking about this, what did I miss? You know, what are the three weak points in this article I just wrote? Stuff like that.
Speaker A
[00:46:34] Great. Eric, again with the microphone there.
Speaker C
[00:46:37] Sorry. So I spend over $600 a month on AI. I have a $200 subscription to the most advanced version of ChatGPT. I pay a hundred dollars a month to Anthropic, I pay $20 a month for Grok, and I pay $300 to a legal AI software company. I overpay. But what do I use it for?
[00:47:11] I could not agree more with Steve that the interaction with AI is often a conversation; you get more out of it. The AIs have started to follow up on the conversation. I ask about something, and then it gives me back three different flags, places I might want to go in terms of another question.
[00:47:40] And I remember when ChatGPT 5 first came out and I got an advanced version of it, I had like a 45-minute conversation with it about starting a new business, and it was extraordinarily good. But I don't use it just for legal stuff. I use it for cooking recipes. I use it for many, many different things.
Speaker A
[00:48:09] So fabulous, fabulous stuff.
Speaker C
[00:48:12] It's great.
Speaker A
[00:48:13] Eric. Sasha?
Speaker D
[00:48:15] Well, yeah. So, what I pay for: I pay for Claude. Claude is essential for writing. I mean, it's a way better writer than any of the other AIs. But I also have ChatGPT, actually with two different paid accounts, because my school pays for one, and it can do more things than Claude.
[00:48:36] So if you want to build a custom bot, Claude is not as convenient for that. But the differences are not huge, I would say. If you really want to have a serious conversation, I strongly recommend Claude. It just gives a more human-like response; the rest of them are pretty alien, kind of non-human. But as Eric said, I use it for short answers, for everything, every step of the day, every day.
[00:49:07] I use it many times a day for things like writing a long email to somebody, or turning, say, a recording of a conversation like this one into maybe a blog post, that kind of stuff. There are probably a few hundred uses that I could list.
Speaker A
[00:49:27] Great. Well, we have a few minutes for some questions, so let's make them specific. Yes, our trustee here from Sierra College.
Speaker B
[00:49:37] Hi. I appreciate all of your comments. One topic you haven't touched on, that I've been asked about multiple times, is the impact on the environment.
Speaker A
[00:49:47] Impact on the environment? Yes. Electric rates are going up, much of it due to the demand AI is putting on PG&E. So any thoughts on environmental issues relative to AI?
Speaker B
[00:50:02] Yeah, it's big right now, and I think it's big because we're still using yesterday's technology, chips, and data center models to drive this new need for AI. So you're seeing these huge data centers, and billions and billions invested by these big companies to build them, even to the point where they're starting to reopen Three Mile Island.
[00:50:25] Right. And they're talking about other portable, smaller-scale nuclear reactors. It's huge. And I think back to the professor's comment on quantum. Until we get to quantum and have a more efficient way to run this, that's going to be a big impact.
Speaker A
[00:50:43] Anything else from any of you?
Speaker C
[00:50:46] Well, I would add that a lot of this money for developing new power sources and new data centers is being spent directly by the foundation model companies, so it's not all coming out of the taxpayer's pocketbook. And of course, being a step ahead of everybody else, Elon Musk has the answer and the capability, which is to put all of this out in outer space, where there is free energy and where it doesn't have a significant environmental impact.
Speaker A
[00:51:23] Wow.
Speaker C
[00:51:24] He's talked about doing it.
Speaker A
[00:51:26] Sasha.
Speaker D
[00:51:26] Well, for fairness' sake, you need to compare it to other things. If you skipped a shower this morning, you saved more energy and water than maybe your entire week of AI use. Per minute of use, it's a lot less; it's like 10% of what you spend by watching color TV.
[00:51:45] So if you want to cut somewhere, why start with this one? Maybe start with, you know, if you drove here: you used a lot of energy and water and probably polluted a lot more than whatever AI you use. So if you calculate it per minute, per user, it's really not that high.
[00:52:04] And actually the numbers are going down constantly, because, you know, two years ago it was, I don't know, 200 milliliters of water per query, and now it's 30 milliliters. So all of these numbers are going down, and when you weigh it, you have to do the benefit and cost analysis.
[00:52:22] However you do it, just don't forget the benefits. Okay.
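[Note: to put those per-query figures in context, a quick comparison under assumed round numbers: the 30 mL-per-query estimate Sasha cites and a typical 60 L shower. Both are illustrative, not measurements.

\[ \frac{60~\text{L}}{30~\text{mL/query}} = \frac{60{,}000~\text{mL}}{30~\text{mL/query}} = 2{,}000~\text{queries} \]

On those assumptions, one skipped shower covers roughly two thousand queries' worth of water, which is the scale of the trade-off being described.]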
Speaker A
[00:52:25] Other questions? Yes, sir.
Speaker C
[00:52:28] What impact do you think AI will have on the entire medical profession and medical care?
Speaker A
[00:52:33] Impact on the medical profession? Anyone?
Speaker D
[00:52:37] It's already having an impact. I heard from a Kaiser executive: you may not know this, but we already save hundreds of lives every year. They actually were early to the game, before this whole hoopla. In certain areas, like X-ray diagnostics, the systems are way better than humans.
[00:52:56] So they're very active, and the effects have been very positive so far.
Speaker A
[00:53:00] Jim, ask your question about bias, if you don't mind. I think that's an interesting one.
Speaker C
[00:53:04] Yeah. And the experience I had was in testing it. I asked ChatGPT about myself and what I did during a certain decade of my life, and by the time it was done, I thought I should have gotten a Nobel Prize. I've noticed that in almost all the things I review about companies, schools, whatever, there's a positive bias, probably to motivate me to use it more.
Speaker A
[00:53:33] Is that a fair statement that there's a positive bias on AI?
Speaker D
[00:53:39] It's sycophantic, yeah. If it says you're great, don't ever believe it. But I have to say that Claude actually has less of a problem with this than ChatGPT. Claude is a lot more critical. If you ask it for a fair opinion, it will give it. Yeah.
Speaker B
[00:53:52] And data is the foundation of AI, right? AI only acts on the data that's available. So what's in the data? I mean, mostly positive-type information on these companies.
Speaker A
[00:54:04] Sir, back in the green, back there.
Speaker C
[00:54:09] One of my greatest concerns is that AI is going to be taken over by one, two, or three mega-corporations. What safeguards do you think could be taken to protect people from that happening? Because if it follows the Internet model, that's what it's bound for.
Speaker A
[00:54:24] So it seems like there's a race. There's a race in the stock market, there's a race out there in terms of being the only one. I think you sort of talked about this a bit, and that's one of your great concerns, all of you: that we need multiple AI platforms other than, you know, Apple's or Microsoft's, etc.
[00:54:45] Is that a fair statement?
Speaker B
[00:54:46] Yeah. And I, you know, support open-source AI as well, which kind of democratizes the technology so more players can be in it. But again, there's a pro and a con to that. The only real company that's really pushing that is DeepSeek, from China, and there are concerns about it being a Chinese product.
[00:55:11] But they're the only ones really pushing an open-source platform. Open source means it's free for other companies to go ahead and take that technology, use it, and modify it themselves. So I think it's back to Eric's point: we need a very robust, competitive marketplace with multiple players in it, and we don't have that.
Speaker A
[00:55:29] So let me ask a final question, sorry, Eric, because we are out of time. Is the United States leading this? And where is our chief competitor, is it China? What does that competition look like? Sasha, you're probably out there more than most.
Speaker D
[00:55:45] The United States is leading in AI and should, should keep leading it, and will keep.
Speaker A
[00:55:50] Leading it in your eyes.
Speaker D
[00:55:51] Should keep leading. I totally agree with Eric: overregulation may kill it. If we stop doing it. If California stops, Texas won't. If the whole country stops or slows down, China won't. And China is still catching up; they still have a lot of things to do before they catch up with us.
Speaker A
[00:56:12] Any final thoughts before we unleash the audience to come up to you and ask you their specific question? Anything else from all of you?
Speaker B
[00:56:22] It's an exciting time. I mean, for people involved in this, it's an exciting time. But again, I always try to put it in the context I gave at the beginning, right? It's not how great it is today, it's how good it's going to be tomorrow.
Speaker A
[00:56:36] Well, this hour has been phenomenal and I'm sure the audience would like to tell you that.
Speaker A
[00:00:00] County Superintendent of Schools and your moderator for today's forum. And you must look at the group here and say, where are the 20 somethings here? They should be running this. No, we went to, we went for leadership. We went for knowledge base and experience. So I want to introduce my very good friend who I've known for a long time, Steve Monahan, who's the retired Chief Information Officer for the county of Nevada, who's been awarded CIO of the year.
[00:00:31] Oh, he's got, he's, that's why I put Tech Guru on the, on the announcement. And then next to him is Eric Little. Eric's an attorney who's worked with lots of startup AI companies and is currently actually working with a company locally called Ladras that's here in Nevada County. AI firm. And then finally is Sasha Sidorkin.
[00:00:56] And Sasha is a professor of education for SAC State and known as the Tech Guru. Actually that's his title for Sacramento State and AI research and all. So welcome to you all. And let's begin with you, Mr. Monahan. Let's get right into this. So what is AI, the workplace going to look like here in the, in the next decade?
[00:01:21] What's going to happen in the American workplace? You keep hearing these things like, oh, it's going to be tremendous job losses. Then I heard last night, oh, there's going to be job increases. What's the workplace going to look like, Steve?
Speaker B
[00:01:33] Great question, Terry. And we don't really know yet, but we know it's going to be profoundly different than it is today. Right. So I teach IT leadership classes and over the last six month, AI classes to leaders of organizations about what they need to do now to prepare for what's coming and what's already here.
[00:01:56] So I like to put context to it. People are talking about AI. It's all new. And I'm sure Sasha will talk about this as well. AI has been around for 50 years. It's what's been out for the last three years, this generative AI that came out with ChatGPT and those technologies that has made it available to the masses, consumerized AI so that it's available to your average organization in all sorts of products.
[00:02:26] That's what's really changed. And it got really hyped up and now we're starting to see some disillusionment with it. You know, people are like, oh, it's not that good. I put it into Google, I do my search, it's wrong half the time. That's kind of that consumer version of AI and It's really just the tip of the iceberg of all the AI that's in place and it's coming down the pike right now.
[00:02:50] So I have this slide I put up in the beginning of my classes and it's the Internet in 1994. That's where we're at with AI right now. If you think about the Internet in 1994, web pages were like PDF documents, right? There wasn't a lot of functionality there. There was no Amazon, there was no Netflix, there was no streaming Spotify, there was no Zoom, there was no E commerce, there was no banking online, there was no telemedicine, there was no online education.
[00:03:25] And if you look at the last 30 years of what the Internet has done to every aspect of our lives and our society and our commerce and our education and healthcare, that's the impact that AI is going to have on our organizations. But it's not going to take 30 years. It's going to be in five years, 10 years.
[00:03:48] And it's happening at an accelerated rate compared to what the Internet did. Internet took some time, but AI is going to happen quicker. So I don't know if that answered your question.
Speaker A
[00:03:58] Well, do you see it as a, in the average workplace, as a threat these days to the average workplace or you as an employer excited about this opportunity?
Speaker B
[00:04:11] Well, I'm excited about the opportunity and what it can do in augmenting people and enhancing employees performance and capability. So that's what generative AI is doing. It's helping people who like, I'm not artistic, but I can create art now, right? I can create a song. It may not be a good song, but I could create it.
[00:04:33] I couldn't do it before at all. So it's enabling people to do things they otherwise couldn't do and it's doing general productivity enhancements. You know, manage your calendar better, manage meetings better, manage, you know, write memos better, you know, correct all my spelling errors, my grammar errors for me, those kind of things.
[00:04:55] AI as a thought partner, critical thinking, challenging your assumptions. So it's helping in those areas and it's just starting now to get into organizations to help with products and services, business processes, how we do workflows. Every major software provider is building AI into their applications. It's in our phones, it's in your Windows computers that you have at home.
[00:05:28] It's going to be in your tv, it's going to be in every piece of software.
Speaker A
[00:05:33] So back to the workplace. We had the conversation when we were chatting about coming to the forum and you noted you have one son who's going to be an electrician and another who's an undergraduate in computer studies. And you said, gee, I'm really excited about my son in going to be an electrician, but finish the sentence.
Speaker B
[00:05:54] Yeah, but my son who's, who's going into information science is going to have a little more challenging job market. So what we're seeing in big organizations, if you look at Google, if you look at Meta, Microsoft, they're all doing huge layoffs in that programmer development, that expert kind of layer. You know, 2,000 here, 3,000 there.
[00:06:25] We don't have a lot of organizations, especially in Nevada county that have first employees that only really do one function, program development. Right. Or do we have hundreds of them in that same that can be replaced by AI because AI can take over expert via an expert. So it's very, very effective for AI to take over and let a thousand developers do the work of what 1500 used to do through its productivity enhancements.
[00:06:58] We don't have a lot of opportunities like that. I don't know of any because most employees are fragmented. You know, point one of their time is doing that is zero point five of their time is doing that is point two of their time is doing that. So you can't, I don't see that mass transformation in the average business, especially small business.
Speaker A
[00:07:19] Great. Yeah, thanks. Eric, your thoughts? You deal with companies starting up and pick up the mic there if you don't mind and give us some thoughts about this changing workplace.
Speaker C
[00:07:30] Well, so my profession is being changed dramatically by AI and I think it will be, it will look very different in 15 to 20 years.
Speaker A
[00:07:44] The legal profession.
Speaker C
[00:07:45] The legal profession. There's a study by a Stanford economist that came out a couple of months ago that said an entry level jobs in the professions, the fields most affected by AI are down 13%. There is a writer about AI who's begun a conversation about a post labor economy. I think that what I read suggests that that won't happen quite as quickly as the technologists suggest.
[00:08:28] There seems to be a split of opinion. The technologists say this is going to happen in five years and the economists say no. There are various institutional barriers, human barriers that are going to slow the rate of adoption. But I completely agree with Steve. It's going to be faster than the Internet and way faster than the industrial revolution.
[00:08:57] There's, I think some people that feel that AI is in a bubble and we can unpack what might be meant by that. But one of the things that I believe is True, because I see it greatly in my own work that AI is becoming much more useful over the past couple of years and they're really.
[00:09:23] I don't believe that we've exhausted the sorts of new technologies that are going to related to AI that are going to have an impact. For example, on the horizon is.
Speaker D
[00:09:38] The.
Speaker C
[00:09:39] Not just AI, but robotics. And not to raise a controversial name, but Elon Musk is going to make a trillion dollars if he sells enough robots over the next 10 years. And he's pretty good at that sort of thing. And when you marry a robot with AI, you're going to get another technological change which is going to have dramatic effect.
Speaker A
[00:10:11] So let's just talk about your profession for a sure bit because we're not all lawyers. So tell us, in your field, what are you going to see change just relative to the law profession?
Speaker C
[00:10:23] Well, so let me talk first about myself and then about what I think I'm seeing in the larger firms. So I have a background in a couple of areas. I have done a lot of trouble transactional work for technology companies. I've done some financing and I actually have a background over 30 years ago in building expert systems for lawyers.
[00:10:57] What I found about AI in my own practice is it, it expands the areas where I feel comfortable practicing maybe more than just at the margin. So that while I had a fairly narrow lane that I normally like to keep in, now I can move more outside of that lane. And that in general makes me do two things happen.
[00:11:28] One is I can spend less time on a matter which means that my, my cost to a client is lower so I become somewhat more affordable. And secondly, I guess the practice of law is becoming more competitive. There are now a huge number of well funded legal software companies that will automate a variety of different aspects of your practice.
[00:12:03] And they are selling to everybody from small firms to the very large firms. A lot of the large firms are, I believe have millions of documents in their databases and they are taking AI training, AI on those documents so that their need for young lawyers who work research is declining. They will hire less.
[00:12:36] I would not want to be a law school graduate today unless I had serious expertise in some additional domain that would support perhaps a more specialized practice grant.
Speaker A
[00:12:50] Steve?
Speaker B
[00:12:51] Yeah, that brings up Eric had mentioned the Stanford study and that was a Stanford adp, the payroll company study. They came out last month and a half ago and what it showed was a 13% decline in new hires in the demographic of 22 year olds to 28 year olds. But it wasn't an overall decline in hiring.
[00:13:12] It shifted to an older demographic, the 29 year old to the 40 year old. So companies are not hiring the younger people. That goes back to my son. I'm telling them differentiate. You know, he's picking up a minor in project management. Because you can't just have the tech skills. Right. To try to what's that edge going to be?
[00:13:32] So what we're seeing is AI can replace or replace the need for a lot of the younger people, but they're hiring older, more well established people because you still need judgment. AI has no judgment. AI has no empathy. So you still need the older. You need a little wisdom, Terry. I'll get back in the workplace.
[00:13:56] So we're seeing that shift, which is going to make it harder. That's why we're seeing that, you know, people, you know, the low hiring rates out of college, you know, the new graduates not being able to find jobs in the workplace. And that's a real problem.
Speaker D
[00:14:14] Yeah.
Speaker A
[00:14:14] Well, let's talk about academia. So Sasha, what's the workplace going to look like in college and high school education? All then we'll focus a little bit more about the classroom. So talk to us about the changing workplace and the changing workplace in academia.
Speaker D
[00:14:37] So the AI affects different industries in a different way. So that's I think important to understand. So when we try to actually implement it in business and other organizations, what turn out to be the case is that if you have a short, relatively short workflows where you can see the final result and can judge whether it's good or not, that's the easiest ones to implement.
[00:15:00] So when you have long and complicated workflows where everybody has to trust the previous chain kind of a person in the chain, then it's a lot more difficult to implement. So like a beginner lawyer or like a paralegal beginning coder is the vulnerable professional sessions. The. If you look at customer service, right, customer service, you have one phone call, one interaction and then you fix the problem, then you go to the next one.
[00:15:30] It's the same thing, very easy to replace. So the customer service as a industry is really dying, being wiped out, because AI can do a better job actually than most humans to do that. Now in education it's very different because we are really affected directly more than any other industries because in education we've been moving away from multiple choice, test or oral exam assessments for decades.
[00:15:58] What we use now in education is what we call performance based assessment. So you wrote an essay, you must not know something or you created a project or you developed something, so you create the product and you show it to us as a student, and then we can judge whether you know anything or.
[00:16:13] Or can do things. So the problem with us is that AI kind of wipes out our most essential tool like that. You know, it can turn out, colleges say, pretty easy, and it's going to be better than any student. So it's. And of course, because we. We're not affected because there's some competition somewhere there.
[00:16:35] No, we're affected because that technology takes away some of our most essential tools. And, you know, just to be completely honest, we're a little bit in disarray in higher ed. We didn't expect that to happen. Nobody wanted to happen. People don't know what to do. We will figure it out. You know, there are solutions to that.
[00:16:56] The solution is that you have to ask students to do more complicated things and allow them to use AI, but complicated things where you can actually measure the gap between what AI can do and what the human input adds. So the solution is to teach students to supervise AI, to be an executive.
[00:17:20] You know, there is a difference between a manager and an executive. The executive is the one who has a strategy and tells people what to do. So in a way, we're all turning into executives. But you still need to have an overall idea of what needs to be done, what is worth doing, what's the difference between a good outcome and a crappy outcome, and to judge it.
[00:17:41] Those are the more advanced skills we need to teach students. Skills like whether you put in the Oxford comma somewhere are becoming completely irrelevant. I would argue it was always irrelevant. Or, can you put together a perfectly grammatical English sentence? Some of us spent years trying to master that skill, and suddenly it becomes completely irrelevant.
[00:18:11] The robot can do that. So it's very painful for people, emotionally. There's this reaction: oh, my God, I studied that. And when I tell my colleagues we should stop assigning college essays, they are ready to crucify me for it. What do you mean, the college essay is how students learn? No, there's actually a difference between thinking and writing.
[00:18:33] There's a big difference between thinking and writing. We kind of lost that distinction. But anyway, I don't want to go into all the details. I just want to say that our industry is affected. We're hurting. We really need help. And governments don't understand that, because it is very difficult to explain.
[00:18:54] They ask, why don't you just teach AI? Well, we can teach AI. That's actually easier than teaching writing or English or history, philosophy, social work, education. That's not our problem; our problem is how we teach all the subjects that we have. But again, we've started to see
[00:19:14] glimpses of a solution. And I think in a few years, the colleges are not going to die. People don't go to college to interact with AI. They come to college to have, you know, a nice building, to meet their future partner in life. AI just makes it a little more obvious that very few people come and say,
[00:19:37] just give me information. The information is available; you don't need college for information. You need experience, life experience. Education is really relational in its essence. It's not informational, it's relational. So colleges are not dying. They're going to look a little different, not dramatically different, but we'll have different kinds of assignments.
[00:19:58] We have to revise the skills that we actually teach, and how we teach them. It will take us a while to figure that out.
Speaker A
[00:20:04] So, Sasha, we heard the legal profession's going to be weeded out a bit, and computer studies and all. What other departments are going to be directly affected by AI in higher ed?
Speaker D
[00:20:18] Well, that's not what Eric said. He said that the junior lawyers are going to be wiped out.
Speaker A
[00:20:24] Right, Right.
Speaker D
[00:20:25] But the advanced expertise in law is absolutely irreplaceable. There's no robot for that even on the horizon.
Speaker A
[00:20:33] There's going to be fewer law school students.
Speaker D
[00:20:37] Maybe temporarily.
Speaker A
[00:20:38] Okay.
Speaker D
[00:20:39] But once we figure out how to teach the more advanced skills, once we teach them how to beat the robot, then they'll come back to us.
Speaker C
[00:20:48] I gotta jump in here.
Speaker A
[00:20:49] Go ahead.
Speaker C
[00:20:51] So, I only talked briefly about the large law firms.
Speaker A
[00:20:55] But let me use that mic a little more. No, the mic's a little closer to you.
Speaker C
[00:21:02] So I think there are a lot of people out there making judgments about AI who have used the free version or the $20 version of ChatGPT or Claude or some of the other foundational models. There are two things I want to say about that. One is that there's a big difference between the free version and the $200-a-month version.
[00:21:33] And the $200-a-month version is very impressive. But this is not going to stop. If we try to project the future on the basis of "it's just going to be more of what we have today," we are selling the future short. These models, these capabilities, are going to get stronger and better.
[00:21:59] And I don't think that just because you're an executive you're immune to being replaced. I saw a video of Sam Altman saying he's out to have a foundational model take over his job. So I think we have to project into the future what the capabilities might be, and assess what the impact on different professions will be then.
Speaker A
[00:22:35] Okay, Steve?
Speaker B
[00:22:36] Yeah, I think if we're looking at the horizon, the short-term to ten-year horizon, we lose a number of entry-level positions. What I talk about with leaders is: how are you going to do workforce development if you still need wisdom and judgment at the top and your feeder is being decimated? How many of those people are actually going to make it through to be the wisdom and judgment ten years from now? So how are you going to restructure your classifications, your organizational structure, such that
[00:23:15] you can develop people into the leaders we need in the future? Otherwise the middle gets hollowed out. That's one of the terms they're using, the middle is being hollowed out, because that's where AI right now can do the most replacement.
Speaker A
[00:23:29] Okay, great. Sasha, you want to start?
Speaker D
[00:23:32] Yeah, I just want to comment on what Eric said. First of all, don't listen to Sam Altman. He's trying to raise money, so of course he's going to sell you the most hyped-up version of the future. There's a conflict of interest there. Don't listen to them. In fact, if you're watching YouTube or something, the people who come up on top are either saying AI is total crap and it's not going to make any difference, or saying it's going to change the world.
[00:24:01] Well, it's neither. I'm sorry, but life is not a YouTube algorithm. I agree it's a revolutionary technology, but I can clearly see that they've hit a certain ceiling with the large language models. They're improving. Just a couple of days ago, Gemini 3 came out. It's awesome, it's wonderful.
[00:24:22] But it's not qualitatively different from ChatGPT 4. It's not that different; it's a little better. So when they promise you a leap, what they forget to tell you is that the plan is: just pour more compute into it, have bigger context windows, and somehow the models will become amazingly smart.
[00:24:42] And I did try a $200 model as well. I don't think it's that great. I'm just saying, when you try to write a book with it, it will not write the book for you. You have to have the architecture of that book, and you have to pour in ideas before it can help.
[00:25:01] But even if the technology doesn't change at all from now on, we still have 20 years of innovation ahead, because how to use it and apply it is really hard to figure out.
Speaker A
[00:25:14] I can tell I don't have to ask many questions.
Speaker B
[00:25:17] I was just saying, remember, we're in 1994 with the Internet right now. The models are pretty darn good, and they're just going to get better. And we haven't talked about compute power, which is driving all of this. The more powerful our computers get, the better AI runs.
[00:25:40] The better AI runs, the more powerful our computers get. So it's going at an exponential rate. We're in 1994: our current ChatGPT is like that 1994 PDF-document web page, right? It's pretty powerful, but it's nothing like it's going to be five years from now. So what I tell leaders is: you don't make plans for today, you make plans for tomorrow.
[00:26:07] What if ChatGPT was 100% right? What if in five years it could do all these things that are hyped up right now? How are you going to start changing your organization today to be ready for that? Change your classifications, upskill your workforce, change your organizational design, because you're not going to need the same top-down structure five or ten years from now that you have today.
[00:26:37] But you have to start doing it, because it takes a long time to change an organization. That goes back to what Eric was saying; that is going to be kind of the governor slowing down AI. It takes five to ten years to meaningfully change a big organization.
Speaker A
[00:26:54] So let's talk about one aspect of it that everybody talks about: that we're going to lose creative thinking. We will turn to the computer and it will do our creative thinking for us. Steve?
Speaker B
[00:27:06] I don't think it's going to. AI is just math, right? It doesn't have creativity built into it. It needs a human for that. What it can do is help you be a thought leader: challenge you, help you think about things, spur that conversation. And that's what we're seeing.
[00:27:31] The real value when we upskill employees in using these tools is this: don't just use it like a Google search, use it like a conversation. Go back and forth. I like Claude, I use Claude a lot. I'll write something, and then I'll say, Claude, ask me three more questions about what I just wrote.
[00:27:54] Tell me three things that I got wrong. Ask me four more clarifying questions to make me think bigger about this topic. You go back and forth with it, and it remembers that context. That's the power of using AI as a partner, not a replacement. That's the term we're hearing a lot: augmentation. AI is augmenting our employees to be more productive.
[00:28:19] I can see Sasha wants to say something.
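For readers who want to try the back-and-forth pattern Steve describes, here is a minimal sketch using the Anthropic Python SDK. The model alias, the draft text, and the prompts are illustrative assumptions, not anything shown at the forum:

```python
# Minimal sketch of using Claude as a thought partner rather than a search box.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set;
# the model alias and prompts below are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()
draft = "AI will replace entry-level jobs first because their output is easy to judge."

# Turn 1: hand over the draft and ask the model to push back with questions.
history = [{
    "role": "user",
    "content": f"Here is my draft:\n\n{draft}\n\n"
               "Ask me three clarifying questions that would make me think bigger.",
}]
reply = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed alias; any current model works
    max_tokens=500,
    messages=history,
)
print(reply.content[0].text)

# Turn 2: append the assistant's reply and the next request to the same history,
# so the model keeps the context of the whole exchange.
history.append({"role": "assistant", "content": reply.content[0].text})
history.append({"role": "user", "content": "Now tell me three things I got wrong."})
reply = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=500,
    messages=history,
)
print(reply.content[0].text)
```

Resending the full message history on each call is what produces the "it remembers that context" behavior Steve mentions; the API itself is stateless.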
Speaker D
[00:28:21] No.
Speaker A
[00:28:22] Okay, good, good. So that leads us into the next conversation, Eric. A lot of the people here are retired; they're out of the workforce. What is AI going to mean for those of us who are not in the workforce? Pick up that one.
Speaker C
[00:28:42] I did not prepare for that question. Or... Sasha?
Speaker A
[00:28:45] Please don't, don't.
Speaker C
[00:28:46] I think it's going to be very entertaining. It's going to be a bumpy ride for those in the workforce over the next 20 years. I was an avid sci-fi reader in my teens and twenties, and I never thought I'd live to see the day where there was a robot that could pass a Turing test.
[00:29:17] I feel privileged to be able to be a spectator to what's going on.
[00:29:37] As I said, I think it's going to be enormously interesting. It's going to have profound impacts on society, both what we see directly in terms of workforce changes and beyond: a post-labor economy, if that's what comes into being, is going to change many, many other things about society and our culture. And it is, I think, enjoyable to sit out there and try to think through what those will be.
Speaker A
[00:30:17] Let's turn to Sasha.
Speaker C
[00:30:18] Yeah.
Speaker A
[00:30:19] What's it going to look like, Sasha, for many of us in this audience? I think your audio's on.
Speaker D
[00:30:25] The big question is what's the overall impact on the economy. We haven't seen the productivity growth from AI yet, but it will definitely happen, and the big question is how much. If we increase our productivity economy-wide even by 1%, that means your Social Security and your pension are probably going to be okay, and the dollar is still going to be more or less a dollar.
[00:30:49] If we fail to do that, then of course you know where the long-term trend is going: the population is aging, there are fewer people supporting the economy, so we have trouble down the road. So AI might be kind of a saving grace. Productivity actually hasn't been growing very fast in the last 30 years.
[00:31:10] I don't know if you know that or not. We're not producing a lot more with the same people. So AI has a chance to actually give a little boost to the economy. The post-labor economy is an interesting concept; I've studied it for a long time.
[00:31:28] We don't know yet. It may or may not happen. But it would really have a huge effect on us if, say, 50% of people cannot find work because everything is done by robots. That would mean a dramatic cataclysm for us. We don't know how to live without working.
[00:31:46] We don't respect people who don't work; we suppress them, we put them in miserable conditions. Anyway, it would be a huge cultural change, which I'm not sure we could actually get through. The best hope is that it will happen slowly and gradually, and then eventually people will have more leisure time and less work.
[00:32:08] But none of us are equipped for that yet. So if a sudden post-labor economy happens tomorrow, we're in for a social catastrophe, more or less.
Speaker A
[00:32:17] Steve?
Speaker B
[00:32:18] Yeah. I think short term, and you and I had a conversation about this: my 88-year-old father moved in with us, and I think about the impacts of AI on him. Technology always has a dual purpose, right? You have the good, you have the bad. So I worry more about that demographic, with the sophistication AI is bringing to scams.
[00:32:44] Phishing emails are no longer broken English; it's no longer a prince from Nigeria. It's your nephew, with an audio or video clip that sounds and looks just like him, saying, hey, I'm trapped, I'm broken down, send me some money. And the sophistication of the "this is your bank" attempts to get information from you,
[00:33:08] I mean, it's just gone up a thousand percent. So that is my biggest concern with that demographic. Within organizations, we really have to think about how we do checks and balances. No longer can you just rely on somebody calling you on the phone: hey, Terry, how are you doing?
[00:33:29] This is Steve. Is it really Steve? How are you going to know that? I could Zoom you; how would you really know? That is going to be a big impact. It's that negative side, that dual purpose. Every technology since the dawn of time has had a dual purpose, and AI certainly has it, because all the bad guys are pouring billions into using AI to do their line of work.
Speaker A
[00:33:57] Wow, Steve, I mean, go ahead, Eric, with your mic.
Speaker C
[00:34:00] Sorry, to go back to that earlier question: how does it affect you as a retiree? I'd focus on what Steve said and just mention that the scams are going to get a lot better, and retirees are always a target for scams. So in terms of immediate impact, that might change things. I would be very, very cautious about your interactions over the Internet, because I already see some pretty good scams, many of them coming from, or spoofing, other law firms.
[00:34:47] So yes, be cautious.
Speaker A
[00:34:51] Sasha, you agree?
Speaker D
[00:34:53] Yeah, totally agree. I think that's all fair. But on the bright side, AI is also an accommodation. You know, 20% of the population is dyslexic. If you use AI, it kind of makes that problem less relevant. It doesn't solve it; they're still dyslexic, as they've always been. But your eyeglasses fix your poor eyesight, and functionally you're the same
[00:35:18] when you wear them. It's the same with dyslexia. There's also very interesting experimentation with people with cognitive decline, if you have any relatives with Alzheimer's. People lose the ability to write, or sometimes to speak clearly, and there are very interesting, promising applications there:
[00:35:37] how you can turn written speech into oral and then back again. So AI as accommodation is, I think, underappreciated, although the disability rights community is working actively on that as well.
Speaker A
[00:35:51] So in 1994, Steve, you and I were trying to figure out how to put computers into schools, and government didn't know how to react whatsoever. Here we are 30 years later with this new technology, and government right now is sort of totally hands-off. Government always seems to be behind the eight ball relative to regulation.
[00:36:17] How much regulation is needed? How much does government need to get involved, or does it need a hands-off approach to this?
Speaker B
[00:36:25] Well, my friend Eric and I had this conversation over coffee a couple of weeks ago. If you read a lot of the literature about AI, some of the advocacy groups for disadvantaged communities are really concerned, because they think AI is going to lead to redlining, to enhanced disadvantages for those populations, because it's going to be accessible to the more affluent and more educated, and it's going to amplify some of the social issues we're already dealing with.
[00:37:10] So from that perspective, they want to see some intervention, some regulation. We're seeing a lot in California; I think there were four legislative bills proposed. They're still in the legislature, and they're being pushed hard by the labor unions, because they don't want AI making decisions on people's employment and livelihood by itself.
[00:37:35] There was a bill that Governor Newsom vetoed a month and a half ago that would have required employers to give 30 days' notice to an employee if AI made a decision that impacted them. A lot of this is really new, and there are unintended consequences; I think that's why the governor vetoed that one bill.
[00:37:56] But it's that dual purpose. What was the one we talked about? ChatGPT being sued, not by a lot of people, but by families of kids who did self-harm, because they had this interactive relationship and the AI led to them harming themselves.
[00:38:23] And there was legislation passed on that. So it's that dual purpose again.
Speaker D
[00:38:29] There can be good regulation.
Speaker A
[00:38:30] We have the federal government, Eric, sort of saying total hands-off nationwide on AI. Where are we? You're in the legal business. What's needed to be proactive in regulating AI? With the mic, Eric, with the mic. Sorry.
Speaker C
[00:38:54] So I have a slightly different take on all of this. I heard somebody recently compare what's happening here with AI to the industrial revolution on steroids. The interesting thing about the end of the industrial revolution is that there was a lot of government regulation. There were railroad trusts, there were oil trusts, and the government came in with the Sherman Act and the Clayton Antitrust Act.
[00:39:37] And out of a concern about the acquisition of economic and political power, they broke the trusts up. My biggest concern about AI is that we wind up in a situation where there is one AI rather than a diversity of AIs. I think that's potentially the very worst outcome that could occur here. And we already have the tools to deal with that.
[00:40:09] We don't need new regulation. In fact, my concern is that if we have 50 different sets of regulation from 50 different groups of politicians, all with slightly different concerns from their constituents, what we will do is raise the cost of AI to a degree where even these companies that are getting enormous funding can't really stay competitive.
[00:40:41] There will be huge barriers to entry, and we will wind up with a handful of AIs. If there are issues here, the solution is going to be a diversity of AI, not over-regulation. It's a knee-jerk reaction of politicians, when their constituents are concerned about something, to adopt a regulation. But in many, many cases, and this goes far beyond AI, there are in fact 250 years of regulation in the United States, and there are well-trod paths for how you handle these problems.
[00:41:27] If ChatGPT is giving bad advice to teenagers and there's harm resulting from that, then there are ways to sue OpenAI or Anthropic. There are regulations in place on discrimination. A multitude of new regulation is going to lead to worse outcomes rather than better.
Speaker D
[00:42:00] I just want to support what Eric just said. I agree with that. An additional point: I got a call from the state senate, from one of the staffers. They said, we're considering this bill where companies will have to provide us the protocols of their safety testing every year.
[00:42:18] And I asked them, who is going to read them? You don't have anyone with that level of expertise in the entire state government of California. So I think the attempt to regulate something you really don't understand is a terrible idea. They tried to do that with the Internet.
[00:42:35] If you're old enough to remember, look at the Congress of the United States: most of these people didn't have a Facebook account or anything, and they were trying to regulate something they had no idea about. With AI it's even more difficult, because even people who work in that very industry don't know what's going to happen.
[00:42:52] Say, ten years ago we had a lot of concern about the rise of the machines that were going to replace us. I don't know if you've noticed, but those concerns kind of died down in the last couple of years, because there is no empirical evidence of anything like that happening.
[00:43:07] So we may have some disaster coming from AI, but we don't know where it's coming from. Like the deepfakes: we discussed that people were very concerned about deepfakes. I have yet to see one really good deepfake. Show it to me if you've seen one. Usually you can tell: six fingers or something, or unnatural movement.
[00:43:29] If you're not a child, you will tell the difference. So is it a big risk or not? The problem is not just that we can't predict the future; we don't even have a methodology of risk assessment for these technologies yet. For example, when quantum computing comes on board,
[00:43:49] none of us has any idea what it's going to do with AI on it. So there's a lot of speculation. And again, if you want to get a lot of views on YouTube, you will either exaggerate the risks or completely dismiss them. I'm sorry to say it, but the truth is we don't know.
[00:44:07] And a last small point about the kids who commit suicide after they talk to AI. I'm sorry, it's not the popular opinion, but suicide rates among teenagers have always been high. Without AI, they would have been obsessed with something else; they would have had unhealthy relationships with peers, with imaginary friends, with stars or celebrities and all of that.
[00:44:29] So just because a kid had a conversation with AI and committed suicide does not make the AI the cause of that event. There was probably mental illness, probably instability before that; there are many factors that could contribute. And what people forget to mention is that one of the most common uses of AI is psychotherapy.
[00:44:49] People who couldn't afford it, or were too ashamed to go to a therapist, now get some sort of advice. We don't know whether it's really great advice, but we know that people use it and at least find some value in it. And there are some studies that show it's actually not that bad.
[00:45:06] It's never as good as a good human therapist, but it's better than a stupid human therapist.
Speaker A
[00:45:15] Nothing like being frank about it, Sasha. So, Steve, you mentioned one of your AI favorites. What I'm asking all three of you is: what do you do with AI, and what sites do you use? Since we're all neophytes out here and you think about this all the time, what sites do you use, Steve?
Speaker B
[00:45:37] My favorite site, the only one I pay for, is Claude.
Speaker A
[00:45:42] And how's that spelled?
Speaker B
[00:45:44] C-L-A-U-D-E. Claude, from Anthropic. And it's good.
Speaker A
[00:45:50] How much do you pay for that?
Speaker B
[00:45:52] I don't know, 20 bucks.
Speaker A
[00:45:54] 20 bucks a month?
Speaker C
[00:45:55] Yeah.
Speaker A
[00:45:55] Okay.
Speaker B
[00:45:56] And I use it a lot for research, for figuring out, you know, if I've got to do a course, what's the latest and greatest thinking, research, papers, and references. I use it a lot for that. I'm a horrible speller; I'm a little dyslexic.
[00:46:16] So editing helps me a lot. Proofreading helps me a lot. I use it as a thought partner: I'm thinking about this, what did I miss? What are the three weak points in this article I just wrote? Stuff like that.
Speaker A
[00:46:34] Great. Eric, again with the microphone there.
Speaker C
[00:46:37] Sorry. So I spend over $600 a month on AI. I have a $200 subscription to the most advanced version of ChatGPT. I pay a hundred dollars a month to Anthropic, I pay $20 a month for Grok, and I pay $300 to a legal AI software company. I overpay. But what do I use it for?
[00:47:11] I could not agree more with Steve that the interaction with AI is often a conversation; you get more out of it. The AIs have started to follow up on the conversation. I ask about something, and then it gives me back three different flags, places I might want to go in terms of another question.
[00:47:40] And I remember when ChatGPT 5 first came out and I got an advanced version of it, I had about a 45-minute conversation with it about starting a new business, and it was extraordinarily good. But I don't use it just for legal stuff. I use it for cooking recipes. I use it for many, many different things.
Speaker A
[00:48:09] So fabulous, fabulous stuff.
Speaker C
[00:48:12] It's great.
Speaker A
[00:48:13] Eric. Sasha?
Speaker D
[00:48:15] Well, yeah. What I pay for: I pay for Claude. Claude is essential for writing; it's a way better writer than any of the other AIs. But I also have ChatGPT, actually with two different paid accounts, because my school pays for one, and it can do more things than Claude.
[00:48:36] Like, if you want to build a custom bot, Claude is not as convenient for that. But the differences are not huge, I would say. If you really want to have a serious conversation, I strongly recommend Claude; its responses are just more human-like, and the rest of them are pretty alien, kind of non-human. But as Eric said, I use it for short answers, for everything, every step of the day, every day.
[00:49:07] I use it many times a day: writing a long email to somebody, turning, say, a recording of this conversation into a blog post, that kind of stuff. There are probably a few hundred uses I could list.
Speaker A
[00:49:27] Great. Well, we have a few minutes for some questions, so let's make them specific. Yes, our trustee here from Sierra College.
Speaker B
[00:49:37] Hi. I appreciate all of your comments. One topic you haven't touched on, that I've been asked about multiple times, is the impact on the environment.
Speaker A
[00:49:47] Impact on the environment? Yes. Electric rates are going up, much of it due to PG&E's demand from AI. So any thoughts on environmental issues relative to AI?
Speaker B
[00:50:02] Yeah, it's big right now, and I think it's big because we're still using yesterday's technology, chips, and data center models to drive this new need of AI. So you're seeing these huge data centers and billions and billions invested by these big companies to build them, even to the point where they're starting to reopen Three Mile Island.
[00:50:25] Right. And they're talking about other portable, smaller-scale nuclear reactors. It's huge. And I think back to the professor's comment on quantum: until we get to quantum and have a more efficient way of computing, that's going to be a big impact.
Speaker A
[00:50:43] Anything else from any of you?
Speaker C
[00:50:46] Well, I would add that a lot of this money for developing new power sources and new data centers is being spent directly by the foundation model companies, so it's not all coming out of the taxpayer's pocketbook. And of course, being a step ahead of everybody else, Elon Musk has the answer and the capability, which is to put all this out in outer space, where there is free energy and no significant environmental impact.
Speaker A
[00:51:23] Wow.
Speaker C
[00:51:24] He's talked about doing it.
Speaker A
[00:51:26] Sasha.
Speaker D
[00:51:26] Well, for fairness' sake, you need to compare it to other things. If you skipped a shower this morning, you saved more energy and water than maybe your entire week of AI use. Per minute of use, it's a lot less; it's like 10% of what you spend watching color TV.
[00:51:45] So if you want to cut somewhere, why start with this one? Maybe start with, you know, if you drove here: you used a lot of energy and water and probably polluted a lot more. If you calculate per minute, per user, it's really not that high.
[00:52:04] And the number is constantly going down, because two years ago it was, I don't know, 200 milliliters of water, and now it's 30 milliliters. All of these numbers are going down. And when you weigh it, you have to ask: what are the benefits and what are the costs?
[00:52:22] However you do it, just don't forget the benefits. Okay.
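A quick back-of-the-envelope check of the comparison Sasha sketches above. The 200 mL and 30 mL per-query figures are the ones he cites; the 60-liter shower estimate is an illustrative assumption added here, not a number from the panel:

```python
# Rough comparison of per-query AI water use vs. one skipped shower.
# 200 mL (two years ago) and 30 mL (now) are the panelist's figures;
# the ~60-liter shower is an illustrative assumption.

ML_PER_QUERY_OLD = 200    # mL of water per AI query, roughly two years ago
ML_PER_QUERY_NOW = 30     # mL of water per AI query, today
SHOWER_ML = 60 * 1000     # assumed ~60 liters for a single shower

queries_per_shower = SHOWER_ML / ML_PER_QUERY_NOW
drop = 1 - ML_PER_QUERY_NOW / ML_PER_QUERY_OLD

print(f"One skipped shower covers about {queries_per_shower:.0f} queries")  # ~2000
print(f"Per-query water use fell by about {drop:.0%}")                      # ~85%
```

Under those assumptions, one skipped shower buys roughly two thousand queries at today's rate, which is the shape of the argument he is making, whatever the exact figures turn out to be.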
Speaker A
[00:52:25] Other questions? Yes, sir.
Speaker C
[00:52:28] What impact do you think AI will have on the medical profession and medical care?
Speaker A
[00:52:33] Impact on the medical profession? Anyone?
Speaker D
[00:52:37] It's already having an impact. I heard from a Kaiser executive: you may not know this, but we already save hundreds of lives every year. They were actually early to the game, before this whole hoopla. In certain areas, like X-ray diagnostics, AI is way better than humans.
[00:52:56] So they're very active, and the effects have been very positive so far.
Speaker A
[00:53:00] Jim, ask your question about bias, if you don't mind. I think that's an interesting one.
Speaker C
[00:53:04] Yeah. The experience I had was in testing it. I asked ChatGPT about myself and what I did during a certain decade of my life, and by the time it was done, I thought I should have gotten a Nobel Prize. I've noticed that in almost all the things I review, about companies, schools, whatever, there's a positive bias, probably to motivate me to use it more.
Speaker A
[00:53:33] Is that a fair statement that there's a positive bias on AI?
Speaker D
[00:53:39] It's sycophantic, yeah. If it says you're great, don't ever believe it. But I have to say that Claude actually has less of a problem with this than ChatGPT. Claude is a lot more critical; if you ask it for a fair opinion, it will give it.
Speaker B
[00:53:52] And data is the foundation of AI, right? It only acts on the data that's available. So what's in the data? Mostly positive information about these companies.
Speaker A
[00:54:04] Sir, in the green, back there.
Speaker C
[00:54:09] One of my greatest concerns is that AI is going to be taken over by one, two, or three mega-corporations. What safeguards do you think could be taken to protect people from that happening? Because if it follows the Internet model, that's where it's headed.
Speaker A
[00:54:24] So it seems like there's a race, a race in the stock market, a race out there to be the only one. I think you talked about this a bit, and it's one of your great concerns, all of you, that we need multiple AI platforms, not just, you know, the Apples or Microsofts, etc.
[00:54:45] Is that a fair statement?
Speaker B
[00:54:46] Yeah. And I support open-source AI as well, which kind of democratizes the technology so more players can be in it. But again, there's a pro and a con to that. The only real company pushing that is DeepSeek, from China, which raises concerns, it being a Chinese product.
[00:55:11] But they're the only ones really pushing an open-source platform. Open source means other companies are free to take that technology, use it, and modify it themselves. So it goes back to Eric's point: we need a very robust, competitive marketplace with multiple players in it, and we don't have one.
Speaker A
[00:55:29] So let me ask a final question. Sorry, Eric, because we are out of time. Is the United States leading this, and where is our chief competitor? Is it China? What does that competition look like? Sasha, you're probably out there more than most of us.
Speaker D
[00:55:45] The United States is leading in AI and should keep leading it, and will keep...
Speaker A
[00:55:50] Leading it, in your eyes?
Speaker D
[00:55:51] Should keep leading. I totally agree with Eric: over-regulation may kill it. But if we stop doing it, if California stops doing it, Texas won't. If the whole country stops or slows down, China won't. And China is still catching up; they still have a lot to do before they catch up with us.
Speaker A
[00:56:12] Any final thoughts before we unleash the audience to come up to you and ask you their specific question? Anything else from all of you?
Speaker B
[00:56:22] It's an exciting time. I mean, for people involved in this, it's an exciting time. But again, I always try to put it in context, as I did at the beginning of this: it's not how great it is today, it's how good it's going to be tomorrow.
Speaker A
[00:56:36] Well, this hour has been phenomenal and I'm sure the audience would like to tell you that.