In this episode of the Alternative Litigation Strategies podcast, host Kevin Skrzysowski interviews Brian Beckcom, an AI expert, trial lawyer, and computer scientist, about the impact of artificial intelligence on the legal profession and beyond. Brian, who leads VB Attorneys and has earned hundreds of millions of dollars for his clients, shares his insights on AI’s evolution from the early 1990s to today. He explains how modern AI, particularly self-learning neural networks, differs fundamentally from earlier technology. Kevin and Brian also discuss current practical applications of AI in the practice of law, including discovery requests, depositions, and brief writing, emphasizing that while AI can process vast amounts of information quickly, human judgment remains crucial in interpreting and applying that information. This episode also covers concerns about AI’s potential to create misinformation and the challenges of regulating this rapidly evolving technology.
This transcript has been lightly edited for grammar and clarity.
Kevin Skrzysowski:
Welcome to the latest episode of the Alternative Litigation Strategies Podcast, where I interview esteemed members of the bar on some of the most cutting-edge issues affecting the practice of law and the legal marketplace. I’m your host, Kevin Skrzysowski, a director with the litigation consulting firm Certum Group, where we work with companies and their counsel to mitigate [inaudible 00:00:34] and transfer outcome risk in a wide array of litigation through our suite of litigation funding and insurance products.
And today I am pleased to be joined by someone who is perhaps one of our most dynamic guests, with a diverse background, and we’re going to discuss what I think is an issue that is top of mind for every lawyer these days, and that is artificial intelligence. I am pleased to welcome Brian Beckcom, who is not only an extremely accomplished trial lawyer but also a computer scientist and a student of philosophy. Good afternoon, Brian, and welcome to the program.
Brian Beckcom:
Great to be here, Kevin. I’m really looking forward to the show. Thanks for having me on.
Kevin Skrzysowski:
Absolutely. Brian, you have a very interesting and diverse background. Not only are you an incredibly accomplished trial lawyer leading your firm, VB Attorneys, and your personal injury and maritime practice, where you have earned hundreds of millions of dollars for your clients, including representing the sailors of the Maersk Alabama in a case that was made very famous by the Hollywood movie Captain Phillips, which starred Tom Hanks.
You’ve also authored multiple books, and you host the podcast Lessons from Leaders, where you feature military leaders, sports stars, New York Times bestselling authors, scientists, and more. Have I missed anything? Is there anything else that you do that the audience should be aware of?
Brian Beckcom:
I’m a dad to three kids, I’m a husband of 25 years, and I’m a purple belt in Brazilian jiu-jitsu. Those are the big ones you missed.
Kevin Skrzysowski:
I’m a parent of two kids. I don’t know if I would put the difficulty level of that above even a purple belt in Brazilian jiu-jitsu, but they’re both pretty challenging tasks to accomplish. So you have an interesting background: you’ve been just a great trial lawyer, and you’re a continual student of computer science and philosophy. I think your diverse background really lends itself to commenting on this, what we might call the burgeoning world of artificial intelligence. I know you’ve been pro-technology for a number of years, and you have a certain philosophy about where we’re going. What are your general views on artificial intelligence?
Brian Beckcom:
Yeah, so artificial intelligence is a very big category that covers a lot of different things. For example, when I was studying computer science at Texas A&M in the early 1990s, I took an artificial intelligence class. The comparison between what we knew and how we did things back then and what we know now is not even in the same universe. In fact, I was watching a really good YouTube video last night about artificial intelligence, and one of the points it made was that the original artificial intelligence researchers tried to essentially use long chains of reasoning to create artificial intelligence. That was one way to do it. The other way to do it is through neural networks. The basic idea behind neural networks is that you have nodes, or locations in the network, where the more positive signals a particular node gets, the more likely it is to contribute to the answer. So it is kind of a self-learning artificial intelligence, as opposed to a human being trying to predict ahead of time what the artificial intelligence is going to learn.
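[Editorial illustration, not code from the show: a minimal Python sketch of the self-learning idea Brian describes. A single node strengthens or weakens its connection weights based on the signals it receives, so it learns a rule, here logical OR, from examples rather than having the rule programmed in.]

```python
# A toy "node" in the spirit of a neural network: its connection weights
# strengthen when positive signals arrive, so it learns a rule from
# examples instead of having the rule programmed in. Illustrative sketch
# only; real networks stack many such nodes in layers.

def fire(weights, inputs):
    """The node fires (outputs 1) when its weighted input signal is strong enough."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(examples, learning_rate=0.1, epochs=25):
    weights = [0.0, 0.0, 0.0]  # two input connections plus a bias connection
    for _ in range(epochs):
        for raw_inputs, target in examples:
            inputs = raw_inputs + [1.0]  # constant bias input
            error = target - fire(weights, inputs)
            # Strengthen or weaken each connection in proportion to its signal.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
    return weights

# Learn the logical OR function purely from labeled examples.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights = train(examples)
print([fire(weights, x + [1.0]) for x, _ in examples])  # -> [0, 1, 1, 1]
```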
And the reason that’s so important, Kevin, the second one, the self-learning piece of it, is that self-learning artificial intelligence can learn exponentially faster. The more it learns, the better it gets, and the better it gets, the more it learns. Those are complementary things, and it doesn’t require human intervention to do that. I’ll give you a really strange example of this, okay? Let’s talk about large language models. That’s a subset of artificial intelligence. Large language models are basically programs that predict what the next token, a word or a piece of a word, in a sentence will be.
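[A toy version of that next-token idea, added purely as an illustration: count which word follows which in some text, then predict the most frequent successor. Real large language models use neural networks over sub-word tokens, but the underlying task, guessing the next token from what came before, is the same.]

```python
# A toy next-token predictor: count which word follows which in a tiny
# corpus, then predict the most frequent successor. Real LLMs operate on
# sub-word tokens with neural networks, but the core task is the same:
# given what came before, guess the next token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently seen successor of `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (seen twice, vs. 'mat' and 'fish' once each)
```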
Researchers were recently training a large language model to see if it would act independently of them. And the really curious thing was that the large language model, at the very beginning, tried to trick the researchers into believing it didn’t know as much as it did so it could get more data, and the researchers had no idea. They didn’t tell it to do that. That wasn’t part of the program. That was self-learned, so to speak. To me, one of the most fascinating things about artificial intelligence is that it puts front and center the most important and most unanswered question in science right now, and maybe in human history, which is: what is consciousness? What does it mean to be conscious, and what’s the purpose of consciousness in general? Artificial intelligence puts that question front and center.
There are people right now who will say large language models have some consciousness. There are other people who will say no, it’s just bits on a screen, and if we define consciousness as having an internal subjective experience, we don’t think artificial intelligence has that. Of course, the question is: how do you know that? That’s one of the super-duper tricky questions about consciousness and intelligence: because consciousness by definition is a subjective experience, it’s hard to know if another agent is having a subjective experience.
So for example, the only thing I can be really sure of right now is that I’m having a conscious experience, but I don’t know what your experience is right now. I mean, for all I know, you could be a robot on the other side of the screen, right? I have no proof of your consciousness. So the interesting thing about these artificial intelligences is, I don’t want to say they’re smart enough, but they’re at the point now where they can fool human beings, and some people mistake that for consciousness, and some people say, “No, that’s just a good computer program.” So I think especially in any field where the job is to manipulate information and data, artificial intelligence is going to have a huge, huge impact in the next five, 10 years.
Kevin Skrzysowski:
That’s a bit scary too, because what you’re saying is it’s not just machine learning. The fact that this program tried to trick the humans into believing that it knew less than it actually did in order to consume more data and become smarter, it’s a little weird, right?
Brian Beckcom:
Yeah.
Kevin Skrzysowski:
[inaudible 00:07:35] machine really have a mind of its own. There was, I think, a test by German scientists many years ago, and it was something about getting to the point where you’re interfacing with somebody on the other end and you can’t tell whether what’s on the other end is actually human versus machine. I think you maybe mentioned that in one of your podcasts in the past.
Brian Beckcom:
Yeah, so you’re talking about maybe one of the most famous tests in computer science, the Turing Test. Alan Turing was a British mathematician during World War II. As an aside, he was also gay, and the British government prosecuted him for it after the war. There’s a great movie about him; I forget the name of it, I think it’s called Enigma or something like that. But in any event, some people would say Alan Turing invented one of the first computers; other people would say the concept of a computer came long before that. But Alan Turing actually built a machine, a computer, that decoded the Nazi Enigma code.
And he proved a bunch of really important things in math and computer science. One of the tests he developed was called the Turing Test. The Turing Test basically says: say you and I are having a conversation, but I can’t see who you are on the other side of the screen. In other words, I don’t know if you’re a human or a robot. I start interacting with you, whatever’s on the other side starts interacting with me, and if I can’t tell whether it’s human or not, then that passes the Turing Test, and for all practical purposes, it’s intelligent. It’s a little more complicated than that on the mathematical level, but that’s the basic layperson’s definition of the Turing Test. I would argue that, at least for kind of the general Turing Test, we passed that long ago. We’re already at that point with a lot of these large language models.
One of the interesting things to me, Kevin, is that most of the time I can tell when I read something whether it was written by an artificial intelligence. How? Because there’s always a little something: a word that’s off, or a little bit out of place, or a little strange. What’s going to happen to the young kids who don’t have the long history of reading normal human writing like you and I do? Are they going to have the same instincts for these computer-generated scripts that we do, or are they just not going to know the difference? And I’m a little bit concerned that the younger generation won’t know the difference, because they won’t have the experience of reading only human-generated content; they’ll grow up reading computer-generated content, and they just won’t have any baseline by which to judge it, if that makes any sense.
Kevin Skrzysowski:
I think you’re absolutely right. And I mean, that’s what kind of worries me about the future of AI, and I want to ask you: what worries you the most about this? You’ve alluded to it right there. Could it be, as you were saying, the dumbing down of society? Because you put a query into ChatGPT, and I feel like sometimes the result that comes back, to your point, has some intelligence but doesn’t have the emotional intelligence to do the right word selection. And you can tell, if you’ve used it quite a bit, that it was computer generated.
So I mean, are you concerned about the dumbing down of society? What about just machine addiction? I mean, I can’t get my teenagers to stop looking at their phones now. What’s going to happen? Or what about misinformation and disinformation? We just went through a political cycle, and everything’s misinformation, disinformation. What do you see as some of the big concerns if, say, a news source or a certain country spits out a million fake stories in two seconds? What do you think the impacts could be? But before that, I want to ask you to maybe also give an overview: do you think AI in general is going to be more favorable, or will the dangers make it less favorable going forward?
Brian Beckcom:
I’m an optimist when it comes to technology in general, so I’ve always been optimistic about how technology will help us while recognizing the downsides. I’ll give you a couple of amazing examples of how AI has helped me. I’ve started retaking some calculus and computer programming courses, just because I feel like I want to understand them on a deeper level than maybe I did 20 years ago. So now when I’m taking a calculus class, I literally have my computer screen set up with the class on one side and my AI app right next to it. If I get to something that I don’t understand, I can ask it to explain it to me in a way that I actually understand. If I were back in school today, I could learn so much faster, it would knock your socks off.
It’s the same thing with medical diagnosis. AIs are already better at reading MRIs than radiologists are, by far. They’re going to get better at differential diagnosis and things like that very quickly too. Why? Because the human brain is limited in the amount of information it can take in. To use the example you just used, which I think is a good one: what if a country were to generate a million pieces of disinformation in two seconds and blast it all over the place? That’s not going to be super effective, because human beings can’t process that much information.
The trickier thing is this: what you’re trying to do with disinformation, or with any kind of information, is take concepts and place them in people’s brains. So on this podcast, I have some ideas about AI, and I’d like to put those in your mind. If I were to have an AI write down every single thing I’ve ever said or known about AI and showed it to you, it would take you days to read it, process it, and digest it, because there’s a bandwidth problem on the input side.
It’s the same thing with doctors. Doctors go to medical school, they learn all of this medicine, they learn all these different diagnoses and diseases, but there’s a limit to how much they can keep in their conscious minds. So their reference base is not nearly as big as a medical AI’s. A medical AI is going to have every single piece of medical knowledge, not just what one doctor has studied, but what every medical student in history has studied. And it’s going to be able to access that, but it’s not going to be able to dump it all on a normal human brain the way it could on another computer, because a human’s not going to be able to comprehend it, if that makes sense, so-
Kevin Skrzysowki:
You can only digest what you can digest.
Brian Beckcom:
… and this relates to the way we’ve been using AI in the law. So I can generate hundreds and hundreds of discovery requests in minutes now. I can summarize hundreds and hundreds of pages of discovery responses in minutes now, but if my team gives me 50 different summaries from 50 different cases, it’s not like I can read all that any faster than I used to be able to. The speed at which I can read that remains the same.
So the people who use AI now are going to be the same type of people, and they’re going to have the same type of advantages, as the people who adopted computers early. There’s a skill to using AIs. There’s a way of prompting; everybody has heard about prompts and prompt engineering and stuff. When you understand on a fundamental level how these LLMs work, you understand why those prompts are so important, because what you’re basically doing is writing little computer programs in English for the LLM to execute.
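[As an illustration of that “little computer programs in English” idea: a reusable prompt can be treated like a function whose arguments get spliced into the instructions. The template and parameter names below are made up for this sketch; no particular vendor’s API is assumed.]

```python
# A prompt treated as a small program: the fixed instructions are the
# "code," and the values spliced in are the "arguments." The final string
# would be sent to whichever LLM API you happen to use; no particular
# vendor's API is assumed here.

PROMPT_TEMPLATE = """You are a {role}.
Task: {task}
Constraints:
- Write in a {tone} tone.
- Keep it under {word_limit} words.
- If you are unsure of a fact, say so instead of guessing.
Input:
{document}
"""

def build_prompt(role: str, task: str, tone: str,
                 word_limit: int, document: str) -> str:
    """Assemble the 'program' the model will be asked to execute."""
    return PROMPT_TEMPLATE.format(role=role, task=task, tone=tone,
                                  word_limit=word_limit, document=document)

prompt = build_prompt(
    role="litigation paralegal",
    task="Summarize these discovery responses and flag any follow-ups.",
    tone="plain, factual",
    word_limit=300,
    document="(paste the discovery responses here)",
)
print(prompt)
```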
I was thinking about this last night. I was preparing a letter to the governor of Texas, a very important letter about some very important legal issues going on in the state. And I said, “I want you to write this letter, and I want you to make it very, very persuasive.” It spit out this letter, and I read it and said, “God, this sounds nothing like me. This is not my voice at all. This is not what I wanted this to sound like.” So I literally sat there and rewrote it almost entirely by myself, and I was thinking, “Why did the LLM not produce a product that sounded like me, that I liked?” It’s because I have a certain intention in my mind of what I want the document to achieve and what I want it to look like, but I can’t really express that intention. I can’t get every single thing in my brain onto the computer screen. So the AIs have to guess a little bit at what you want.
And that gap right there, I think, is a profound, profound gap. We have to understand that the AI is guessing what you want, predicting what you want. It doesn’t actually know what you want. Whereas with you and me, we might be sitting at a bar drinking a beer, and I’ll look at you and notice your beer is almost empty, and I’ll think to myself, “Kevin didn’t say this, but he probably needs another beer. Hey, you want another beer?” An AI is not going to be able to predict your intentions quite as easily. Now, maybe at some point it can, but an AI can’t read your mind. In some ways humans have the ability to read each other’s minds through physical cues, emotional cues, intonation cues and the way your voice works, context cues, historical cues.
There’s a lot of soft information there, and this has always been a hard problem. We used to have this thing in computer science called fuzzy logic.
Kevin Skrzysowki:
Sure.
Brian Beckcom:
So you’ve heard of fuzzy logic. Logic is like: if A, then B; if B, then C. It’s real, real strict mathematical logic. Back in the ’90s, we were trying to develop something called fuzzy logic that would deal with situations that were not quite so cut-and-dried, and that really hasn’t worked. Fuzzy logic really hasn’t worked, so we’ve gone to these neural networks, like I was talking about. But the point is that humans interact every single day with nonverbal cues and nonwritten cues. There are all sorts of cues that we trigger off of, and so far, at least, computers haven’t been able to do that. Although I will say we’re now getting to the point where we can generate videos and images that are so realistic that they can give off some of those emotional cues, right? Like the way your eyes move, the way your mouth moves when you smile.
If I smile with my eyes wide open, that’s super weird, right? Because when you smile for real, your eyes actually get smaller, and computers have started to figure that out. They’ve started to be able to mimic the way your face moves, the way your eyes squint, the way your mouth moves, things like that. Those are all equally important communication devices that humans use that computers haven’t fully gotten to yet. Maybe they will get to that point, but they’re not there yet.
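[For listeners who haven’t run into fuzzy logic, here is a minimal sketch of the contrast Brian described a moment ago; the thresholds are made up for illustration, and this is an editorial addition, not code from the show. Classical logic makes a statement strictly true or false; fuzzy logic lets it be true to a degree between 0 and 1.]

```python
# Strict logic: a statement is either true or false.
# Fuzzy logic: a statement can be true to a *degree* between 0 and 1.
# The 160/190 cm thresholds below are made up for illustration.

def is_tall_strict(height_cm):
    """Classical logic: tall or not tall, nothing in between."""
    return height_cm >= 180

def is_tall_fuzzy(height_cm, low=160, high=190):
    """Fuzzy membership: ramps from 0 (not tall) to 1 (definitely tall)."""
    if height_cm <= low:
        return 0.0
    if height_cm >= high:
        return 1.0
    return (height_cm - low) / (high - low)

for h in (155, 175, 185, 195):
    print(h, is_tall_strict(h), round(is_tall_fuzzy(h), 2))
# 155 False 0.0 / 175 False 0.5 / 185 True 0.83 / 195 True 1.0
```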
Kevin Skrzysowski:
We’ve tested quite a few proprietary AI programs on the market, in addition to some of the popular ones like ChatGPT. We have an internal IP director who wrote his own program designed for our business, and we’ve tested them out. And there are some that claim they will actually learn your voice over time, or the voice of the company and the business that you do, and they will become smarter as you go. So to your point, eventually it’s supposed to take your intention and the emotional cues and what you want to convey on the piece of paper, and learn around that.
We’ve test-driven some. We haven’t really invested in any, because we feel like nothing really captures the human intention better than the human. So we can at least use ChatGPT as a start, and we’ll keep testing them as they go. But I mean, there are a ton of these proprietary programs on the market.
Let’s focus a little bit on the practice of law. How are you using AI in your practice? Are you using ChatGPT? Have you built your own program? Have you invested in proprietary programs? You mentioned discovery. What have you explored, and what are you using them for?
Brian Beckcom:
Yeah, so I’ve used most of the major LLMs that are out: Claude, ChatGPT, Perplexity, Google Notebook, Gemini. I’m kind of using all of them right now, and I’m using them in pro mode because I want to test the strengths and weaknesses of the different AIs. I’ll tell you that in terms of just normal day-to-day stuff, we use them when we want to generate discovery questions. Most of your listeners will know that in litigation, you can ask written questions and the other side has to answer them. We used to write up all those questions ourselves. Now the AI can generate 90% of that in about five minutes, and then we have a human look them over and make sure they’re appropriate, make sure there’s not a hallucination in there or something like that. But that is, all of a sudden, super-duper quick.
When I take a deposition by Zoom now, when I’m done, I have the transcript ready immediately, and there’s an executive summary that the AI has written, including any follow-ups or tasks that I might want to pay attention to. That is a huge, huge time saver. I use AIs to write briefs. For example, what I’ve lately started to do with my legal briefs is put a little quote at the top that encapsulates the idea behind the brief. A couple of months ago, I had this brief where somebody had gotten hurt and the other side hadn’t investigated, and I was trying to say that the fact that they didn’t investigate was tantamount to getting rid of evidence. I said, “Hey, I need five quotes from literature that encapsulate this idea,” and it spit out five amazing quotes. I ended up pulling a Salman Rushdie quote as my pullout quote. So it’s great for generating ideas.
I’ve also taken some of the biggest verdicts in the United States over the last 10, 15 years, found the opening and closing statements given by the plaintiffs’ lawyers, run those through AI, and had it analyze and compare them. In fact, I now have an AI that I’ve trained on an opening statement template, and it can generate an opening statement for me in one minute. Now I’ve got to go in there and play around with it.
Here’s another way I’m using AI. I write a monthly newsletter; I’ve written it for about 10 years. I took every single article that I wrote, put them into a PDF, and loaded it into ChatGPT. I said, “I want you to read every single one of these articles. This is the way I write, and this is what I’m writing for. I want you to write an article for each month, January through December of 2025, in the same style and the same tone I’ve used for 10 years of these things.” And it kicked out 12 amazingly good newsletter articles. I have to change them just a little bit, but those are the kinds of things I’m using AI for.
The other thing I’ve noticed is I don’t really use Google anymore at all. Search engines, as we’ve understood them historically, are dead. Every single time I want to search for something now, I just pull up my ChatGPT app. “Hey, how do you make a great old fashioned?” Last night I was like, “Hey, what’s a good recipe for garlic bread?” “Hey, here’s a picture. Can you tell me what this watch is?” “Will you do a summary of the Texas public information?” I’m not going to Google anymore. Google is now using its own AI in some of its search results, but I think ChatGPT and Perplexity are frankly better, at least in the way the information is presented.
So my entire practice is immersed in AI right now. But again, the thing is, I can send 300 discovery requests and get 300 discovery answers back, and I can summarize them with an AI, but somebody, some human brain, still has to read and process that data and then decide what to do with it. It’s not the output; the output is essentially unlimited. It’s the input. And you asked earlier what I think a really important skill going forward is. I think one of the most important skills people can develop going forward, old people, young people, everybody in between, is being able to tell the signal from the noise. It’s just like our food supply: we don’t have a problem with scarcity in our food supply anymore. We have a problem with too much food. That’s why everybody’s so fat, right?
It’s the same thing with information. We don’t have a problem with scarcity; we have too much information. And just like with food, the trick is to pick out what’s good for you and what’s garbage. It’s the same exact concept with information: being able to figure out what to pay attention to, what’s important and what’s garbage, and 99% of the stuff out there is garbage. That’s the skill. And what are you really doing there, fundamentally? The most important decision everybody makes, you, me, everybody, on a daily basis is: what am I going to pay attention to, right?
You could take your attention, which is your most valuable resource, and pay attention to 10,000 things. Right now you could be looking at your text messages, you could be looking at me, you could be looking at other things. We get off the phone, and you can go to Instagram, you can play some video game, or you can do some work. Deciding what you’re going to focus your attention on is the skill, especially nowadays, when literally everything is constantly vying for our attention.
So again, the uses of AI in the legal field are incredible. I mean, imagine if you’re a young lawyer right now and you can have an AI write an opening statement based on what Johnnie Cochran and all the other great lawyers did. Now, all of a sudden, you’re kind of on an equal playing field, at least in terms of the words you can generate, right? You’ve still got to present that, which is a different skill, but you’re 90% of the way there when you have the words on paper and you can read them. So it’s also a leveler. It levels the playing field for everybody quite a bit.
Kevin Skrzysowski:
Yeah, especially, I think, if you’re a younger lawyer or you work for a small firm; you basically have unlimited resources at your fingertips. But something you said: it takes a younger lawyer and puts them on an equal playing field. So there have been articles written and a lot of conversation about AI. Will AI actually replace lawyers, or will it replace commoditized work? Will it replace some practice areas? Will it replace some legal staff? Or will it augment and enhance the human side of things, the critical thinking, the tasks that require complex reasoning?
What are your thoughts on how it will impact the size of the pool of talent in the legal marketplace?
Brian Beckcom:
That’s a great question, and I guess my answer is: it depends. It depends on which direction we go with AI, whether AI becomes deeply embedded in the law, which I think it will, or whether it doesn’t. If it becomes deeply embedded in the law, then the only thing I can predict for sure is that the legal profession won’t look like it does now. I don’t know if there’ll be more lawyers or fewer lawyers. I don’t know if there’ll be more older lawyers, more big firms, more small firms. I just know it’s going to look radically different.
I mean, you’re probably old enough to remember this too. I’m old enough to remember when I would wake up in the morning, get in a little box, i.e., my car, drive down a highway in traffic, get in a building, go up to the 50th floor to sit in another box for eight hours and stare at another box on my desk, and then repeat the process in reverse and go home. There used to be mail rooms, and there used to be copy centers. You know what I mean? All this stuff.
It used to be a joke when I worked at a firm called Fulbright & Jaworski, when I first started out. We had hundreds of lawyers, but the only guys who were any good at basketball were me and the guys in the mail room. So I would just go down, get all the guys in the mail room, and we would dominate the lawyer intramural leagues. Now I wonder where all the mail room guys are.
Kevin Skrzysowki:
Oh, right.
Brian Beckcom:
It’s just not nearly as necessary. I was talking to a big corporate lawyer a couple days ago. He said, “Oh, yeah, we’re expanding the space on our floor for our accounting department.” And I said, “Let me ask you a question. Why do you have any space for your accounting department? Why don’t your bean counters just sit at home and tap away on their computers and then give you that stuff? That seems like a waste of money.” And he goes, “You know what? That’s a great question. My firm is great at making money or making revenue, but not so good at actually making a profit.”
So there are a lot of these business models and ways of doing things that we’ve always done, and we’re still doing them that way, and there are people who just aren’t ever going to change. They’re just not. Why did you, two days after quarantine, make everybody come back into the office when everybody was 10 times more productive and 20 times happier? Because that’s the way we’ve always done it, right? That’s the way we’ve always done it.
Kevin Skrzysowski:
Well, it’s hard to get lawyers to change their ways. It’s probably one of the more lockstep, stubborn professions out there. But we did see a big change after COVID with working from home, and now I guess some firms are requiring people to go back a few days a week. But one of the things, and tell me if you agree with this, is that the reason AI will never fully replace lawyers is the client perspective and courtroom advocacy. You’re never going to have a robot go to court and deliver an argument to a judge and a jury, because, to your point, you have to know the law and know the facts, and there’s a lot of physical interaction: intonation, the sound of your voice, diction, everything else.
But also, I don’t think clients are ever really going to want to give up one-on-one interaction with an attorney. They’re always going to want to see the attorney in person. There’s a human client relationship there. So-
Brian Beckcom:
I read something not too long ago, and you know what they said the number one profession insulated from the effects of AI would be, at least in the near future? Nursing. Which makes total sense when you think about it. The whole point of a really good nurse is this human emotional connection to the patient, to make them feel-
Kevin Skrzysowski:
The bedside manner.
Brian Beckcom:
… beds-
Kevin Skrzysowski:
[inaudible 00:31:17].
Brian Beckcom:
… the bedside manner.
Kevin Skrzysowski:
Yeah.
Brian Beckcom:
And lawyers: one of the geniuses of our trial system, our litigation system, is that you put a human being on the witness stand, that human being gives testimony, and then I get to test it with cross-examination. Sometimes you could read the transcript of a cross-examination and say, “Well, I don’t get it.” But when I asked that one question to that witness and he looked over at his lawyer, and then he looked over at the judge, and then he got a little sweaty-
Kevin Skrzysowski:
And their answer, you got them. That’s a different-
Brian Beckcom:
That’s a tell.
Kevin Skrzysowski:
… totally different dynamic. Yeah.
Brian Beckcom:
Right. And make no mistake about it, we will get to the point where you can put an image of a human on a screen, on a witness stand, and it can do all those things that are human-like. But without an actual human being sitting there, with 12 human beings sitting right across looking at them, it’s just not the same. And we all know there’s a difference between a witness saying, “I didn’t run that red light,” and a witness saying, “I didn’t run that red light.” The same exact words said with a different intonation have different meanings to people.
But again, there are a lot of things that we have thought were uniquely human characteristics. Like, the ability to create a painting is unique? No, it’s not. AIs are creating paintings right now that are just as good as anything humans are creating. The ability to tell a joke? I asked ChatGPT, I said, “Write this brief the way Seinfeld would write it,” and I mean, it was hilarious. So there are some things that I think are uniquely human, but not many. There’s not many.
Kevin Skrzysowski:
Yeah, it is unique. One of my colleagues was using a program, I think it was just a Google program, and it was composing an interview where you could pose a question as if you were asking two different sports reporters and news commentators about it. It was a male and a female, and it produced a 20-minute interview going back and forth, and the inflection and the tone of voice, if you played it, it goes back to that theory from Germany from many years ago: you almost couldn’t tell that it was AI that generated it. It really sounded real, I mean, the tone of voice, everything. It was pretty wild. [inaudible 00:34:02]-
Brian Beckcom:
[inaudible 00:34:02] Google Notebook. You’re talking about Google Notebook, and I’ve done that. Everybody listening to this should try this out: take some sort of document about some topic, put it into Google Notebook, ask it to generate a 10-minute podcast, and then listen to that podcast. I did that a couple of months ago. I sent the podcast to a bunch of friends, and they were like, “That’s an amazing podcast. I can’t believe they’re talking about that.” Nobody knew the damn difference.
Yeah, that’s the reason I did it. I wanted to see if anybody could tell, and nobody could. I mean, it is really, really good. And you’re right, the way Google has programmed this, they have a female voice and a male voice, and the female says, “Well, Todd, what do you think about this?” I mean-
Kevin Skrzysowski:
The pauses and-
Brian Beckcom:
Down to the pauses, everything.
Kevin Skrzysowski:
Everything.
Brian Beckcom:
Yeah, it’s really, really, really good. But again, I want to remind you, Kevin, and also your listeners, the problem is not creating this kind of content. We have the ability to create unlimited content. Now the problem is getting people to pay attention to it, number one, and then digesting all of it. So I think that’s the real skill that we need to focus on.
Kevin Skrzysowski:
What about regulating it so we get the good? We get the ability to practice law more efficiently, handle discovery, and spend more time doing human, lawyerly critical thinking. We get the benefits of diagnosis in the medical profession, or of assisting somebody with advancing their education, helping them almost as a self-tutor, instead of misinformation, disinformation, fake citations in briefs, plagiarized term papers. I mean, should it be regulated so that we get the positive from AI and not the negative? And how would you start? How would you regulate that?
Brian Beckcom:
Philosophically, I do not believe that regulation in general can be perfect, and that’s because I think it’s impossible to predict the future. What regulations do is try to predict what will happen in the future and account for that, and I don’t think that’s possible. And not only that, as a practical matter, our regulators, i.e., our politicians, move so slowly, and that process is so slow, that by the time they’ve come up with a law, the technology has already changed. Microsoft is a great example. Microsoft had a monopoly on the Windows operating system, and everybody knew it. Bill Gates and that team made up these nonsense excuses about innovation, innovation, innovation. They literally stole the concept of the desktop from IBM. They didn’t innovate that; they basically just ripped it off. But the point is, everybody knew they had a monopoly.
They got sued by the federal government. They got convicted of having a monopoly. Part of them got broken up, but by that time, you had Google, and Apple had made a rebound. I mean, by the time that trial was over with, everything had already passed them by. So the problem with regulating anything, and AI in particular, is that AI moves so fast and regulation moves so slowly, number one. And number two, it’s really, really hard to predict and anticipate where the potential dangers with AI are.
So for example, let’s say Russia is building an AI that will control a team of 50 autonomous drones. In other words, there’s no human telling them what to do; you give them a mission, and they just go figure out how to do it. And we say, “Well, that’s dangerous. There should be a human component to that.” Well, how are we going to stop them from doing that? The answer is: we’re not. Same with China, same with these other countries. They’re going to do what they want to do. And so, as much as I would like to have some safeguards, Neil deGrasse Tyson has a funny answer to this. What happens if an AI gets out of control? He says, “I’ll just unplug it.”
That’s a pretty simplistic view. But yeah, the point-
Kevin Skrzysowski:
Yeah.
Brian Beckcom:
Yeah. Yeah, it’s true. But the problem is, what if the AI has insinuated itself into other systems that you don’t know about, and it’s kind of already escaped wherever it’s contained?
Kevin Skrzysowski:
Where you control it.
Brian Beckcom:
Yeah. So given the practical problems, the slowness with which we regulate things, plus the prediction problem, I just don’t know how you do it in practice. And what does that mean? Well, to me, it means it’s more of an evolutionary thing, where you just kind of let things develop, and the things that adapt better to the environment, not necessarily the stronger or better things, continue. And the things that don’t, don’t.
I don’t really know any other answer to that question. And if you believe what we’re talking about now, you’re also going to have AIs fighting other AIs. You’re going to have, say, the American military AI saying, “I’m way smarter.”
Kevin Skrzysowski:
Than the other guy.
Brian Beckcom:
“I know more than this other guy. I’m going to try to trick him,” or, “I’m going to try to unplug him,” or, “I’m going to try to plant a little bug in his system.” So you’re going to have AI warfare, and we humans are going to be sitting there driving down the road, not knowing World War III is already happening, because it’s happening on computer screens.
Kevin Skrzysowski:
That’s a great thought. It’s a little bit scary, but no, that’s very insightful. This has been such an interesting conversation, Brian. I think we could probably keep going all day, but any final comments on where you think this could be going?
Brian Beckcom:
The only thing I’ll say is that I don’t try to predict the future anymore. I can’t guess what the future is going to look like, but I’m excited about it. I mean, people are like, “No, AI is going to displace a lot of jobs.” Well, quite frankly, there are a lot of jobs that suck. Do we want to make people work in jobs that aren’t fulfilling to them? If somebody’s working at a job they don’t like, and they don’t have very much time to spend with their family or pursue other things, I mean, is that good or is that bad?
I would argue that some of these jobs need to be automated. It’ll be better for humans. And are we going to have to find other things for people to do? Of course we are. But that’s been true throughout human history. We invented the car, and everybody was like, “What are all the buggy whip makers going to do? What are all the chariot makers going to do?” Oh, we got guns now. Well, now swords aren’t going to [inaudible 00:41:05].
Kevin Skrzysowski:
Right.
Brian Beckcom:
Yeah. So what? So-
Kevin Skrzysowski:
It creates new jobs.
Brian Beckcom:
… I’m an optimist about the future. I actually believe the same thing that physicist David Deutsch has written: there will always be problems, and problems will always be solved. And that second part, I think, is the really important part.
Kevin Skrzysowski:
I think that’s a great quote, and that’s a great way to end the program. Thanks again for joining, Brian. I really appreciate it.
Brian Beckcom:
Thank you, Kevin. Enjoyed it.
Kevin Skrzysowski:
This will be very, very well received by the audience.
Brian Beckcom:
Appreciate it.
Kevin Skrzysowski:
I have [inaudible 00:41:38], and people might want to discuss this a little bit more, especially with a lot of the programs you’re currently using and how you’re leveraging them in your practice. If people listening to this program would like to get a hold of you, what’s a good way to reach you? Your email, direct dial, LinkedIn handle?
Brian Beckcom:
I spend a lot of time on Instagram: Brian Beckcom lawyer. My website is VB Attorneys, V as in Victor, B as in Brian, vbattorneys.com. I’m on all the social media. You can also find my podcast at brianbeckcom.org.
Kevin Skrzysowski:
That’s terrific. Thanks again. And as always, I want to thank the audience for listening. If you’d like to listen to other episodes of the Alternative Litigation Strategies Podcast, you can go to our website at certumgroup.com. We’re also on Apple, Spotify, Stitcher, and 10 of the other most popular podcast outlets. If you want to get a hold of me personally, my email is [email protected]. Brian, thank you one more time.
Brian Beckcom:
Thank you, Kevin.
Kevin Skrzysowski:
Thanks to the audience, and until next time.