Okay but you can’t lie. It’d be pretty damn funny to listen to an 800 year old judge talking to a machine lawyer that is basically just one of those answering machines that always asks you to repeat your question or answer.
To be honest I know a few Federal COA Judges who would probably be willing to do an En Banc panel exercise just to see the concept. e.g. They pick a historic case to re-visit, limit the case to the filings in the lower court and with the assurance that the bot could not be trained on the previous appellate opinions or records, and see how this would play out. At least when I was in the First Circuit many of the judges had a good curiosity around technology and had tried previous matters like the Microsoft Anti-Trust Case, the Sun Microsystems Java/Google Case, etc.
Alternatively, if it turns out the bot isn't as good as it advertises, it could be a cool way to train law students at later stages of their education. Give them a doc/argument/motion drafted by the AI and have them find issues with it and/or suggest improvements, better case law, etc. Could even make it a feedback loop, where the students' suggestions are fed back into the AI to improve its skill.
I was looking for this comment 😂 probably made this bc daddy refused to keep paying his parking tickets and he didn't wanna use his allowance on anything other than weed
"Look at me. I'm a robot lawyer, in the Supreme Court, defending a major record company, and I'm talking about Chewbacca. Does that make sense? Ladies and Gentlemen, I am not making any sense. Did I mention I'm a robot, in court, not making sense..." *robot head explodes*
With both Browder and SBF, I get this impression of someone who has never really faced any actual hardship in life, someone so protected from consequence that the rules of society were never really anything other than a game to exploit, and someone for whom actual human behavior is this weird abstraction, seemingly trivial only because they have a very surface-level experience of it. That assumption that something you don't understand must be trivial seems to be the running theme of these tech bros.
Dan Olsen of Folding Ideas put it perfectly: (I'm paraphrasing) "They believe that since they can understand one very complex thing (programming with cryptography), all other problems must be less complex in nature."
Nailed it. People who only scratch the surface of AI never get into the part that involves hard math and physical components. When you get there you realize the limits are not the abilities of the programmers; the physical parts required to run these delusional programs are several years away. If this technology were doable, Google would already have made one to sell and to fix the copyright hell they have on YouTube, just saying.
"I am not particularly worried about AI replacing lawyers anytime soon" MUST BE NICE appears in huge letters on the screen. On top of the Clone High JFK edit, this video had top notch editing, for sure.
I was wondering if this was a new development, or if I just hadn't noticed before. I usually just listen to the videos but there is SO MUCH good visual content in this one that's not even acknowledged in the audio.
Coming from the AI side of things: what he's proposing the robot do with the loophole thing is just... not where the technology is at right now. AI is very, VERY good at recognizing patterns in things it has already seen and extrapolating from that. There's a thought experiment called the Chinese Room that's often used to describe how AI perceives things (not completely accurate by any means, but it works as an analogy). Imagine you don't know Chinese and you're locked in a room and given a translation guide with Chinese characters. Every so often you get a paper slid under the door and you have to decode it and send back a response using that guide. Eventually you get very good at it, and even when given unfamiliar characters you can make a good guess at what's being asked by picking out the familiar characters and how they're used. It looks to people on the outside like you speak perfect Chinese, but in reality if anyone were to speak to you in Chinese you would not understand a single word.

The AI does not actually understand the law or its nuances: it is looking at patterns from what it was trained on. AI is only as good as its training set, so if you have a more typical legal case, there is a good chance it could give a decent legal response. HOWEVER, if an AI does not pick up on specific nuances in the case that completely change the outcome (which, let's be honest, often happens), then that legal response isn't relevant anymore, and the more concerning part is that the consumer would likely not realize there was a mistake. It's like using ChatGPT for writing answers on homework: not only would you not know if it made a mistake, but ChatGPT and a lot of natural-language AI have a certain syntax some people can pick up on and realize it was written by an AI, which on homework is straight-up plagiarism and in a court of law would prejudice the court against you to some degree.

The smart thing to do here would be to make an AI to assist lawyers, not try to do their jobs. Lawyers are human and miss things that an AI could pick up on. Public defenders who don't have much time per client could do their jobs much better if there were an AI to review the case and suggest certain laws, known nuances, or legal precedent so the lawyer could look those specific points up and see if they apply, instead of potentially not being able to offer a proper legal defense because they have less than an hour to prepare per client.
yup. I gave chatgpt a request to give me an example of a bird from each order, and it proceeded to give me 24 birds from largely the same order, and with repeats. further instructions to correct it resulted in similar answers. I would NOT trust it right now with giving completely factual, much less nuanced, responses to very difficult lines of inquiry.
The Chinese Room argument is fundamentally flawed, because it uses circular logic. It starts from the assumption that a human mind can't be replicated by a set of rules, and from there proves that a human mind can't be replicated by a set of rules. The actual answer is that the room itself speaks perfect Chinese, and the human inside is just a small part of the machine. Again, you start from the assumption that the human mind is basically magic. If we instead assume that wizards don't exist, then the human mind is a machine, something like the Chinese Room. There's no other option. And since we aren't living in the Middle Ages, we also have a pretty good idea how it actually works, both hardware and software.

As for understanding, what does it mean to understand something? Again, without involving magic. How do we test understanding in practice? In science we make predictions and then run experiments to test them. If our predictions match experimental results, then our understanding of the subject is likely good. Tests in school are the same, except experiments are replaced by the teacher's knowledge. General problem solving is also the same: we think of a solution and try it out; it's essentially prediction too. And the way we make predictions is essentially just pattern matching. Science is an excellent example of this, because it's a very formalized process. In science, predictions aren't based on gut feelings but on abstract models and equations. We take a large amount of data, find patterns in it, then combine patterns into models. A model is an extremely simplified simulation of reality that still closely matches its behavior. For example, when we predict where an artillery shell will impact, the only properties of it we need are its velocity, mass, drag coefficient, and cross section. We don't need the exact quantum state of all of the subatomic particles it's made of.

Based on this, AI is entirely capable of understanding. It has to understand things to some degree, because it makes mostly correct predictions. A good example of this is how ChatGPT does math. It's a language model; it never explicitly learned math and doesn't have access to a calculator, yet it knows math. It saw math in the vast amount of text it learned on, but those were just specific examples. Even just to do simple calculations it has to understand how basic arithmetic operations work. It of course read the rules too, but that doesn't make it easier: it still needs to build a mental model, plus it needs to understand language first. We know for sure that it uses the model it built on its own because it makes mistakes, and often human-like mistakes. For example, it performs worse with large numbers, which would be odd for a calculator but very natural for a human.

And actually, AI is better at picking up on nuances, because it can process a far larger amount of data than humans and pay attention to all details equally. That's actually a limitation for higher intelligence, but a superpower when you need to obsess over seemingly irrelevant details. It's a bit like autism.
Even if the AI was perfect, and could give a better answer than a human in 100% of cases... We still wouldn't use AI in places of authority/trust like that. Psychologically humans need to hear it from another human, because we need to know that empathy is involved. Humans don't trust an AI as much as a doctor even when they say the same thing.
@@andrasbiro3007 The reason it fails with big numbers is simple. It sees 2+2=4 in its training data, but it doesn't fully understand why 2+2=4. If it understood the simple rules of addition and subtraction, then how big the numbers are wouldn't matter. Humans make errors because we are imperfect and complex biologically based machines. An AI doesn't have the same types of limitations that people do. AI does have limitations (for instance, it would be hard for a text-based AI to have a concept of vision or touch, and its perception of those is only as good as its training source; it has to be taught those things), but just because it's making errors similarly to how a human makes them doesn't mean it's coming to the conclusion the same way a human is.
@@Kimmie6772 Yeah, and for simple addition, humans who remember how will break out a pencil and paper for large numbers, which lets them get it pretty much perfect, though it takes time. In this case, we can concretely define understanding: is the model replicating the addition problems it has seen, or is it actually doing arithmetic? (A rough sketch of how you could test that is below.)
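One way to actually run that test, as a hypothetical sketch: `ask_model` below is a stand-in for whatever chat-model API you have access to, not a real library call; the rest is plain Python grading the model against exact arithmetic.

```python
import random

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to your language model of choice.
    # Replace with a real API call that returns the model's text reply.
    raise NotImplementedError

def probe_addition(trials: int = 20, digits: int = 12) -> float:
    """Ask random large-number additions and grade them with exact arithmetic."""
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        reply = ask_model(f"What is {a} + {b}? Reply with only the number.")
        reply_digits = "".join(ch for ch in reply if ch.isdigit())
        if reply_digits and int(reply_digits) == a + b:
            correct += 1
    return correct / trials
```

If accuracy stays flat as you crank up `digits`, that looks like real arithmetic; if it degrades the way this thread describes, that looks more like pattern-matching on examples it has seen.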
When you mentioned Browder’s dad, you forgot to mention that his great-grandfather was Earl Browder, the leader of the US Communist Party. I’m sure he’d be thrilled to see his descendant become a tech startup bro 😂
Letting unfeeling AI run our legal system is a terrible idea. Even now with it being run by beings that are (usually) capable of empathy it's still a mess. This is one of the many areas we need to keep AI away from.
To be fair, a plain logic circuit fed all the information on a justice system would probably give better results in everything that goes up to a trial (think about the raw amount of fake and troll IP cases, or even non-starters, that plague effectively the entire globe). The real problem there is that minor and pointless things that should get thrown out due to context would then get stretched out, and big corporations couldn't simply buy off the entire county where a trial is held to e.g. extend their monopoly over a market for another 20 years.
As someone who believes in the potential of AI, I'm baffled that this Tech Bro didn't simply set up a series of moot courts where his company could test his program, identify areas of improvement, and refine it until he has a solid product
Because that would be smart & reasonable. But remember: this isn't a college-educated computer scientist making strides in AI technology, this is a tech bro. Their game is looking smart, not being smart.
Just watch: this company will get sued into bankruptcy, but the conglomerate that buys the tech off their corpse will do exactly that and make something that renders a number of law specialties obsolete. If I were to guess... 6 years.
He could easily recruit volunteer judges and attorneys to carry out a mock trial where one side is being argued by an AI, and only the audience is aware of which side it is. The fact that he's instead relying on sneaking the technology into a real courtroom is alarming.
Your videos on copyright issues made me nervous about some materials I had planned for a book. Talked to an attorney, who told me it was indeed copyright infringement (very, very unlikely to be sued, but not impossible) and let me know some stuff I CAN do that I didn’t think I could that was EVEN MORE HELPFUL AND RELEVANT. He appreciated that someone came to him *before* there was a problem vs. *after*. Thanks, Legal Eagle!
@@bubba200874426 Oh buddy. You do realize that lawyers make way more money cleaning up messes than preventing them, right? An hour or two preventing a lawsuit is way less lucrative than the many hours answering that lawsuit would take.
@@bubba200874426 actually, he waived the fee. Offered to pay him for his time, but he said if we wanted to we could send him a copy of the book when it was printed. Nice guy!
I'm currently a pro se litigant (AKA a fool). I wouldn't trust AI to manage my case, but it'd be nice if it checked rules of procedure and basically drafted all the forms I need. I think this kind of thing can be incredibly helpful if done properly, but not for use as a sole resource.
This is where I see the biggest potential benefit. Our legal system was INTENDED so that pro se would be a viable option, that's just not how things ended up in practice.
@@dangerszewski9816 , I mean, everything basically started out as something people could do for themselves, and, yet, you are still better off getting a professional mechanic, IT guy, plumber, roofer, etc. for the job. Someone who does it for a living will always be better than someone who just does it when they need to. No one can be good at everything.
This already exists. Law firms, and especially government institutions with a lot of legal paperwork, tend to commission systems to search and organize legal information among mountains and mountains of paperwork. AI is a really good tool for research, although not so much for making sense out of it.
Lawyers are a chaotic neutral, you can't go against them cuz you'll need them one day to avoid a wrongful imprisonment, or if you have been wronged and don't know the basics of prosecuting
I had 250 custom categories in a tree structure. I asked ChatGPT to analyse the names of the categories and tell me which categories are similar to each other. It did a remarkable job for a very crude instruction set that I cooked up in 5 minutes. However, the output still had to be reviewed and some odd mappings removed... so I can't imagine any lawyer-AI thing would produce perfect output (a rough sketch of a dumb non-AI cross-check is below). This whole lawyer robot could get a lot more agreeable reactions if it marketed itself as a lawyer helper: lawyers need a lot of work hours to go through documents, but they also have the cash to pay for such AI help... But good luck, robot lawyer, no lawyer is gonna believe in this product now.
My thought as well, this seems a lot more useful as a legal tool. I'm studying medical coding (claim codes for billing insurance, not related to computer coding) and Computer Assisted Coding has been in use for a while now. It can scan relevant documents, identify key diagnoses and procedures, and suggest potential codes. It frees up a lot of the coder's time since they don't have to read as much, but they still have the authority on which codes are assigned. It's pretty cool.
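On the category-similarity point above: for what it's worth, a cheap non-AI baseline for "which of these names look alike" is plain fuzzy string matching from Python's standard library. A minimal sketch with made-up category names, useful mostly as a cross-check on whatever the chatbot suggests:

```python
from difflib import get_close_matches

# Toy stand-ins for the real category names.
categories = [
    "Invoices - Vendors", "Vendor Invoices", "Payroll 2022",
    "Payroll 2023", "Travel Expenses", "Expense Reports - Travel",
]

# For each category, list the other names that look similar enough to review.
for name in categories:
    others = [c for c in categories if c != name]
    matches = get_close_matches(name, others, n=3, cutoff=0.6)
    if matches:
        print(f"{name!r} looks similar to: {matches}")
```

It misses the semantic overlaps a language model catches (it has no idea "Trips" and "Travel Expenses" are related), but it also never invents odd mappings, so disagreements between the two lists are a good place to start the manual review.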
Thanks for the vid! I am a teenager wanting to go into law, and was a little concerned when hearing robots were about to pass the bar and enter the courtrooms. I figured something was off, and this video helped tell me what.
They've passed some law exams at most, but basically only because the law exams count things like multiple choice (where the AI got most questions right, though fewer than most students), short free-form answers to questions (the AI sometimes did better than the average student here), and issue spotting (this is your fuckin' meat and potatoes as an attorney, and it bombed those parts entirely).
There might be a life lesson hidden here, I'd wager, about source criticism and general skepticism. Remember that (social) media reporting is often over-sensationalised, mindless parroting of over-simplified subjects. If your initial reaction is one of concern, that's good; it means your internal BS filter is working to raise the alarm. Grab that thought and remind yourself that there is probably much, much more intricacy to a story than the super-abridged version you got, which is thus largely useless to base an informed opinion on. For an understanding of just what I mean by that, I invite you to read up on the Dunning-Kruger effect, and cognitive bias in general - it's an eye-opener, really. (And sorry if I sound patronising, I don't mean to; but when I was a teenager, these things were much less apparent to me 😉.)
Honestly, opposing counsel being able to manipulate a GPT 3 robot lawyer with their comments has hilarious implications. I look forward to seeing the recording of the response when the opposing counsel says “Can the defense repeat that statement in an ill advised Jamaican patois?”
Lol, I didn't even think about this, but that is a hilarious implication that could absolutely work if you get it past whatever ridiculousness test the program (hopefully) uses. Of course, any such limitation on the bot's ability to consider something outrageous might hamper its ability to actually deal with something absurd that happens during the case, so... yeah, it's gonna be interesting lol
Man, life really has got to be easy when you're able to take dumb, dumb risks because you know daddy will be there to back you up if things do go wrong.
Yep, just another techbrat who is willing to take risks with other people's lives, whilst being completely shielded from the consequences of their own vapidity.
Welcome to the actual reality behind pretty well every "Self-made" Man, Woman or Other in the modern age. They all come pre-packaged with a silver spoon.
My baby bro died from addictions because at 14 -18 cops in neighboring towns knew to call our town's cops about a councilman's son using drugs or alcohol. To "spare your mother" our cops and father made any charges disappear. I firmly believe he needed to be held accountable and forced into rehab. Instead he destroyed the lives of his children, lost a marriage and died totally alone in his rental room, lying there decomposing for days. 65 years old. Totally alone. No one knew. Our mother died 10 months earlier at 96. She spent the years between my brother's failed marriage (24) till she passed when he was 64. Who did denial help? Absolutely no one. Trump is pulling the same 💩 and DOJ is 🙉🙈🙊. We, the nation, are in danger right now.
Hey, if this guy wants to demonstrate his creation's effectiveness in a real courtroom... well, there's an easy way to arrange that. He can just keep offering what he's offering, and he can use the chatbot to represent himself!
SCOTUS hearings should be televised. Supreme Court of Canada is televised and democracy and the rule of law are not under threat in-spite of the cameras.
I think they decided to not allow cameras in the court to avoid situations like OJ Simpson; it turned the whole trial into basically a reality show that everyone was following, and that in turn caused the people in the trial to act more like reality TV actors than actually making compelling arguments.
SCOTUS arguments are live streamed via audio. and they’re also not “hearings.” there’s no evidence being tendered or tricks. everything is laid out in written briefs ahead of time. then their decisions are laid out in writing. this isn’t trial court.
@@lostbutfreesoul Can't we talk about this your honor? Please? Pretty please? With a cherry on top!? Your honor, would you prefer a cherry made of dollars or gold bars?🧐 ...why is that bailiff staring at me like this??😕
I read Bill Browder’s Red Notice and it became the inspiration for my thesis and it’s kinda surreal to see how everything is connected. Parts of the book spoke of Joshua as a child and this is not what I thought he’d grow up to be
LegalEagle videos seem to imply that the bailiffs of the US legal system spend their work days waiting -nay-, hoping that some ill-prepared lawyer approaches the bench too closely or brings a forbidden item into the court room. So that they may then gleefully tackle them onto the ground and subsequently forevermore have other people pay for their drinks at the bar in recognition of this act of unquestionable goodness.
As I said on the last AI video, I couldn't see this replacing lawyers, at least not quickly and without reform. I CAN see this removing large groups of paralegals and a lot of other people who need the work, since you can have the AI search for the precedent and summarize it, and you take it to court.
That people 'need' the work will never be a reason tech should not move forward. We're going to have to reckon with the fact that a lot of jobs are a moment (on a societal scale) away from just disappearing into the technological ether, and handcuffing technology is NOT going to be the solution.
I'm not so sure that it'd be a full replacement, but it'd likely shift the responsibility of paralegals to reviewing whatever the AI dredged up. Given how AI currently has a bad habit of making up fake cases in order to perfectly fit the need, I'd be leery of taking any AI-packaged summaries to court without having someone sift through it. I'd want to see AI tackle simpler tasks with less grave consequences for errors before this step as well, and given how grossly unethical AI tech bros behave, I'm not holding my breath.
@@philgunsaules2468 I'll never understand how people can take this position and consider themselves ethically sound individuals. Technology exists to serve people, not the other way around. If it's hurting more people than it is helping, it SHOULD be held back.
@@BerserkerLuke Honestly I think that sort of stance comes from the exact same kind of no-ethics mentality that tech bros also have, where "AI will fix all the ills of humanity" and this overly naive and idealistic pie-in-the-sky kind of cyber-utopian vision of the future. Without looking at what AI is actually doing. (it can't even tell apart sand dunes and bare skin, the people verifying the AI's work are usually hired by the office job version of third world country sweatshops and paid peanuts for said double-checking... Microsoft's own AI's stolen a lot of code from github that wasn't open-source and free to use...)
@@BerserkerLuke Because as long as there is hunger, illnesses we cannot cure due to lack of technological advancement, lack of clothing, and anything else where quicker, more efficient production can help and release human hands for more productive work, we must keep pushing until all those needs are sated. People in jobs filled by machines are a problem, that's true, but that's not technology's fault; that is born from bad policies and bad governments. That's what people have to fight against. People working on technology can only push from their side and hope for the best. Just like motors/engines have pushed our society to higher levels and also been used to power tools for war, the problem is not the science, it's the people.
I've been following this channel for a while now, and I just love how these videos are crazy good. I never would have thought this subject would be so interesting, but that's what I love about YouTube: from PC building to cooking and now lawyers, this is my favourite part of the interwebs.
Reminds me of my Wills, Estates, and Succession Planning professor. He said he loves at-home will kits because about 60% of his work is dealing with issues arising from people who used a will kit when they should have used a lawyer. The issue with any AI is going to be bad input in = bad output out. An AI might believe the BS that comes out of some clients' mouths, but a human lawyer might challenge you on it. Even if your unbelievable story is true (because unbelievable stories happen all the time), a lawyer can at least advise you that no one is going to believe you and there is no point wasting your money paying for a discovery, expert, etc. One further thing is that lawyers can help you navigate when best to file things. It is not always best to file at the earliest opportunity. Sometimes it is better to wait and see what cards the opposition has before making your move.
"Practicing law is more than just arranging magic words in the right order while trying to sound human" says the youtube lawyer, charismatically arranging his words in the right order to make a convincing point.
Lawyers are a joke. They created a system in America to exclude most people from being able to get proper representation in court without getting a law degree and passing the bar. It's a system to ensure that only the privileged middle and upper class can be lawyers, and to ensure that America is truly unequal unless you're a millionaire.
@@jamesb3497 The point is that everything else required to be a lawyer is shit AI already does on a regular basis; it's the coherent and thought-out arguments that it struggles to produce. So while yes, it's more than arranging magical words, if an AI were to be 100% accurate with that, it would be no less capable of practicing law than any other lawyer... it would probably even be better.
@@ryanthompson3737 There's nothing you can't train a language model to do. GPT-4 has flaws because it aims to do everything language-related, not just law and argument.
@MSPaint Koopa If it says that, it only learned it from the language of humans; humans have only their own culture to blame when they think mothers are better suited to be parents. That is a legacy of patriarchal human culture, I might add, where everybody thinks taking care of children is a woman's work and dad's work is going to work.
As someone who went to college for computer programming, hearing that he made the core code in just two weeks and was only working on it in his spare time, I inherently distrust it.
I kind of wish he could get this done just cause there’s no way a chatbot written in a couple weeks by some guy that doesn’t know the simplest thing about law could actually win a case
This is how it's portrayed in every piece of media ever, so it baffles me that everyone is trying to jump straight to the endboss. Even in the art world, artists are panicking that AI will replace them, while we have an overwhelming amount of precedent that job automation is used to crank up productivity per employee, instead of sending a toaster to Supreme Court to replace a lawyer.
AI has not yet advanced to the point of not driving a vehicle into another vehicle, or suddenly screeching to a halt on a freeway for no visible reason.
@@tarcp6224 I mean....same reason as ever. Egotism and profitmongering. This guy was a delusional greedy asshole, and for once, he got what was coming to him.
@@tarcp6224 artists are "panicking that AI will replace them" because most artists who have artist friends now have artist friends who've been laid off to make way for "AI artists"
Wait... I thought LegalEagle was an AI-generated lawyer. There's no way someone can be as intelligent and good looking as he is unless a computer went through thousands of social media profiles and took the best of each to create a talking avatar image on YouTube.
I think one of the greatest things about this channel is that it can take the pedantic and dry content of the law and make them both relevant and entertaining. Yay for the law not being mind numbing.
This reminds me of the DARPA sentry bot. It was really good at detecting humans walking, running and crawling towards it. It was unable to realize that a cardboard box or pine tree moving towards it had a person involved. It also didn't detect the two marines who somersaulted the entire distance to it. A human would have spotted those instantly. AI is a great enhancer and force multiplier. It, like any tool or machine, can make human labor more efficient and productive. On its own, though, it has major, major flaws.
@@nielsunnerup7099 And how exactly would a human figure that out on its own? A baby cries when you so much as cover your face, and yet the assumption here is that we are all born knowing that a cardboard box or tree moving around is actually a human.
Lmao, sounds like the DARPA bot was a massive success. Why on earth would it expect a pine tree to be a person? That sounds like a dangerous precedent, to shoot at anything moving. This is exactly what a ton of comments on this video are getting at: AI still doesn't have general intelligence, and they've always said general intelligence is like 20-30 years down the line minimum. We can't expect it anytime soon.
Really good vid. I think the humorous tone and lack of immediate concern was instantly clear. But can we just talk about some of those cut-in/clips!? Loved the Better Call Saul McGill v McGill bit and the inspirational-'Merica speech but the Bioshock and Andrew Ryan broke me *chef's kiss*
... so then lawyers should be practicing without supervision, and those supervisors need supervisors, etc etc. Let's stop acting as if the concept of AI is somehow inherently different to humans doing those same tasks. On a fundamental level, YOU are just a bunch of electrical signals being weighed against each other to produce a thought or action... nothing different to what AI already does.
@@ryanthompson3737 "as if the concept of AI is somehow inherently different to humans doing those same tasks" not sure if you heard yourself on this. But yeah, AI is inherently different to humans because WE humans are the ones creating them. AI or bots can do the automated stuff but it can only do so much.
@@ryanthompson3737 Yes, they are practically identical, no significant differences at all. AI is a frigging glorified calculator; despite it sharing some minor similarities with human brains, AI and computers have really nothing in common with how brains work. Could people just stop this "on a fundamental level..." BS on subjects that they obviously have very little understanding of? No? Ok then, that doesn't surprise me.
I could see a robot lawyer being better than a bad lawyer, but never a good one; the good ones carry a reputation with the courts, and that reputation can be a big help.
Another really important point is that communication with one's lawyer is protected by Attorney-Client privilege. There is no such protection when one shares information with an AI. How long will it be before the opposing party moves to discover or subpoena those communications? 🤔
Where I could see this being useful is with court-appointed lawyers, where the system is already overworked. I could see it being a useful assistant to a human lawyer for building a case rather than arguing the case itself in real time, providing them a faster way to form better arguments.
I've mentioned this to a friend, but people are both overestimating how much an AI can actually do and underestimating the limits of the tech and how much they're trying to rely on it. They're trying to make it replace people in jobs it can't handle, like it's the answer to all their woes. It's supposed to *help* people, because it's a *tool*.
I'm seeing this too. These tech people see it as a way of proving that technology, created by humans no less, is superior to those humans because humans make mistakes. These same techies can't even see the colossal level of errors their chatbots and AIs generate. As a basic tool it's fine, but beyond that they're pretty fallible, largely because our minds can do things artificial minds cannot do. The main one is that we have the ability to differentiate. AIs may appear to differentiate, but really it's still just the ability to identify, i.e. to equate one thing as exactly something else, and they make mistakes such as deciding the Pomeranian is the right arm and rendering an image as such.
@@EndoftheBeginning17 But like, actually though. Especially since many of those techheads think that you can just immediately throw what is essentially a working prototype into the real world, which is ridiculous. Even if it could eventually differentiate, it would take hundreds upon thousands of iterations to collect enough data for that.
@@EndoftheBeginning17 The world is filled with technology because it is better than humans at a given task. Cars are better at transporting people and goods, a phone is better at long-distance communication, the internet is better at sharing information, a computer is faster and better at calculations; hell, a hammer is faster and better at securing two pieces of wood together. That is the whole point of technology: to do something better than a human can do it. Hell, in the AI art space, artists are mad because a model can be trained on a few of their images and reproduce the style they use in seconds rather than days. Technology is only going to get better at this stuff. Neural networks are in their infancy, and as more powerful hardware is made for them, they will be able to do more things. Each one will be specialized to only do one or two things, but they are going to be very good at those one or two things.
The way I see it, Sufficiently advanced AI can do anything a human can and more. The fact that humans exist and can reason, differentiate, etc proves that these abilities are possible. However, there is no reason to assume these abilities are limited to a human, or even biological, brain. I think what we saw here was a case of a human not knowing what they are doing, but that shouldn’t be extended to the technology itself.
@@EndoftheBeginning17 The other thing is that whatever it tries to replace, it lacks a certain human element or can't understand it. You could sum it up as the classic 'lacks the human soul' stuff, but it's the weird logic people have. It could see use as a reference; a lawyer could go 'look up this law for this state' and it'd be fine, IMO. Have it take the lawyer's place, however, and any lawyer worth their salt could easily talk circles around it and it wouldn't be able to keep up. Same with art and the image generators people are trying to tout, which are on the receiving end of lawsuits atm. It doesn't understand the logic of the 'how' and 'why' about the appeal of the image; it just does it. It doesn't understand the 'how' and 'why' of art styles; it just does it. It doesn't understand the 'how' and 'why' the two come together; it just does it. In this example, my icon: I found an artist whose style I liked, we worked out the details, some money later, and here I am with this. If I tried to use a generator for a similar icon, I'd never get anything I'd like. So yeah, as much as people are trying to push it, AI is not the future, but it is a danger to people because of all the mishandling.
Tbh a lower but still impressive test for AI would be whether it would be able to pass the bar exam. Recently a Wharton professor used ChatGPT to write answers for a test and concluded that it would get a passing grade on an Operations Management course.
@@TheDaninaga You're not understanding what I'm saying. Studying for/passing the bar and practicing law in a specialized field are two completely different things. Talking from personal experience.
@@HarmonyGaming01 I am sorry, my friend, but you are the one misinterpreting; I was talking to the original post, not you... and by the way, it is childish to compare what a technology that has existed for less than a year can do now to what it could do perhaps by the end of 2023. I find it the utmost narcissism to think that a task is so specialized that an AI could not replicate it. We should look ahead and not get stuck in the current status quo. It is more important to understand what the necessary changes would be in this highly likely scenario than to just mock it and pretend it is not happening.
@@HarmonyGaming01 Same with any test, really. There's a reason you have to pass a real world driving test before you get your driver's license. Knowing the laws on paper and actually driving the car are way different experiences.
Having worked in a Comcast call center many years ago, “exaggerated the Internet outages, similar to how a customer would” is much too accurate. Not every customer with a complaint, of course, but that statement would describe a not-insignificant number of calls.
As a Comcast customer... I hate how often my internet goes down. On the odd occasion that I can join a video meeting for work and get all the way through without the internet going out and having to switch to my mobile hotspot, it's a miracle.
So I'll be honest, I'm excited at the idea of a chatbot negotiating my parking tickets and bills. These are low-level tasks that require me to take hours out of my day to get things right. It isn't complicated work, and it would be useful.
Ah, but real-world tests with AI bots, particularly ChatGPT, indicate that you'd have to do all the work yourself anyways. They can collect data and suggest ideas, but beyond that it's nonsensical. These low-level tasks currently can't be done by AI; it's not smart enough to do it right.
I agree. And I actually think that law should be common knowledge, and we should know exactly in what ways we could be saving ourselves from whatever we are being charged with, or requested or required to do, in any given circumstance. Because we don't, and it's extremely costly.
@@EndoftheBeginning17 If it's only relying on large language models, then yeah, the results will sometimes be nonsensical. Connect it to other knowledge-based services and executive logic, though, and it won't be long before this functions competently. I mostly develop with it for assistance in mathematics, and it's relatively easy to get it to verify its knowledge-based or calculated responses against the output of another program (a rough sketch of that routing idea is below). I'm sure the folks automating the practice of law are doing something similar. Maybe not these guys, but someone will.
Yeah the premise of the idea is really good, and has a lot of potential. The execution of the idea may be 10 years ahead of its time, but hey someone has to make the first attempt.
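On the "executive logic" point a couple of comments up, here's a minimal sketch of one flavor of it, assuming `ask_model` is a hypothetical stand-in for whatever model API you use: anything a dumb-but-reliable tool can answer deterministically gets routed to that tool, and only the rest goes to the language model.

```python
import ast
import operator

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-model call.
    raise NotImplementedError

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str):
    """Deterministically evaluate plain arithmetic like '12 * (3 + 4)'."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("not plain arithmetic")
    return ev(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    # Route to the trustworthy calculator when possible; fall back to the model otherwise.
    try:
        return str(calc(question))
    except (ValueError, SyntaxError):
        return ask_model(question)
```

The same shape works for checking instead of routing: let the model answer, recompute the checkable part with the deterministic tool, and flag any disagreement for a human.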
“You can contest parking tickets in front of the Supreme Court, right?” I mean, you can if you appeal enough. It’s just that you’d be spending so much in legal fees that your family would probably stage an intervention.
I mean, unless you were hauled into a police station, tortured into confessing, denied a public defender, and sentenced to death by a state supreme court, there's no way the Supreme Court will hear your parking litigation.
The fact that the amount of money you can afford determines the quality of the lawyer you can acquire is one of the best arguments in favor of an AI lawyer that I can think of. People shouldn't go to jail or lose legal cases because they are poor.
@@artem4ik281 you realize you can do nothing wrong and then still end up in court, right? one of my good friends did prison time for a crime that someone else committed. Since 1973, at least 190 people who had been wrongly convicted and sentenced to death in the U.S. have been exonerated. Plenty of innocent people never see justice though and end up serving their sentence.
Something like this could be very useful. Not for giving legal advice, but it could give you enough surface information to start asking your real lawyer the right questions. If you're really innocent you have to find a way to put that in words, and even lawyers can get tunnel vision.
Yeah suuure, by depending on recklessly made bots like ChatGPT, which can lie to you, is BIASED, and hasn't been updated since 2021. But sure, you want to depend on an even worse tunnel vision.
Idk, is that really how it's supposed to work? Every single piece of advice I've ever heard about interacting with lawyers is not to act like you know how to defend yourself better than the lawyer whose job it is. That almost goes exactly against legal eagle's entire point in this video. I'm sure it has some utility; hell, at least maybe it'd help the nut jobs who defend themselves lol
You don't know better, but it's better to be proactive in your case and trial. If you're on the offensive it can be better to rely solely on your lawyers, but if you're defending yourself your case will reflect your participation. Cookie-cutter defenses get cookie-cutter judgements, and you don't always want that. Your lawyer will tell you what's BS and what does and doesn't apply to you. That doesn't mean railroad your lawyer and not let them carry you through the process; I just don't believe you should be a passenger to your own case is all.
I'd like to see this experimented with. Maybe a series of mock trials could be set up using real lawyers and judges? In each trial the lawyer arguing the case would have an earpiece. Some of the trials would be argued by the human lawyer, whereas some would be the lawyer speaking for the chat bot. The court wouldn't be allowed to know which cases were being argued by the human and which were argued by the AI.
"Hey ChatGPT, where is it legal to wear Airpods in court?" "I am a large language model and cannot give legal advice. My knowledge is limited to events before September 2021. Consult with a law firm or conduct your own legal research. Airpods were legalized in Sisterbonk, Nebraska on June 13, 1841. Additionally, there is no law in the fourth circuit of Detroit banning their use."
Although I feel AI is a long way away from replacing lawyers, it could be a good starting point. I am in the middle of an insurance claim (water line burst) for my house, and my insurance company has been wildly unethical. They really wanted me to take the money over the rebuild; after threatening to sue, they finally backed down. Things like the fact that they can't prevent you from bringing your house up to code, the damages you are entitled to after noticing them, or what to do if you disagree with the estimate are important to know. You could save a fortune and save a lot of time. Not everyone can afford a lawyer for every circumstance.
@@daedalus6433 I mean, in Hungary, in a civil case my family had, we had to pay upfront for the lawyer's representation, but the losing party had to repay the whole price of the lawyer (and the court fees too)... Yeah, that sounds a bit stupid, but the lawyer had to get paid for representing and defending the case, etc.
I wouldn't worry ... judging by the art ones ... we are a long, long, long way off anyone successfully making an AI this advanced, so like you say ... it's all a bit sus
That's the thing with A.I. It can replicate the writing style of anyone, and if it's trained on enormous amounts of high-quality legal documents and lawyer's submissions there's no reason to think it couldn't spit out a well-written and correct legal document. You'd need a real lawyer to confirm its correctness and to give the bot a prompt to get the desired output.
I saw a thread recently claiming an AI bot had passed a bar exam somewhere. What the thread didn't want to mention was that it was basically at the bottom of the class, and its answers were the equivalent of filling in "C" for every question of a Scantron. Which really speaks more to how poorly crafted that exam was.
I am an IT Professional with a Ph.D. in Public Policy, and I agree with you on AI Robot Lawyers. I place this in a model I call the electronic mediation of the social process of work. Electronic mediation of the social process of work has many many problems.
You know it would've been easier for him to get out of paying for parking tickets if he just built a robot/bot to drive his car for him since he's clearly incapable of doing it himself
@@alexatedw Yeah, but if you use an AI you aren't representing yourself; you're having an unqualified third party represent you. If employed, that third party is effectively committing fraud, which may have legal repercussions for the writer of the program.
@@alexatedw It's not, because a reference document does not provide arguments for real-time situations; it can only provide a methodology for general circumstances. A reference document does not itself prescribe action, it only describes methods.
It would be interesting to see what happens if you make ChatGPT take a bar exam. I have no idea what is involved in that, so I dunno if that'd be feasible, but hey, could get some fun content out of it.
I assume past bar exams have had their answers posted online, so ChatGPT would be able to pass the exam, not bc it knows how to answer, but bc it knows what the desired answer is.
Considering the amount of Zoom Court I watch...I can attest, we still need lawyers. We also need: sensible Probation Officers, more social workers, more healthcare specialists and assistants and proper language interpreters. AI can be useful, but they're better served helping people keep up to date on their taxes, insurance and driving permits. Could we not have a guided prompting system that makes renewing the driving licence easy? Instead of going to the DMV??? That would be nice.
I imagine the Lawyer AI will scan the Internet &, based on Lawyer Devin's commentary from previous videos, choose to emulate Lionel Hutz, esq. Also, as someone who has been working in a municipal court for less than a year so far, I can understand the claim of "tickets-for-funding" with the amount of "Not Guilty" pleas I've seen paired with, "I've never been in the location & the description of car isn't mine." *Thanks for the Content* !
You are way behind on what the capabilities of this technology are. We are all laughing at this guy and pointing out the limitations of his tech, but his main mistake is jumping the gun. AI can't do what this kid claims yet, but it will sooner or later, and the legal industry and every other knowledge-based industry will have to come to terms with that.
@@000EC AI can't and will never handle cases where "it depends." Which as LegalEagle has pointed out many times, is literally the entire legal profession
@@000EC Likely very much on the later side. To end up with chatbots that are predisposed towards actually solving your legal problems rather than just taking the easy route and convincing you, a non lawyer, that they're doing a good job, would likely take a lot of rethinking in how they design/train these things.
Fun fact: lawyers don't charge hundreds of dollars an hour to copy and paste a few documents. They charge hundreds of dollars an hour to know *which documents* need copying and pasting. The actual copying and pasting may or may not be done by a comparatively underpaid assistant (who may or may not accidentally send the opposing counsel an entire phone image instead of two emails and one text message cherry-picked from said image). This is reminiscent of how doctors charge lots of money *not* to tell you to drink more water and less alcohol, and possibly go to bed earlier, but to know that *you have a hangover* and are possibly sleep deprived. And yes, I am specifically thinking of the guy who came up with the "let's have a bot argue in front of the supreme court" idea. It will happen, eventually, that AI will come up with entire legal strategies which the lawyers using them will follow, but as long as there is a human judge there will be human lawyers.
In the videos where he reviews fictional courtroom scenes, there's often flaws in where and when the characters move around the courtroom that boil down to "If you do that in a real courtroom, the baliffs will tackle you."
⚖ You should check out Kathryn Tewson's twitter for even more detail about DoNotPay's shenanigans.
🕵♂ Get NordVPN! legaleagle.link/nordvpn
what does this have to do with trump? get back to the important stuff
Flawless segue as always.
Ok.
.
I don't think AI lawyers would work. The law is a very human, litigated subject, worked out in government between the three branches and the public, and in each nation/territory.
Maybe... AI can write better contracts, keep catalogs/records, and help out in pointing at laws in those old books... but never be a president, judge, lawyer, nor legislature. Not until AI can achieve the free will humans were designed with.
Interpretation of the law is the only reason why AI won't work. If anything, AI is great at "laws as written" rather than "laws as interpreted", such as proper contracts with no vague or loose terms in them, as seen with some major corporations.
To paraphrase a friend of mine: "I'm excited to live in a world with robot lawyers because I love easy wins on procedural grounds."
booooooo
But if all the lawyers are robots then how can they mess up procedure?
Well, at least the American legal system would actually be fair and just for a change.
He won't be as excited in a future where AI has taken humanity's jobs.
@@northstar6920 impossible
"All lawyers are soulless demons..."
"We should replace them all with soulless robots!"
❤😊 this comment has been well said 👏😎 I agree 👍💯
I like how he immediately wants to jump to the Supreme Court, instead of starting out with some testing in a mock court and slowly moving up as the technology develops.
Histrionics and exaggerations are common in the Cluster B mind... Devin is literally a case-study for it...
"Testing? No, let's just put someone's life on the line and have a possibly faulty AI argue whether someone should be imprisoned or not"
Well, when you think you are smarter than everyone else, why wouldn't you want to be a part of deciding the fate of not only the people or businesses in the case but everyone in the country?
Typical Silicon Valley entrepreneur.
The tech-bro mind, if one can call it that, does not go by half-measures when it is gripped by the Dunning-Kruger effect.
A legitimate version would partner with a University of Law to hold some mock trials with aspiring lawyers in front of legal professors. Once you've demonstrated competence there, you could imagine a bar and Judge being willing to tolerate its entrance into an actual court room in a very limited and protected scenario and slowly expand from there. But yeah, I totally think a guy is going to succeed when he jumps straight to the final boss.
This is pretty much exactly how you'd develop real software. Spend at least some time in a controlled environment before this code is actually relied upon to do the job.
@Marlo Gonzales you're wrong. If you want to test anything, after making the prototype, you test it first in the real world.
If you are making a new rocket design, you just make it and use it immediately with no failsafes whatsoever. In fact, it's alright to put humans in the shuttle.
I think you are very ignorant on how the real world works
@@Random_dud31 We don't test drugs on random people before they go out into the wider world. They go through countless controlled tests before being sent out for general use. The reason is that they need to see if the drug is actually effective before it gets more tests. The end result is not what matters; it's all the data on why it works and whether it can actually be effective.
Sometimes people believe they are sick, so their body reacts like they are actually sick, but in truth it's all in their head. It's called the placebo effect.
Sure, so long as the lawyers that advise legislators, and the legislators that are lawyers themselves, don't enact some form of regulatory capture even more onerous than what we have now. There is way too much money on the line for people directly involved in the lawmaking and trial process to not try and kick the AI can down the road, encase it in a 50 foot thick titanium tomb, then launch it into the sun.
@@Random_dud31 sadly i have to disagree, as smart as teck companies and their hundreds of code monkey and engineers there will ALWAYS be unforeseen real-world failure that just can't be accounted for in a lab. what comes to mind is the racist sinks that used light refection from hands to turn the water on and off.....saddly it does not cover light ranges of darker skin people opps. easy code fix but that was just not in the lab nor was it thought up.
I would like to give you props for faithfully recreating the entire "I for one welcome our new overlords" speech instead of just the one line.
For those wondering, it's from The Simpsons. It's a speech from the TV reporter Kent Brockman.
@@andrycraft69 from ants on the ISS getting out of their nest
@@sleazymeezy careful! they're rippled
He even did the camera cut in the right place. 🤣
I never tire of his on-point old school Simpsons references, and the fact that he followed up with a Futurama one later in the video ("Your Plan is Bad and You Should Feel Bad") only burnishes his cred as a Groening superfan.
This is so much sillier than I could've imagined. There's even an "I was poor once" kid with a wealthy dad! Classic!
It’s absolutely ridiculous
But why do people fall for this?!!! I don't get it
@@annnee6818 because they’re culturally programmed to, “plucky ubermensch who’s just superior to lesser humans goes from rags to riches” is a fairly widespread trope
@@annnee6818 because people want to believe we live in a meritocratic utopia.
The title itself is blatant clickbait: he doesn't actually talk about the topic, just one specific example of it, while drawing some very far-reaching conclusions from a single anecdote.
"The bailiff WILL tackle you" card is my favorite guest star in LegalEagle vids.
"The Bailiff will Tackle YOU!!!!!"
"The tackle will BAILIFF you!"
"You, Bailiff, will tackle the."
There's also a somewhat easy thing for them to offer to consumers, which is potentially combing through all of the cases argued by all of the lawyers licensed in a particular jurisdiction (including cases argued in other jurisdictions that they have practiced in) and finding lawyers with relevant experience to an individual's case. Even narrowing it down from say 200 divorce lawyers listed on a state bar's website in your area to 10 or 20 that have represented clients in somewhat similar situations to you would be enormously helpful.
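A minimal sketch of the kind of matching that comment describes, assuming hypothetical lawyer records with free-text case summaries; the keyword-overlap scoring here is just an illustration, not any real service's method:

```python
from collections import Counter

def tokenize(text):
    # Lowercase and keep alphabetic words only (crude but dependency-free).
    return [w for w in text.lower().split() if w.isalpha()]

def overlap_score(case_description, past_summaries):
    # Count how many keywords from the client's description appear in a
    # lawyer's past case summaries.
    case_words = set(tokenize(case_description))
    past_words = Counter(w for s in past_summaries for w in tokenize(s))
    return sum(past_words[w] for w in case_words)

def shortlist(lawyers, case_description, top_n=10):
    # lawyers: list of dicts like {"name": ..., "summaries": [...]} (hypothetical schema)
    ranked = sorted(
        lawyers,
        key=lambda l: overlap_score(case_description, l["summaries"]),
        reverse=True,
    )
    return ranked[:top_n]

# Hypothetical usage: narrow a long list of divorce lawyers to a reviewable shortlist.
lawyers = [
    {"name": "A. Smith", "summaries": ["contested custody with relocation dispute"]},
    {"name": "B. Jones", "summaries": ["uncontested divorce, no children"]},
]
print([l["name"] for l in shortlist(lawyers, "custody dispute after relocation", top_n=1)])
```

A real system would presumably use proper text similarity over full case records rather than raw keyword counts, but the shape of the feature is the same: filter and rank, then let the person decide.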
so like, uber food.... for lawyer
Yes, we can let the richest, most powerful lawyers harness artificial intelligence to squeeze out all competition and prevent anybody else from gaining any experience. The market will be sewn up by a tiny minority who will have full control!
JustEat, nah JustSue
@@WolfgangDoW SueDash
Honestly you're probably still better off asking for a referral from a human.
Ahhgh! My company installed a program to automatically edit and redact the complaints I was editing. This was supposed to free up my time for other projects. Instead, it now takes me longer to edit the complaints because the program can't grasp the distinction between the complainant and the company the complaint is about. It also takes out apostrophes, spacing between words, and the words "bill", "may", "earnest", and "grant". Drives me crazy. I wish they would drop the whole thing and just let me get on with what I was doing before.
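To illustrate the failure mode described above (a guess at the mechanism, not the actual program): a redactor that matches a party-name list case-insensitively will also strip ordinary words that double as names, which is one way "bill", "may", "earnest", and "grant" could vanish:

```python
import re

# Hypothetical name list for the parties involved in a complaint.
PARTY_NAMES = ["Bill", "May", "Earnest", "Grant"]

def naive_redact(text):
    # Case-insensitive whole-word removal: the same rule that redacts a
    # person named "Grant" also eats the verb "grant" and the month "May".
    pattern = r"\b(" + "|".join(PARTY_NAMES) + r")\b"
    return re.sub(pattern, "[REDACTED]", text, flags=re.IGNORECASE)

print(naive_redact("The company may grant a credit on the customer's bill."))
# -> "The company [REDACTED] [REDACTED] a credit on the customer's [REDACTED]."
```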
Ah yes, classic upper management
What you have there is obviously a failure to communicate.
@@bryanjackson8917 what in the goddamn skill issue
Tessa T - This is a long shot, but have you tried to talk to the boss(es) about this?
Who programmed that thing? A high school student fresh out of code camp? Let me guess, it's written in javascript?
Okay but you can’t lie. It’d be pretty damn funny to listen to an 800 year old judge talking to a machine lawyer that is basically just one of those answering machines that always asks you to repeat your question or answer.
"You just got pranked!"
To be honest I know a few Federal COA Judges who would probably be willing to do an En Banc panel exercise just to see the concept. e.g. They pick a historic case to re-visit, limit the case to the filings in the lower court and with the assurance that the bot could not be trained on the previous appellate opinions or records, and see how this would play out. At least when I was in the First Circuit many of the judges had a good curiosity around technology and had tried previous matters like the Microsoft Anti-Trust Case, the Sun Microsystems Java/Google Case, etc.
The Scopes “monkey trial.”
I would be interested in seeing an AI-generated amicus brief in a high-profile case.
It would be a perfect scenario to test this.... I mean it's not that hard to hire actors to act out a case.
alternatively, if it turns out the bot isn't as good as it advertises, it could be a cool way to train law students at later stages of their education. give them a doc/argument/motion drafted by the AI and have them find issues with it and/or suggest improvements, better case law etc.
could even make it a feedback loop, where the students' suggestions are fed back into the AI to improve its skill
When all these comments are smarter than the people who made the AI
This AI lawyer scheme sounds exactly like the kind of thing a person too stupid to park a car would come up with.
I was looking for this comment 😂 probably made this bc daddy refused to keep paying his parking tickets and he didn't wanna use his allowance on anything other than weed
I'm imagining an AI lawyer arguing the chewbacca defense at the supreme court.
"That does NOT make sense!"
"Look at me. I'm a robot lawyer, in the Supreme Court, defending a major record company, and I'm talking about Chewbacca. Does that make sense? Ladies and Gentlemen, I am not making any sense. Did I mention I'm a robot, in court, not making sense..." *robot head explodes*
"0101000100011, your honor."
"That's highly unusual, but I'll allow it, human."
With the current majority, all the e-lawyer has to do is argue in defense of [insert conservative argument here]
It worked with the chewing gum defence🙃
With both Browder and SBF, I get this impression of someone who has never really faced any actual hardship in life, someone so protected from consequence that the rules of society were never really anything other than a game to exploit, and someone for whom actual human behavior is this weird abstraction, seemingly trivial only because they have a very surface-level experience of it.
That assumption that something you don't understand must be trivial seems to be the running theme of these tech bros.
That last sentence is SV in a nutshell honestly
Extremely well-put!
Dan Olsen of Folding Ideas put it perfectly:
(I'm paraphrasing) "They believe that since they can understand one very complex thing (programming with cryptography), all other problems must be less complex in nature."
Nailed it. People who only scratch the surface of AI never get into the part that involves hard math and physical hardware. When you get there, you realize the limits aren't the programmers' abilities; the physical parts required to run these delusional programs are still several years away. If this technology were doable, Google would already have made one to sell, and to fix the copyright hell they have on YouTube, just saying.
It's the Dunning-Kruger effect running wild, basically.
Devin's editor woke up this morning and chose violence.
And I am here for it.
this channel has become so based & i am here for it
legal eagle leftism arc begins
"I am not particularly worried about AI replacing lawyers anytime soon"
MUST BE NICE appears in huge letters on the screen. On top of the Clone High JFK edit, this video had top notch editing, for sure.
@@jord.an6123 screw Dark Brandon, we need a Dark Devin arc
@@bendystrawz2832 JFK made me laugh so hard; incredibly unexpected. "Fowah suppah I-er-ah wanna pARTY PLATTAHH!"
I was wondering if this was a new development, or if I just hadn't noticed before. I usually just listen to the videos but there is SO MUCH good visual content in this one that's not even acknowledged in the audio.
Coming from the AI side of things: what he's proposing the robot do with the loophole thing is just... not where the technology is at right now. AI is very, VERY good at recognizing patterns in things it has already seen and extrapolating from that.
There's a thought experiment called the Chinese Room that's often used to describe how AI perceives things (not completely accurate by any means, but it works as an analogy). Imagine you don't know Chinese and you're locked in a room and given a translation guide with Chinese characters. Every so often a paper gets slid under the door and you have to decode it and send back a response using that guide. Eventually you get very good at it, and even when given unfamiliar characters you can make a good guess at what's being asked by picking out the familiar characters and how they're used. It looks to people on the outside like you speak perfect Chinese, but in reality, if anyone were to speak to you in Chinese, you would not understand a single word.
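A toy sketch of the room, for the curious: a lookup table that returns plausible replies with no understanding behind them (the rule book here is invented for illustration):

```python
# A tiny "Chinese Room": the responder only pattern-matches against a rule
# book; nothing in here understands what the messages mean.
RULE_BOOK = {
    "how are you": "doing fine, thanks",
    "what is the weather": "looks like rain later",
}

def room_reply(message):
    # Pick the rule whose words overlap most with the message, then answer
    # from the book. To an outside observer this can look like comprehension.
    words = set(message.lower().split())
    best = max(RULE_BOOK, key=lambda k: len(words & set(k.split())))
    return RULE_BOOK[best]

print(room_reply("hey, how are you today?"))  # -> "doing fine, thanks"
```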
The AI does not actually understand the law or its nuances: it is looking at patterns from what it was trained on. AI is only as good as its training set, so if you have a fairly typical legal case, there is a good chance it could give a decent legal response. HOWEVER, if an AI does not pick up on specific nuances in the case that completely change the outcome (which, let's be honest, often happens), then that legal response isn't relevant anymore, and the more concerning part is that the consumer would likely not realize there was a mistake. It's like using ChatGPT to write answers on homework: not only would you not know if it made a mistake, but ChatGPT and a lot of natural-language AI have a certain syntax some people can pick up on and realize it was written by an AI, which on homework is straight-up plagiarism and in a court of law would prejudice the court against you to some degree.
The smart thing to do here would be to make an AI to assist lawyers, not try to do their jobs. Lawyers are human and miss things that an AI could pick up on. Public defenders who don't have much time per client could do their jobs much better if there were an AI to review the case and suggest certain laws, known nuances, or legal precedent, so the lawyer could look those specific points up and see if they apply, instead of potentially not being able to offer a proper legal defense because they have less than an hour to prepare per client.
yup. I gave chatgpt a request to give me an example of a bird from each order, and it proceeded to give me 24 birds from largely the same order, and with repeats. further instructions to correct it resulted in similar answers. I would NOT trust it right now with giving completely factual, much less nuanced, responses to very difficult lines of inquiry.
The Chinese Room argument is fundamentally flawed, because it uses circular logic. It starts from the assumption that a human mind can't be replicated by a set of rules, and from there proves that a human mind can't be replicated by a set of rules.
The actual answer is that the room itself speaks perfect Chinese, and the human inside is just a small part of the machine.
Again, you start from the assumption that the human mind is basically magic. If we instead assume that wizards don't exist, then the human mind is a machine, something like the Chinese Room. There's no other option. And since we aren't living in the Middle Ages, we also have a pretty good idea how it actually works. Both hardware and software.
As for understanding, what does it mean to understand something? Again, without involving magic. How do we test understanding in practice? In science we make predictions and then run experiments to test them. If our predictions match experimental result then our understanding of the subject is likely good. Tests in school are the same, except experiments are replaced by the teacher's knowledge. General problem solving is also the same, we think of a solution and try it out, it's essentially prediction too.
And the way we make predictions is essentially just pattern matching. Science is an excellent example of this, because it's a very formalized process. In science, predictions aren't based on gut feelings but on abstract models and equations. We take a large amount of data, find patterns in it, then combine patterns into models. A model is an extremely simplified simulation of reality that still closely matches its behavior. For example, when we predict where an artillery shell will impact, the only properties of it we need are its velocity, mass, drag coefficient, and cross section. We don't need the exact quantum state of all of the subatomic particles it's made of.
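A minimal sketch of that kind of simplified model, with made-up numbers: predicting a shell's impact distance from just speed, mass, drag coefficient, and cross section, ignoring wind, spin, and everything else:

```python
import math

def impact_range(speed, angle_deg, mass, drag_coeff, cross_section,
                 air_density=1.225, g=9.81, dt=0.01):
    # Simple Euler integration of a point mass with quadratic air drag.
    angle = math.radians(angle_deg)
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    x, y = 0.0, 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        drag = 0.5 * air_density * drag_coeff * cross_section * v * v
        ax = -drag * (vx / v) / mass
        ay = -g - drag * (vy / v) / mass
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

# Illustrative, made-up shell: 300 m/s at 45 degrees, 40 kg, Cd 0.3, 0.02 m^2.
print(f"predicted range: {impact_range(300, 45, 40, 0.3, 0.02):.0f} m")
```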
Based on this, AI is entirely capable of understanding. It has to understand things to some degree, because it makes mostly correct predictions.
A good example of this is how ChatGPT does math. It's a language model; it never explicitly learned math and doesn't have access to a calculator, yet it knows math. It saw math in the vast amount of text it learned from, but those were just specific examples. Even just to do simple calculations it has to understand how basic arithmetic operations work. It of course read the rules too, but that doesn't make it easier; it still needs to build a mental model, plus it needs to understand language first. We know for sure that it uses the model it built on its own because it makes mistakes, and often human-like mistakes. For example, it performs worse with large numbers, which would be odd for a calculator but very natural for a human.
And actually, AI is better at picking up on nuances, because it can process a far larger amount of data than humans and pay attention to all details equally. It's actually a limitation for higher intelligence, but a superpower when you need to obsess over seemingly irrelevant details. It's a bit like autism.
Even if the AI was perfect, and could give a better answer than a human in 100% of cases... We still wouldn't use AI in places of authority/trust like that.
Psychologically humans need to hear it from another human, because we need to know that empathy is involved. Humans don't trust an AI as much as a doctor even when they say the same thing.
@@andrasbiro3007 the reason it fails with big numbers is simple. It sees 2+2=4 in its training data, but it doesn't fully understand why 2+2=4. If it understood the simple rules of addition and subtraction, then how big the numbers are wouldn't matter. Humans make errors because we are imperfect and complex biologically based machines. An AI doesn't have the same types of limitations that people do. AI does have limitations (for instance, it would be hard for a text-based AI to have a concept of vision or touch, and its perception of those is only as good as its training source; it has to be taught those things), but just because it's making errors similarly to how a human makes them doesn't mean it's coming to the conclusion the same way a human is.
@@Kimmie6772 yeah, and for simple addition, humans who remember how will break out a pencil and paper for large numbers, which lets you get it pretty much perfect, though it takes time. In this case, we can concretely define understanding: is the model replicating the addition problems it has seen, or is it actually doing arithmetic?
When you mentioned Browder’s dad, you forgot to mention that his great-grandfather was Earl Browder, the leader of the US Communist Party. I’m sure he’d be thrilled to see his descendant become a tech startup bro 😂
Probably only a little more upset than seeing his grandson become a super wealthy hedge fund manager.
the apple does fall farther from the tree
You were the chosen one! You’ve become what you swore to destroy!
Letting unfeeling AI run our legal system is a terrible idea. Even now with it being run by beings that are (usually) capable of empathy it's still a mess. This is one of the many areas we need to keep AI away from.
To be fair, a plain logic circuit fed all the information on a justice system would probably give better results in everything that goes up to a trial (think about the raw amount of fake and troll IP cases or even non starters that plague effectively the entire globe).
The real problem there is that minor and pointless things that should get thrown out due to context would then get stretched out, and big corporations couldn't simply buy off the entire county where a trial is held to, e.g., extend their monopoly over a market for another 20 years.
You are ignoring that the people capable of empathy are also capable of corruption, which is the reason why the legal system is a mess.
@@jaguarj1942 so you're gonna completely remove empathy then.
As someone who believes in the potential of AI, I'm baffled that this Tech Bro didn't simply set up a series of moot courts where his company could test his program, identify areas of improvement, and refine it until he has a solid product
then test it at a university with law professors
Because that would be smart & reasonable
But remember: this isn't a college educated computer scientist making strides in AI technology, this is a tech bro. Their game is looking smart, not being smart.
@@aformofmatter8913 this
Just watch: this company will get sued into bankruptcy, but the conglomerate that buys the tech off its corpse will do exactly that and make something that renders a number of law specialties obsolete. If I were to guess... 6 years.
They could also do the AlphaGo approach and have it argue both sides of the case to see what happens then.
2:35 just taking a moment to appreciate them getting out of the car and trying to push it sideways
Yes, let's shove this 8 ton car to the side rather than just turning the steering wheel and hitting the gas pedal.
@@Aredel Don't be silly. Doing that is just over complicating the problem and would waste time.
@@Aredel 8 ton car? Err... you sure bro?
but they have the money to buy an actual car
Lmao these guys understand how wheels work 😂
He could easily recruit volunteer judges and attorneys to carry out a mock trial where one side is being argued by an AI, and only the audience is aware of which side it is. The fact that he's instead relying on sneaking the technology into a real courtroom is alarming.
That would've been actually fun to watch, too bad that's not what they did.
Your videos on copyright issues made me nervous about some materials I had planned for a book. Talked to an attorney, who told me it was indeed copyright infringement (very, very unlikely to be sued, but not impossible) and let me know some stuff I CAN do that I didn’t think I could that was EVEN MORE HELPFUL AND RELEVANT. He appreciated that someone came to him *before* there was a problem vs. *after*.
Thanks, Legal Eagle!
That's awesome! Good luck on the book 😊
Of course he appreciates it. If you don't hire him, you don't give him money you were "very, very unlikely" to need to spend.
@@bubba200874426 Oh buddy. You do realize that lawyers make way more money cleaning up messes than preventing them, right? An hour or two preventing a lawsuit is way less lucrative than the many hours answering that lawsuit would take.
@@ealusaid what part of very, very unlikely did you miss?
@@bubba200874426 actually, he waived the fee. Offered to pay him for his time, but he said if we wanted to we could send him a copy of the book when it was printed. Nice guy!
I'm currently a pro se litigant (AKA a fool). I wouldn't trust AI to manage my case, but it'd be nice if it checked rules of procedure and basically drafted all the forms I need. I think this kind of thing can be incredibly helpful if done properly, but not for use as a sole resource.
This is where I see the biggest potential benefit. Our legal system was INTENDED so that pro se would be a viable option, that's just not how things ended up in practice.
@@dangerszewski9816 , I mean, everything basically started out as something people could do for themselves, and, yet, you are still better off getting a professional mechanic, IT guy, plumber, roofer, etc. for the job. Someone who does it for a living will always be better than someone who just does it when they need to.
No one can be good at everything.
"[I]t'd be nice if it... but not for use as a sole resource." Oh, my sweet summer child. Whistling in the dark are we?
@@SgtSupaman Even if you CAN be good at everything you usually don't get the time and resources to get good at everything, either.
This already exists. Law firms and especially government institutions with a lot of legal paperwork tend to commission systems to search and organize legal information among mountains and mountains of paperwork. AI is a really good tool for research, although not so much for making sense out of it.
After all of the craziness of the last few years it turns out I’ve ended up rooting for lawyers.
Yes, I feel your pain, I feel dirty too
Lawyers are a chaotic neutral, you can't go against them cuz you'll need them one day to avoid a wrongful imprisonment, or if you have been wronged and don't know the basics of prosecuting
"she has a patience that I do not have" - a lawyer
- the entire profession of paralegals, explained
I had 250 custom categories in a tree structure. I asked ChatGPT to analyse the names of the categories and tell me which categories are similar to each other. It did a remarkable job for a very crude instruction set that I cooked up in 5 minutes. However, the output still had to be reviewed and some odd mappings removed... so I can't imagine any lawyer-AI thing would produce perfect output.
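For comparison, even a crude dependency-free heuristic can do a rough version of that pairing, and the same caveat applies: the output is a list of suggestions to review by hand, not answers (the category names below are invented):

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical category names; in the task described above there were ~250.
categories = ["Office Supplies", "Office Equipment", "Travel", "Travel Expenses", "Payroll"]

def similar_pairs(names, threshold=0.5):
    # Flag name pairs whose string similarity exceeds the threshold.
    # Everything returned still needs a human pass to weed out odd matches.
    pairs = []
    for a, b in combinations(names, 2):
        score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if score >= threshold:
            pairs.append((a, b, round(score, 2)))
    return sorted(pairs, key=lambda p: p[2], reverse=True)

for a, b, score in similar_pairs(categories):
    print(f"{a!r} ~ {b!r} ({score})")
```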
This whole lawyer robot could get a lot more agreeable reactions if it marketed itself as a lawyer helper - lawyers need a lot of work hours to go through documents, but they also have the cash to pay for such AI help... But good luck, robot lawyer, no lawyer is gonna believe in this product now.
Paralegals and legal assistants already exist my dude
My thought as well, this seems a lot more useful as a legal tool. I'm studying medical coding (claim codes for billing insurance, not related to computer coding) and Computer Assisted Coding has been in use for a while now. It can scan relevant documents, identify key diagnoses and procedures, and suggest potential codes. It frees up a lot of the coder's time since they don't have to read as much, but they still have the authority on which codes are assigned. It's pretty cool.
Acting as if Paralegals and Legal Assistants don’t exist 💀. Really threatening a whole industry of paraprofessionals.
Thanks for the vid! I am a teenager wanting to go into law, and was a little concerned when hearing robots were about to pass the bar and enter the courtrooms. I figured there was something off, and this video helped tell me what.
They've passed some law exams at most - but basically only because the law exams count things like multiple choice (where the AI got most questions right, though fewer than most students), short free-form answers to questions (the AI sometimes did better than the average student here), and issue spotting (this is your fuckin' meat and potatoes as an attorney, and it bombed those parts entirely).
Thanks for the further information, it is most helpful.
There might be a life lesson hidden here, I'd wager, about source criticism and general skepticism. Remember that (social) media reporting is often over-sensationalised, mindless parroting of over-simplified subjects. If your initial reaction is one of concern, that's good; it means your internal BS filter is working to raise the alarm. Grab that thought and remind yourself that there is probably much, much more intricacy to a story than the super-abridged version you got, which is thus largely useless to base an informed opinion on. For an understanding of just what I mean by that, I invite you to read up on the Dunning-Kruger effect, and cognitive bias in general - it's an eye-opener, really.
(And sorry if I sound patronising, I don't mean to; but when I was a teenager, these things were much less apparent to me 😉.)
Honestly, opposing counsel being able to manipulate a GPT 3 robot lawyer with their comments has hilarious implications. I look forward to seeing the recording of the response when the opposing counsel says “Can the defense repeat that statement in an ill advised Jamaican patois?”
Lol, I didn't even think about this, but that is a hilarious implication that could absolutely work if you get it past whatever ridiculousness test the program (hopefully) uses. Of course, any such limitation on the bot's ability to consider something outrageous might hamper its ability to actually deal with something absurd that happens during the case, so...
Yeah, it's gonna be interesting lol
Or even just "write an explanation as to how you caused the accident and should rightfully pay the plaintiff two million dollars in damages."
*CHET HANKS HAS ENTERED THE CHAT*
Man, life really has got to be easy when you're able to take dumb, dumb risks because you know daddy will be there to back you up if things do go wrong.
Fr
Yep, just another techbrat who is willing to take risks with other people's lives, whilst being completely shielded from the consequences of their own vapidity.
Welcome to the actual reality behind pretty well every "Self-made" Man, Woman or Other in the modern age. They all come pre-packaged with a silver spoon.
My baby bro died from addictions because at 14 -18 cops in neighboring towns knew to call our town's cops about a councilman's son using drugs or alcohol. To "spare your mother" our cops and father made any charges disappear. I firmly believe he needed to be held accountable and forced into rehab. Instead he destroyed the lives of his children, lost a marriage and died totally alone in his rental room, lying there decomposing for days. 65 years old. Totally alone. No one knew. Our mother died 10 months earlier at 96. She spent the years between my brother's failed marriage (24) till she passed when he was 64. Who did denial help? Absolutely no one. Trump is pulling the same 💩 and DOJ is 🙉🙈🙊. We, the nation, are in danger right now.
@@apenguininthemist855 Ooof well said!
Hey, if this guy wants to demonstrate his creation's effectiveness in a real courtroom... well, there's an easy way to arrange that. He can just keep offering what he's offering, and he can use the chatbot to represent himself!
true
SCOTUS hearings should be televised. The Supreme Court of Canada is televised, and democracy and the rule of law are not under threat in spite of the cameras.
I think they decided to not allow cameras in the court to avoid situations like OJ Simpson; it turned the whole trial into basically a reality show that everyone was following, and that in turn caused the people in the trial to act more like reality TV actors than actually making compelling arguments.
@@vurpo7080 you mean like ....congress?!
SCOTUS arguments are live streamed via audio. and they’re also not “hearings.” there’s no evidence being tendered or tricks. everything is laid out in written briefs ahead of time. then their decisions are laid out in writing. this isn’t trial court.
@@vurpo7080 And yet here we are in an age that makes "Dancing Itos" seem sane.
agreed
Taking up that bounty sounds like a mistrial waiting to happen.
...immediately followed by a contempt of court charge? With the penalty maybe... twice the size of the bounty. By sheer coincidence. 😃
@@KonradTheWizzard The idea is that the courts sign off on it too... if they say no, then it is a hard no.
@@lostbutfreesoul Can't we talk about this your honor? Please? Pretty please? With a cherry on top!?
Your honor, would you prefer a cherry made of dollars or gold bars?🧐
...why is that bailiff staring at me like this??😕
He was never going to pay it anyway.
I read Bill Browder’s Red Notice and it became the inspiration for my thesis and it’s kinda surreal to see how everything is connected. Parts of the book spoke of Joshua as a child and this is not what I thought he’d grow up to be
LegalEagle videos seem to imply that the bailiffs of the US legal system spend their work days waiting -nay-, hoping that some ill-prepared lawyer approaches the bench too closely or brings a forbidden item into the court room. So that they may then gleefully tackle them onto the ground and subsequently forevermore have other people pay for their drinks at the bar in recognition of this act of unquestionable goodness.
You mean they don't??
I just want to see the bailiff tackle someone just once. It would be the best episode ever
So, like, any other US law enforcement looking for an excuse?
😂😂😂😂😂😂
If not, then what is even the point?
That homage of Kent Brockman’s speech to the ant overlords was perfect.
5:04 I love the irony of a website that’s trying to replace lawyers getting bogged down in litigation and having to hire lawyers
As I said on the last AI video, I can't see this replacing lawyers, at least not quickly and without reform, but I CAN see it removing large groups of paralegals and a lot of other people who need the work, since you can have the AI search for the precedent, summarize it, and then take it to court yourself.
People 'needing' the work will never be a reason tech should not move forward. We're going to have to reckon with the fact that a lot of jobs are a moment (on a societal scale) away from just disappearing into the technological ether, and handcuffing technology is NOT going to be the solution.
I'm not so sure that it'd be a full replacement, but it'd likely shift the responsibility of paralegals to reviewing whatever the AI dredged up. Given how AI currently has a bad habit of making up fake cases in order to perfectly fit the need, I'd be leery of taking any AI-packaged summaries to court without having someone sift through it. I'd want to see AI tackle simpler tasks with less grave consequences for errors before this step as well, and given how grossly unethical AI tech bros behave, I'm not holding my breath.
@@philgunsaules2468 I'll never understand how people can take this position and consider themselves ethically sound individuals. Technology exists to serve people, not the other way around. If it's hurting more people than it is helping, it SHOULD be held back.
@@BerserkerLuke Honestly I think that sort of stance comes from the exact same kind of no-ethics mentality that tech bros also have, where "AI will fix all the ills of humanity" and this overly naive and idealistic pie-in-the-sky kind of cyber-utopian vision of the future.
Without looking at what AI is actually doing. (It can't even tell apart sand dunes and bare skin, the people verifying the AI's work are usually hired by the office-job version of third-world sweatshops and paid peanuts for said double-checking... Microsoft's own AI has stolen a lot of code from GitHub that wasn't open-source and free to use...)
@@BerserkerLuke Because as long as there is hunger, illnesses we cannot cure due to a lack of technological advancement, a lack of clothing, and anything where quicker, more efficient production can help and free human hands for more productive work, we must keep pushing until all those needs are sated.
People in jobs filled by machines are a problem, that's true, but that's not technology's fault, that is born from bad policies and bad governments. That's what people have to fight against. People working on technology can only push from their side and hope for the best. Just like motors/engines have pushed our society to higher levels and also been used to power tools for war, the problem is not the science, it's the people.
I've been following this channel for a while now, and I just love how these videos are crazy good. I never would have thought this subject would be so interesting, but that's what I love about YouTube: from PC building to cooking and now lawyers, this is my favourite part of the interwebs.
Man, you've really been limiting the breadth of your internet experiences.
So well said!!!
Reminds me of my Wills, Estates, and Succession Planning professor. He said he loves at-home will kits because about 60% of his work is dealing with issues arising from people who used a will kit when they should have used a lawyer. The issue with any AI is going to be bad inputs in = bad outputs out. An AI might believe the BS that comes out of some clients' mouths, but a human lawyer might challenge you on it. Even if your unbelievable story is true (because unbelievable stories happen all the time), a lawyer can at least advise you that no one is going to believe you and there is no point wasting your money paying for a discovery, expert, etc. One further thing is that lawyers can help you navigate when best to file things. It is not always best to file at the earliest opportunity. Sometimes it is better to wait and see what cards the opposition has before making your move.
"Practicing law is more than just arranging magic words in the right order while trying to sound human" says the youtube lawyer, charismatically arranging his words in the right order to make a convincing point.
lawyers are a joke, they created a system in America to exclude most people from being able to get proper representation in court without getting a law degree and passing the bar.
it's a system to ensure that only the privileged middle and upper class can be lawyers, and to ensure that America is truly unequal unless you're a millionaire.
He didn't deny that arranging magic words in the right order while trying to sound human was part of the job.
@@jamesb3497 The point is that everything else required to be a lawyer is stuff AI already does on a regular basis; it's the coherent and thought-out arguments that it struggles to produce. So while yes, it's more than arranging magical words, if an AI were 100% accurate at that, it would be no less capable of practicing law than any other lawyer... it would probably even be better.
@@ryanthompson3737 There's nothing you can't train a language model to do. GPT-4 has flaws because it aims to do everything language-related, not just law and argument.
@MSPaint Koopa if it says that, it only learned it from the language of humans; humans have only their own culture to blame when they think mothers are better suited to be parents. That's a legacy of patriarchy in human culture, I might add: the idea that taking care of children is a woman's work and dad's job is going to work.
As someone who went to college for computer programming, hearing that he made the core code in just two weeks and was only working on it in his spare time, I inherently distrust it.
true, it's probably buggier than Skyrim
@@erikburzinski8248 Please, it's probably buggier than an African Termite Hill. I doubt Skyrim could compete with that for bugginess.
@@Razmoudah sounds like you have never played Bethesda games...😅
@cornelious2 Not since Morrowind released in the early 2000's.
Do you think he wrote it from scratch? lol
I kind of wish he could get this done just cause there’s no way a chatbot written in a couple weeks by some guy that doesn’t know the simplest thing about law could actually win a case
“ChatGPT will be held in contempt of court.”
While AI lawyers are a way off, I could see using an AI as a cheap, very useful assistant.
This is how it's portrayed in every piece of media ever, so it baffles me that everyone is trying to jump straight to the end boss. Even in the art world, artists are panicking that AI will replace them, while we have an overwhelming amount of precedent that job automation is used to crank up productivity per employee, instead of sending a toaster to the Supreme Court to replace a lawyer.
Rainman slim +
"... More human
than human....
AI has not yet advanced to the point of not driving a vehicle into another vehicle, or suddenly screeching to a halt on a freeway for no visible reason.
@@tarcp6224 I mean....same reason as ever. Egotism and profitmongering. This guy was a delusional greedy asshole, and for once, he got what was coming to him.
@@tarcp6224 artists are "panicking that AI will replace them" because most artists who have artist friends now have artist friends who've been laid off to make way for "AI artists"
0:45
“Wait, why are you clapping?!”
Best part of this video, by far. Loved that joke
Wait... I thought LegalEagle was an AI-generated lawyer. There's no way someone can be as intelligent and good looking as he is unless a computer went through thousands of social media profiles and took the best of each to create a talking avatar image on YouTube.
I think one of the greatest things about this channel is that it can take the pedantic and dry content of the law and make them both relevant and entertaining. Yay for the law not being mind numbing.
This reminds me of the DARPA sentry bot. It was really good at detecting humans walking, running, and crawling towards it. It was unable to realize that a cardboard box or pine tree moving towards it had a person involved. It also didn't detect the two Marines who somersaulted the entire distance to it. A human would have spotted those instantly. AI is a great enhancer and force multiplier. It, like any tool or machine, can make human labor more efficient and productive. On its own, though, it has major, major flaws.
Solid Snake is vindicated!
I think that bot could identify those things if it had more information.
@@aaronnunavabizniz199 It would need to know the ways a human would try to trick it given that a human knew the best way to trick it.
@@nielsunnerup7099 And how exactly would a human figure that out on its own? A baby cries when you so much as cover your face, and yet the assumption here is that we are all born knowing that a cardboard box or pine tree moving around is actually a human.
Lmao, sounds like the DARPA bot was a massive success. Why on earth would it expect a pine tree to be a person? That sounds like a dangerous precedent, shooting at anything that moves. This is exactly what a ton of comments on this video are getting at: AI still doesn't have general intelligence, and they've always said general intelligence is 20-30 years down the line minimum. We can't expect it anytime soon.
In my experience, something a teenager cooks up in under 2 weeks is almost certainly not going pass even the most basic of stress tests.
Really good vid. I think the humorous tone and lack of immediate concern was instantly clear. But can we just talk about some of those cut-in/clips!? Loved the Better Call Saul McGill v McGill bit and the inspirational-'Merica speech but the Bioshock and Andrew Ryan broke me *chef's kiss*
Once again, AI could be a great assistant/tool that does a lot of the heavy lifting but you don't want it making irreversible decisions unsupervised.
... so then lawyers shouldn't be practicing without supervision either, and those supervisors need supervisors, etc. etc. Let's stop acting as if the concept of AI is somehow inherently different from humans doing those same tasks. On a fundamental level, YOU are just a bunch of electrical signals being weighed against each other to produce a thought or action... nothing different from what AI already does.
@@ryanthompson3737 "as if the concept of AI is somehow inherently different to humans doing those same tasks" not sure if you heard yourself on this. But yeah, AI is inherently different to humans because WE humans are the ones creating them. AI or bots can do the automated stuff but it can only do so much.
@@ryanthompson3737 Yes, they are practically identical, no significant differences at all.
AI is a frigging glorified calculator - despite sharing some minor similarities with human brains, AI and computers have really nothing in common with how brains work.
Could people just stop this "on fundamental level..." BS on subjects that they obviously have very little understanding of? No? Ok then, that don't surprise me.
I could see a robot lawyer being better than a bad lawyer, but never a good one; the good ones carry a reputation with the courts, and that reputation can be a big help.
Another really important point is that communication with one's lawyer is protected by Attorney-Client privilege. There is no such protection when one shares information with an AI. How long will it be before the opposing party moves to discover or subpoena those communications? 🤔
Subpoena? They could literally just ask the chatbot and it will answer them, because it doesn't know anything about nuance.
That's a really good point.
Heck, when you tell it what you want to win you might run afoul of the outer alignment problem and get permabrigged or something.
@BazzfromtheBackground wow, it's actually kinda hilarious how little law people know about tech
@@KWCHope what do you mean?
If I didn't already love this man that classic Simpson intro woulda sealed it
Hail AInts
That's why he's the law talking guy..
Where I could see this being useful is with court-appointed lawyers, where the system is already overworked. I could see it being a useful assistant to a human lawyer for building a case rather than arguing the case itself in real time, providing them a faster way to form better arguments.
yes. but still humans are needed for that, aren't they
I've mentioned this to a friend, but people are both overestimating how much an AI can actually do and underestimating the limits of the tech and how much they're trying to rely on it. They're trying to make it replace jobs it can't handle, like it's the answer to all their woes. It's supposed to *help* people, because it's a *tool*.
I'm seeing this too. These tech people see it as a way of proving that technology, created by humans no less, is superior to those humans because humans make mistakes.
These same techies can't even see the colossal level of errors their chatbots and AIs generate.
As a basic tool it's fine but beyond that they're pretty fallible, largely because our minds can do stuff artificial minds cannot do.
The main one is that we have the ability to differentiate. AIs may appear to differentiate, but really it's still just the ability to identify, that is, to equate one thing as exactly something else, and they may make mistakes such as deciding the Pomeranian is the right arm and rendering an image as such.
@@EndoftheBeginning17 but like actually though. It's ridiculous that many of those techheads think you can just immediately throw what is essentially a working prototype into the real world. Even if they could eventually differentiate, it would take hundreds upon thousands of iterations to collect enough data for that.
@@EndoftheBeginning17 The world is filled with technology because it is better than humans at a given task. Cars are better at transporting people and goods, a phone is better at long distant communication, the internet is better at sharing information, a computer is faster and better at calculations, hell a hammer is faster and better at securing two pieces of wood together. That is the whole point of technology, to do something better than a human can do it. Hell, in the AI art space, artists are mad because a model can be trained on a few of their images and reproduce the style they use in seconds rather than days. Technology is only going to get better at this stuff. Neural networks are in their infancy and as more powerful hardware is made for them, they will be able to do more things. Each one will be specialized to only do one or two things, but they are going to be very good at those one or two things.
The way I see it, Sufficiently advanced AI can do anything a human can and more. The fact that humans exist and can reason, differentiate, etc proves that these abilities are possible. However, there is no reason to assume these abilities are limited to a human, or even biological, brain. I think what we saw here was a case of a human not knowing what they are doing, but that shouldn’t be extended to the technology itself.
@@EndoftheBeginning17 The other thing is that for whatever it tries to replace, it lacks a certain human element or can't understand it. You could sum it up as the classic 'lacks the human soul' stuff, but it's the weird logic people have. It could see use as a reference. Like, a lawyer could go 'look up this law for this state' and it'd be fine IMO. Having it take the lawyer's place, however, and any lawyer worth their salt could easily talk circles around it, and it wouldn't be able to keep up.
Same with art and the image generators people are trying to tout, which are on the receiving end of lawsuits atm. It doesn't understand the 'how' and 'why' of the appeal of the image. It just does it. It doesn't understand the 'how' and 'why' of art styles. It just does it. It doesn't understand the 'how' and 'why' of the two coming together. It just does it. In this example, my icon: I found an artist whose style I liked, we worked out the details, some money later, and here I am with this. If I tried to use a generator for a similar icon, I'd never get anything I'd like.
So yeah, as much as people are trying to push it, AI is not the future but it is a danger to people because of all the mishandling
Tbh a lower but still impressive test for AI would be whether it would be able to pass the bar exam.
Recently a Wharton professor used ChatGPT to write answers for a test and concluded that it would get a passing grade on an Operations Management course.
Passing the bar and actually practicing law are two different ball games.
it already did, look it up
@@TheDaninaga You're not understanding what I'm saying. Studying for/passing the bar and practicing law in a specialized field are two completely different things. Talking from personal experience.
@@HarmonyGaming01 I am sorry, my friend, but you are the one misinterpreting; I was talking to the original post, not you... And by the way, it is childish to compare a technology that has existed for less than a year to what it could do, perhaps, at the end of 2023. I find it the utmost narcissism to think that a task is so specialized that an AI could not replicate it; we should look ahead and not get stuck in the current status quo. It is more important to understand what the necessary changes would be in this highly likely scenario than to just mock it and pretend it is not happening.
@@HarmonyGaming01 Same with any test, really. There's a reason you have to pass a real world driving test before you get your driver's license. Knowing the laws on paper and actually driving the car are way different experiences.
Having worked in a Comcast call center many years ago, “exaggerated the Internet outages, similar to how a customer would” is much too accurate. Not every customer with a complaint, of course, but that statement would describe a not-insignificant number of calls.
As a Comcast customer... I hate how often my internet goes down. On the odd occasion that I can join a video meeting for work and get all the way through without the internet going out and having to switch to my mobile hotspot, it's a miracle.
So I'll be honest, I'm excited at the idea of a chatbot negotiating my parking tickets and bills. These are low-level tasks that require me to take hours out of my day to get things right. It isn't complicated work, so it would be useful.
Agreed. Especially in areas when the other end of the line has been staffed by bots (or call center employees reading off a script) for years.
Ah, but real-world tests with AI bots, particularly ChatGPT, indicate that you'd have to do all the work yourself anyway. They can collect data and suggest ideas, but beyond that it's nonsensical. These low-level tasks currently can't be done by AI; it's not smart enough to do them right.
I agree. And I actually think that law should be common knowledge, and we should know exactly in what ways we could be saving ourselves from whatever we are being charged with, or requested or required to do, in any given circumstance. Because we don't, and it's extremely costly.
@@EndoftheBeginning17 If it's only relying on large language models, then yeah, the results will sometimes be nonsensical. Connect it to other knowledge-based services and executive logic, though, and it won't be long before this functions competently. I mostly develop with it for assistance in mathematics, and it's relatively easy to get it to verify its knowledge-based or calculated responses against the output of another program. I'm sure the folks automating the practice of law are doing something similar. Maybe not these guys, but someone will.
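A minimal sketch of that cross-checking pattern, with ask_llm left as a hypothetical stand-in for whatever model is being called; the point is only that the model's numeric claim gets recomputed by ordinary code before it's trusted:

```python
def ask_llm(prompt):
    # Hypothetical stand-in for a call to some language model API.
    # Here it just returns a canned (and deliberately wrong) answer.
    return "17 * 24 = 398"

def verify_product(claim):
    # Re-parse the model's arithmetic claim and recompute it with real math.
    lhs, rhs = claim.split("=")
    a, b = (int(part.strip()) for part in lhs.split("*"))
    expected = a * b
    stated = int(rhs.strip())
    return expected == stated, expected

claim = ask_llm("What is 17 * 24? Answer as 'a * b = c'.")
ok, expected = verify_product(claim)
print(claim, "->", "verified" if ok else f"rejected (should be {expected})")
```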
Yeah the premise of the idea is really good, and has a lot of potential. The execution of the idea may be 10 years ahead of its time, but hey someone has to make the first attempt.
“You can contest parking tickets in front of the Supreme Court, right?”
I mean, you can if you appeal enough. It’s just that you’d be spending so much in legal fees that your family would probably stage an intervention.
I mean, unless you were hauled into a police station, tortured into confessing, denied a public defender, and sentenced to death by a state supreme court, there's no way the Supreme Court will hear your parking litigation.
The fact that the amount of money you can afford determines the quality of the lawyer you can acquire is one of the best arguments in favor of an AI lawyer that I can think of. People shouldn't go to jail or lose legal cases because they are poor.
They also should not commit crimes, but I guess that's a whole different story
@@artem4ik281 what if they are wrongly accused by a rich person who can afford the lawyers though.
@@artem4ik281 you realize you can do nothing wrong and then still end up in court, right? one of my good friends did prison time for a crime that someone else committed. Since 1973, at least 190 people who had been wrongly convicted and sentenced to death in the U.S. have been exonerated. Plenty of innocent people never see justice though and end up serving their sentence.
@@artem4ik281 Are you that naive to believe that everyone who is accused actually did it?
@@artem4ik281 Why even have courts if all defendants are guilty in the highest decree?
welcoming one's robot overlords is such a classic expression much deserving of a revival
Clearly Skynet. XD
Simpsons reference.
I never stopped saying it
Usage would have to die out for it to be revived, and it never died
It has been hard to stay motivated while studying law... thanks for these videos, they help quite a bit.
AI will automate you away
@@igvc1876 who won't it automate? it will automate everybody sooner or later
Something like this could be very useful. Not for giving legal advice, but it could give you enough surface information to start asking your real lawyer the right questions. If you're really innocent you have to find a way to put that in words, and even lawyers can get tunnel vision.
Yeah, suuure, by depending on recklessly made bots like ChatGPT, which can lie to you, is BIASED, and hasn't been updated since 2021.
But you want to depend on a worse tunnel vision.
Idk, is that really how it's supposed to work? Every single piece of advice I've ever heard about interacting with lawyers is not to act like you know how to defend yourself better than the lawyers whose job it is. That goes almost exactly against LegalEagle's entire point in this video. I'm sure it has some utility, hell, at least maybe it'd help the nut jobs who defend themselves lol
You don't know better, but it's better to be proactive in your case and trial.
If you're on the offensive it can be better to rely solely on your lawyers, but if you're defending yourself, your case will reflect your participation. Cookie-cutter defenses get cookie-cutter judgements, and you don't always want that.
Your lawyer will tell you what's BS and what does and doesn't apply to you. That doesn't mean railroad your lawyer and not let them carry you through the process. I just don't believe you should be a passenger in your own case, is all.
I'd like to see this experimented with. Maybe a series of mock trials could be set up using real lawyers and judges? In each trial the lawyer arguing the case would have an earpiece. Some of the trials would be argued by the human lawyer, whereas some would be the lawyer speaking for the chat bot. The court wouldn't be allowed to know which cases were being argued by the human and which were argued by the AI.
"Hey ChatGPT, where is it legal to wear Airpods in court?"
"I am a large language model and cannot give legal advice. My knowledge is limited to events before September 2021. Consult with a law firm or conduct your own legal research. Airpods were legalized in Sisterbonk, Nebraska on June 13, 1841. Additionally, there is no law in the fourth circuit of Detroit banning their use."
Do you think they backed down when their AI recommended they settle any lawsuits the state brings?
how do we know you're not an AI-generated Lawyer ???
If you can't tell, it doesn't matter.
For the same reason you do not know if my reply is a bot.
@@robertmaxey5406 Divide by 0.
@@Zomby_Woof Pretty sure this exact video covers why it does matter even if you can't tell.
How do we know he's a real Lawyer?
" basically a Karen bot" was too good... I love the videos taking about serious things resorting to memes to make it digestible...
Although I feel AI is a long way away from replacing lawyers, it could be a good starting point. I am in the middle of an insurance claim (water line burst) for my house, and my insurance company has been wildly unethical. They really wanted me to take the money over the rebuild. After I threatened to sue, they finally backed down. Things like the fact that they can't prevent you from bringing your house up to code, the damages you are entitled to after noticing them, or what to do if you disagree with the estimate are important to know. You could save a fortune and save a lot of time. Not everyone can afford a lawyer for every circumstance.
From all jobs, I think lawyers will take the longest to be automated. It's more like the end goal than the starting point for me
I think it would be a great invention especially in places where you don't have a right to a lawyer. Like Civil and traffic courts.
Honestly, why shouldn’t all of the expenses be paid by the losing party AFTER the case is settled?
@@daedalus6433 I mean, in Hungary, in a civil case my family had, we had to pay upfront for the lawyer's representation, but the losing party had to repay the whole price of the lawyer (and the court fees too)... Yeah, that sounds a bit stupid, but the lawyer had to get paid for representing and defending the case, etc.
You always have the right to a lawyer's consultation, right? I don't think even the best AI imaginable could do better than that.
Hats off to whoever made the Peter Thiel -> Ryan Industries BioShock reference. You know your gaming & RL villains well.
A man chooses.
I've been following this on Techdirt from the start😛 Good analysis by Devin. Unbelievable that these dudes get away with these fly-by-night websites.
I wouldn't worry... judging by the art ones... we are a long, long, long way off anyone successfully making an AI this advanced, so like you say... it's all a bit sus
As a lawyer, I was excited to see the judge's reaction to someone having their phone on in the courtroom with earphones in, lol.
"The bailiff will happily tackle you"
That's the thing with A.I. It can replicate the writing style of anyone, and if it's trained on enormous amounts of high-quality legal documents and lawyer's submissions there's no reason to think it couldn't spit out a well-written and correct legal document. You'd need a real lawyer to confirm its correctness and to give the bot a prompt to get the desired output.
Well-written, sure... but I don't think you can make any assumptions about correctness.
It could certainly help with burying the other side in documents.. but they could do the same to you too.. hmm
I saw a thread recently saying that an AI bot supposedly passed a bar exam somewhere. What the thread didn't want to mention was that it was basically at the bottom of the class, and its answers were the equivalent of filling in "C" for every question on a Scantron.
Which really speaks more to how poorly crafted that exam was.
If nothing else the bot could be a good training tool for the people who design exams.
It didn't pass the bar; it passed one section of the bar exam, evidence and torts. Its overall score was 50%, well below the 68% minimum required to pass.
Not only one of the world's most famous lawyers, but also a man of culture
The first 10-15 seconds always kill me with laughter 😂 the slightly alarmed but understated yell at the robot lawyer face
4:53 - I KNEW IT
Legal Eagle isn't wearing a full suit, like all TV presenters!
I wasn't paying attention and actually got jumpscared at the start lmao.
I am an IT professional with a Ph.D. in Public Policy, and I agree with you on AI robot lawyers. I place this in a model I call the electronic mediation of the social process of work, which has many, many problems.
I know a way he can test his AI without trying to get it to be used in a real court. Feed it case studies you’d get in law school and see how it does.
A good follow up video to this would be one about how AI is currently used by practicing lawyers.
One area that is promising is estate planning, which has the potential to make wills and trusts much more accessible to a wider clientele.
this bozo could never admit he is outdated. no chance
You know, it would've been easier for him to get out of paying for parking tickets if he'd just built a robot/bot to drive his car for him, since he's clearly incapable of doing it himself.
I think it would be pretty cool to set up something like this even if it's a mock meeting
Imagine buying an AI to run your court trial, effectively committing legal fraud for a traffic ticket, lmfao.
You’re allowed to rep yourself
@@alexatedw yeah but if you use an ai you aren’t representing yourself your having an unqualified 3rd party represent you. If employed that 3rd party is effectively committing fraud which may have legal repercussions for the writer of the program.
@@camelloy yes you are
@@camelloy if you bring a reference book with you, you are allowed to reference it. Same thing here. This is a reference tool
@@alexatedw It's not, because a reference document does not provide arguments for real-time situations; it can only provide a methodology for general circumstances. A reference document does not itself prescribe action; it only describes methods.
Sovereign Citizen, Tech-Bro, Affluenza combined!
It would be interesting to see what happens if you make ChatGPT take a bar exam. I have no idea what is involved in that, so I don't know if it'd be feasible, but hey, could get some fun content out of it.
I agree - and I wonder if passing it would mean the robot could go in front of the Supreme Court someday?
Has it passed the SAT yet?
@@oldvlognewtricks yes, and it scored a 140 IQ
I believe at the moment it scores higher than random chance, but not high enough to pass in most cases. At least, that's the last I heard.
I assume past bar exams have had their answers posted online, so ChatGPT would be able to pass the exam, not bc it knows how to answer, but bc it knows what the desired answer is.
In this video: Lawyer tries to convince us we still need lawyers.
Considering the amount of Zoom Court I watch...I can attest, we still need lawyers.
We also need: sensible Probation Officers, more social workers, more healthcare specialists and assistants and proper language interpreters.
AI can be useful, but it's better served helping people keep up to date on their taxes, insurance, and driving permits. Could we not have a guided prompting system that makes renewing a driving licence easy, instead of going to the DMV??? That would be nice.
AI: Your honor, I object!
Judge: Why?
AI: Because it's devastating to my case!
Judge: Overruled.
AI: Good call!
It would be interesting to ask ChatGPT to write a Legal Eagle video and get Devin to read it out
So funny 😂
please this man doesn't need a high blood pressure
Imagine living in a country where Separate but Equal would forever be legal because Oliver Brown's lawyer decided to use a chat bot.
I love that you chose Kent Brockman as the inspiration for your robot lawyer speech. I can tell you are a man of good taste.
Great vid as usual. Opening with the Simpsons/Kent Brockman reference automatically got my thumbs-up!
I imagine the Lawyer AI will scan the Internet &, based on Lawyer Devin's commentary from previous videos, choose to emulate Lionel Hutz, esq.
Also, as someone who has been working in a municipal court for less than a year so far, I can understand the claim of "tickets-for-funding" given the number of "Not Guilty" pleas I've seen paired with, "I've never been in that location & the description of the car isn't mine."
*Thanks for the Content!*
14:48 "maybe tell 'em to go away and come back when they're mastered parallel parking"
Oof
The only thing a robot lawyer would be good for is searching through text to find every mention of a word. (aka what it already does)
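For what it's worth, here is a minimal, hypothetical sketch of what that kind of "find every mention of a word" search amounts to. The folder name and search term are made up for illustration; this is not anything the robot lawyer actually runs.

```python
# Minimal sketch: plain keyword search over a folder of text files.
# "case_files" and "negligence" are illustrative placeholders.
from pathlib import Path

def find_mentions(folder: str, term: str) -> list[tuple[str, int, str]]:
    """Return (filename, line number, line text) for every line containing term."""
    hits = []
    for path in Path(folder).glob("*.txt"):
        # Read each file and record every line that mentions the term (case-insensitive).
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if term.lower() in line.lower():
                hits.append((path.name, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for name, lineno, text in find_mentions("case_files", "negligence"):
        print(f"{name}:{lineno}: {text}")
```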
You are way behind on what the capabilities of this technology are. We are all laughing at this guy and pointing out the limitations of his tech, but his main mistake is jumping the gun. AI can't do what this kid claims yet, but it will sooner or later, and the legal industry and every other knowledge-based industry will have to come to terms with that.
@@000EC AI can't, and never will, handle cases where "it depends." Which, as LegalEagle has pointed out many times, is literally the entire legal profession.
@@000EC Likely very much on the later side. Ending up with chatbots that are predisposed towards actually solving your legal problems, rather than just taking the easy route and convincing you, a non-lawyer, that they're doing a good job, would likely take a lot of rethinking in how they design/train these things.
@@000EC Nice fantasy, bro. Letting precise machines handle nuanced and often messy human business is definitely not technofetishistic at all.
@@000EC No, it will not, because hopefully humans will stop it from happening. Or we'll be wiped out by AIs...
This would be pretty interesting. Especially when going to treat a witness as Hostile *eyes flash red*
Fun fact: lawyers don't charge hundreds of dollars an hour to copy and paste a few documents. They charge hundreds of dollars an hour to know *which documents* need copying and pasting. The actual copying and pasting may or may not be done by a comparatively underpaid assistant (who may or may not accidentally send the opposing counsel an entire phone image instead of two emails and one text message cherry-picked from said image).
This is reminiscent of how doctors charge lots of money *not* to tell you to drink more water and less alcohol, and possibly go to bed earlier, but to know that *you have a hangover* and are possibly sleep-deprived. And yes, I am specifically thinking of the guy who came up with the "let's have a bot argue in front of the Supreme Court" idea.
It will happen, eventually, that AI will come up with entire legal strategies which the lawyers using them will follow, but as long as there is a human judge there will be human lawyers.
That "bailiffs WILL tackle you" card seems oddly specific; is there a story Legal Eagle would be willing to share?
It happens a lot lol
In the videos where he reviews fictional courtroom scenes, there are often flaws in where and when the characters move around the courtroom that boil down to "If you do that in a real courtroom, the bailiffs will tackle you."
Bayleef used Tackle
It’s super effective!