How to Use ChatGPT to Ruin Your Legal Career

  • Published 21 Nov 2024

COMMENTS • 7K

  • @LegalEagle 1 year ago +1431

    ⚖ Was I too harsh on these guys?
    📌 Check out legaleagle.link/80000 for a free career guide from 80,000 Hours!

    • @danielsantiagourtado3430 1 year ago +64

      You're always honest and tell it like it is, and that's why we love you! 😊😊❤❤❤❤

    • @BylerIsCannon 1 year ago +10

      I'm early somehow

    • @ViableGibbon 1 year ago +16

      Please Do A JFK 1991 FILM REVIEW on its LAW ACCURACY PLEASE PLEASE PLEASE!!!

    • @dragonprincess8205 1 year ago +17

      You were perfect as usual. Adore your channel. Thank you for bringing laughter to us in these stressful times

    • @pueblonative 1 year ago +1

      Confess, you had a moment where you would have liked to just beat these two knuckleheads around the courtroom with the Federal Reporter.

  • @grfrjiglstan 1 year ago +28336

    Imagine calling up your lawyer to see how the case is going and finding out he's now in bigger legal trouble than you ever were.

    • @henotic.essence 1 year ago +1315

      That would be my 13th reason 😩 legal stuff is already so stressful, the costs are ridiculous, so finding out my attorney went and caught a case would be brutal 🤣

    • @Officialmartymars 1 year ago +496

      ​@@henotic.essence these would be no-win-no-fee lawyers for sure. Real money buys real lawyers

    • @jackryan444 1 year ago +638

      Tbf… a judge might go lenient on you if it turns out your lawyer's doing this. Bigger fish, ya know.

    • @phoebehill953 1 year ago +22

      It happens

    • @o0alessandro0o 1 year ago

      @@jackryan444 If you are a defendant (and lose), you may get a mistrial out of your lawyers being... Incompetent. If you are a plaintiff, you are probably SOL.

  • @mcdonnell761 1 year ago +9708

    This will be used as reference in law schools for decades to come. Ethics professors have just gained hours of material for presentations.

    • @novastar6112 1 year ago +543

      2023 edition textbooks are gonna go insane over this one xd

    • @SpitefulAZ 1 year ago +248

      The lawyers will finally make their mark on history! 😅😂

    • @player400_official 1 year ago +219

      I once read an ethics board case about a lawyer who got into a brawl with a judge and a court reporter. He got disbarred.

    • @Mr.Feckless 1 year ago +19

      I'd say they have about 29 mins

    • @f.g.5967 1 year ago +11

      Or alternatively, you can invent your own references!

  • @puck5370 1 year ago +4852

    I'm a law student. I got tired of searching for cases to reference that matched very specific criteria; three years of looking through Jade and CaseLaw is like trying to find the Holy Grail. So I tried using ChatGPT to find the cases to give myself a break, and the absolute confidence it had while giving me a list of non-existent cases is something I aspire to have. I have never gone from happiness to hopelessness as quickly as I did when I looked to see if they were real.

    • @katarh 1 year ago +816

      And now you understand why lawyers are well paid. The bulk of work in law is boilerplate templates, but people pay a LOT of money to have those templates be correct. And lawyers are also one of the few professions punishable by license loss when they fail to keep that promise (medical doctors and professional engineers being some of the other ones.)
      I wish you luck in school!

    • @puck5370 1 year ago +167

      ​@@katarh thankyou!! (you're so right on that though btw)

    • @shenghan9385 1 year ago

      If you are dumb enough to think ChatGPT is smarter than an average lawyer, then you are probably not entirely suitable to be a lawyer.

    • @alainportant6412 1 year ago +122

      Bing sounds like it would do a better job at finding relevant cases, since it can actually search the internet.

    • @webbowser8834 1 year ago +298

      Good news: you are already a better lawyer than the two subjects of this video.

  • @chouyi007 1 year ago +2573

    Man, my blood ran cold when I heard that the Judge himself had contacted the circuit from which the fake decision had purportedly come. I was a clerk at the Federal Circuit from '15 to '17, and I remember once when Chief Judge Prost had discovered a case that had been cited in support of a contention that it did not actually support, she really let the citing attorney have it in oral arguments. That was the scariest scene I ever saw as a new lawyer, and that was worse than I could have imagined, so I cannot even begin to conceive how bad it was for these plaintiff attorneys.
    Side note, Chief Prost was a fantastic and fair judge, and a very nice and kind person, but the righteous wrath of a judge catching an attorney trying to hoodwink her/him is about the most frightening thing for a lawyer.

    • @CleopatraKing 1 year ago +213

      When a Judge catches u being a shitter they channel Athena's wrath

    • @icahopilm898 1 year ago +17

      @@CleopatraKing lmfao

    • @artsyscrub3226 1 year ago +105

      @@CleopatraKing
      Athena personally comes and chews out lawyers for disrespecting her creation

    • @Mordecrox 11 months ago +54

      By the way, I didn't expect a judge to reach for a "civilese" word like "gibberish", when civilians so often use "legalese" to describe lawyers' mumbo-jumbo.
      He truly was volcanic, as LE said.

    • @asole100 9 months ago +4

      Well, if a judge is in the wrong there's no real punishment for them, which makes them even scarier... IMO

  • @bookcat123 1 year ago +3066

    The thing is, I’ve had a coworker do something similar. They asked for a report on data we don’t have access to, I tried to explain it wasn’t possible, they then turned around and asked ChatGPT to write the report and sent that to me with instructions to “just clean it up a bit” - I say we can’t use it. They say we can. I then spend hours digging into everything it said and looking for every instance that’s contradictory or references data we do have access to so I can compare. Send a full report on the report. Finally get shock & horror “I didn’t know it could lie!” and we can finally start the actual project, redefined within the bounds of what we can access. 🤦🏼‍♀️

    • @arturoaguilar6002 1 year ago +654

      “I didn’t know ChatGPT could lie” is going to be the phrase of 2023, isn’t it?

    • @LimeyLassen 1 year ago +417

      You can't even open the chatgpt page without seeing a popup telling you that it lies

    • @gcewing 1 year ago +362

      I don't think "lying" is the right word. That implies that it's self-aware enough to know that it's saying something that isn't true. But it's not aware of anything. It's just a glorified Markov chain, generating text according to a probability distribution.

    • @bookcat123 1 year ago +264

      @@gcewing Yes, but try explaining that to non-tech people who still don’t understand why they can’t name a file “Bob’s eggs” and have it return when you do a text search on “Robert” or “breakfast” (your search program is broken! That’s your problem not mine!) and think that every single number in Google ad predictive recommendations is guaranteed truth. 🤦🏼‍♀️🤷🏼‍♀️

    • @birdn4t0r7 1 year ago +48

      @@bookcat123 this is so weirdly specific, i'm not even in tech but i understand how search functions work cuz i have done some stuff with scientific database searching…has this actually happened to you?
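
An aside on the file-search example in the thread above: ordinary file search is literal substring matching, not semantic matching, so a file named "Bob's eggs" will never come back for a query like "Robert" or "breakfast" unless someone supplies that mapping explicitly. Below is a minimal sketch of the difference; the file names and the `ALIASES` table are invented purely for illustration, not a description of any real search tool.

```python
# Minimal sketch: literal substring search vs. a hand-built (hypothetical) alias table.
FILES = ["Bob's eggs.xlsx", "Q3 budget.docx", "Robert - breakfast invoices.pdf"]

def literal_search(query, files):
    # Plain case-insensitive substring matching: this is all a basic file search does.
    return [f for f in files if query.lower() in f.lower()]

# Any "Robert" -> "Bob" or "breakfast" -> "eggs" mapping has to be supplied by a
# human (or a separate system); the search program will never infer it.
ALIASES = {"robert": ["bob"], "breakfast": ["eggs"]}

def alias_search(query, files):
    terms = [query.lower()] + ALIASES.get(query.lower(), [])
    return [f for f in files if any(t in f.lower() for t in terms)]

print(literal_search("Robert", FILES))  # misses "Bob's eggs.xlsx"
print(alias_search("Robert", FILES))    # finds it only because we wrote the alias ourselves
```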

  • @NaudVanDalen 1 year ago +7761

    Imagine paying a lawyer thousands of dollars and they use ChatGPT. I'd sue them in addition to the original lawsuit to get my money back.

    • @CapitalistSpy 1 year ago +395

      I would bring these lawyers right through their Bar discipline to get them disbarred ASAP!

    • @gabrote42 1 year ago +15

      Word

    • @JL-xv9di 1 year ago +67

      Plaintiffs' lawyers are paid if they win, so there wouldn't have been money given to him.

    • @Tomas81623 1 year ago +49

      I mean, would you trust yet another lawyer to handle yet another case after these guys did this? Although, if they defend themselves, it may be an easy case.

    • @charliehamnett5880 1 year ago +101

      @@Tomas81623 I would, but only because I'd know the idiots I hired the first time have just made sure no one else is stupid enough to try what they did, especially not with the same client.

  • @ellewoods6549 1 year ago +1188

    FYI: when a judge asks you to produce cases (that their law clerk could have found) it means THEY DON’T EXIST. That was the FIRST clue that this was not going to end well.

    • @Sugarman96 1 year ago +154

      Absolutely insane. Not a lawyer, but from Devin's explanation of the citations, it seems like finding a case is almost instant; it's so obviously a gotcha when you're asked to find the cases that you cited.

    • @williamharris8367 1 year ago +125

      I have encountered the very occasional situation where something is mis-cited and so a trek to the library is required to check the paper volumes or reference sources, but most case law can readily be found online.

    • @groofay 1 year ago +82

      I remember Devin saying on this channel multiple times: in court, you don't ask a question unless you already know the answer. That lawyer's case was dead on arrival.

    • @claiternaiter446 1 year ago +48

      Westlaw and Lexis are basically search engines for legal cases. You can search for relevant cases by keywords or name of the case, but if you have the citation, it should pretty much instantly find it for you. It even keeps you updated on if parts of the case are outdated due to new case law.

    • @stefanowohsdioghasdhisdg4806 1 year ago

      The *best* case scenario is that you made a typo or something so that it wasn't able to be found - which just sounds very careless and unprofessional. And when the *best* case is that you are an unprofessional nincompoop who doesn't proofread their important legal documents... yeah you're pretty SOL
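
A side note on why the replies in this thread say a real citation surfaces almost instantly: a reporter citation such as "123 F.3d 456" is essentially a structured key (volume, reporter series, first page), so checking it is a direct lookup rather than an open-ended search. A minimal sketch of parsing that key follows; the example citation strings and the tiny `KNOWN` set are placeholders for illustration, not a real legal database or service.

```python
import re

# "123 F.3d 456" -> volume 123 of the Federal Reporter, Third Series, starting at page 456.
CITATION = re.compile(r"^(?P<volume>\d+)\s+(?P<reporter>[A-Za-z0-9. ]+?)\s+(?P<page>\d+)$")

# Toy stand-in for Westlaw/Lexis/a shelf of bound reporters (illustrative entry only).
KNOWN = {(123, "F.3d", 456)}

def parse(cite: str):
    # Split a citation string into its (volume, reporter, page) lookup key.
    m = CITATION.match(cite.strip())
    if not m:
        return None
    return int(m.group("volume")), m.group("reporter").strip(), int(m.group("page"))

for cite in ["123 F.3d 456", "987 F.3d 654"]:
    key = parse(cite)
    status = "found" if key in KNOWN else "not found in this toy index; verify before citing"
    print(f"{cite} -> {status}")
```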

  • @zoecollins3057 11 months ago +718

    I finally have confirmation on whether the background is a green screen. Seeing him pull a book from behind him made me happy

    • @efulmer8675 7 months ago +94

      Everybody's talking about ChatGPT but this tiny little nugget was the most fascinating part of the whole thing. Also the car alarm sirens after he yeets the book into the background going on for several more seconds while he's talking made me laugh.

    • @silveryin4341 4 months ago +28

      When he grabbed that book it broke my entire brain. Now I want to know what all of the books are.

    • @typacsk 4 months ago +17

      "These books behind me don't just make the office look good, they're filled with useful legal tidbits just like that!" -- Lionel Hutz, attorney* at law

    • @theGhostWolfe 3 months ago +2

      @@silveryin4341 They look like reporters (the books of case law he describes around 10:56).

    • @blackleague212 2 months ago

      @@typacsk some of those books are from the 70s

  • @TalkingVidya 1 year ago +4596

    As a computer engineer with a deep love of law, it drives me crazy that they even tried to do this.
    ChatGPT does not give you facts, it gives you fact shapped sentences. ChatGPT does not fact-check, it only checks that the generated text has grammatical sense

    • @baronvonlobotomus7530 1 year ago +22

      Verified account without any likes or comment?

    • @qwqk0xkx 1 year ago +10

      Shaped?

    • @Varthismal 1 year ago +4

      What are you doing here, Fred?

    • @stevezagieboylo9172 1 year ago +300

      It's a little more than grammatical, but you're essentially right. ChatGPT makes a realistic-looking document. If that document requires citations, footnotes, or a bibliography, the AI makes realistic-looking ones. It does not understand that citations actually refer to something that actually exists in the world, it just understands from millions of samples what citations look like, and it is able to make ones like them.

    • @mikicerise6250 1 year ago +83

      *shrug* The ChatGPT website literally warns you before you sign up that it is not always factual and sometimes makes things up. If you don't want to take that warning seriously, knock yourself out.
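
The thread above is describing why a generated citation has to be checked against an external source of truth before it goes into a filing: the model produces citation-shaped text, not records. A minimal sketch of that check, assuming the draft's citations have already been extracted into a list and that `verified_citations` is a whitelist of cases a human has actually pulled up and read; both are hypothetical stand-ins, not any real workflow or database. (The first example citation below is a real Supreme Court decision; the second is formatted like one of the fabricated cases discussed in the video.)

```python
# Hypothetical whitelist: only citations a human has located and read go here.
verified_citations = {
    "Zicherman v. Korean Air Lines Co., 516 U.S. 217 (1996)",
}

# Citation strings extracted from a draft brief (illustrative values only).
draft_citations = [
    "Zicherman v. Korean Air Lines Co., 516 U.S. 217 (1996)",
    "Varghese v. China Southern Airlines Co., 925 F.3d 1339 (11th Cir. 2019)",
]

def unverified(citations, whitelist):
    # Anything not on the whitelist still needs a human to find and read the opinion.
    return [c for c in citations if c not in whitelist]

for cite in unverified(draft_citations, verified_citations):
    print("STOP: no verified source for ->", cite)
```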

  • @Am-Not-Jarvis 1 year ago +1669

    I’m a civil engineer, and “if your name is on it, you’re responsible for it” is an extremely important principal. A lot of our documents need to be signed and stamped by a Professional Engineer, and the majority of us (especially the younger ones) don’t have this, yet we do most of the work anyway. Ultimately, if a non-PE does the work, a PE stamps it, and something goes awry, then it’s on the PE. You’d be surprised at how little time the PEs spend reviewing work that they’re responsible for.

    • @Ferretsnarf 1 year ago +135

      There's a reason I never got my PE. I didn't want to be the professional fall guy. A PE is never going to realistically be given the time needed to actually verify all that work to a good standard - he's just put there by the firm to slap his name on it.

    • @candice_ecidnac 1 year ago +22

      You mean principle not principal but yes, if your name is on it then you need to make sure it's above board.

    • @colintroy7739 1 year ago +54

      Hello, fellow civil engineer(s). I was IMMEDIATELY drawing parallels to PE stamps when he brought up local counsel, and yeah... barely checking before stamping is wild to me, given how much responsibility then falls on your shoulders.

    • @meghanhenderson6682 1 year ago +30

      Hell, I work at a clothing store and we don't use our sales password to let our coworkers check people out unless we're positive they did a good job because we don't want to take the flack if they didn't. Imagine having fewer standards than people working sales.

    • @lostprincess3452 1 year ago +14

      Mechanical engineering student here, this is exactly why I haven't decided if i want my PE or not yet

  • @supersonic7605 1 year ago +5535

    Honestly, even if ChatGPT didn't exist, it really seems like these lawyers would've still done something stupid and incompetent that would've gotten them sanctioned

    • @sownheard 1 year ago +297

      😂 they didn't even check the source 😭 rookie mistake.
      ChatGPT clearly states it can make stuff up.

    • @ericmollison2760 1 year ago +206

      Schwartz explained he used ChatGPT because he thought it was a search engine and made several references to Google. If only it was a real search engine like he apparently usually uses he could be certain it would only say the truth ;)

    • @TextiX887 1 year ago +33

      @@ericmollison2760 I see what you did there ;)

    • @deletedTestimony 1 year ago +42

      Tbh if the claim of the lawyers working together since 1996 is true they've been handling it for a good while, this may have been a slip-up by the elderly

    • @alex_zetsu 1 year ago +21

      @@sownheard He says _he_ did try to check, but couldn't find it and assumed it was just something Google couldn't find and assumed ChatGPT must have given him a summary.

  • @praus 1 year ago +451

    I’ve never worked directly with a judge, but I’m going to guess that making a judge research several cases that you refuse to research yourself (not to mention the AI crap) is going to make them very very angry.

    • @angelachouinard4581 10 months ago +69

      Making a judge do work you should have done is like doing that to anyone else, except a judge has many ways to get back at you, and yeah, it does make them mad.

  • @valdonchev7296 1 year ago +1286

    The fact that ChatGPT has warnings about it not being a source of legal advice is the most damning evidence that these lawyers did not read through what they presented to the court. Perhaps if they had been more observant, they would have followed ChatGPT's advice to "consult a qualified attorney".

    • @Jazzisa311 1 year ago +56

      I use ChatGPT as a tool to narrow stuff down, basically to find out what I should google, but I know to ALWAYS CHECK EVERYTHING. And if my question ever gets too specific, it always states: "I'm an AI model, I'm not qualified to advise on this, ask a professional." Seriously, I can't believe they thought they'd get away with this...

    • @ZT1ST 1 year ago +22

      My immediate first thought is a pretty common set of phrases that internet comments use: "IANAL", "You'd have to check with a lawyer", "Get a lawyer to check this", "This is not legal advice.".
      You know, the type of language ChatGPT probably was trained on, and probably had in its results somewhere.

    • @valdonchev7296 1 year ago +13

      @@ZT1ST Possible, but I think this response might have been implemented intentionally, for the same reason that all those phrases are common in the first place. Kind of like how there are certain topics GPT will avoid (unless asked very nicely)

    • @a2falcone 1 year ago +19

      @@valdonchev7296 ChatGPT is specifically programmed to warn people that they shouldn't use it as a replacement for professional advice.

    • @VuLamDang 1 year ago +23

      The warning about it not being able to produce reliable code has never stopped my students from trying to use it... and then failing the course. The human ability to selectively filter text is just...

  • @emmamakescake 1 year ago +4928

    I'm a medical student and one day the residents and I used ChatGPT for fun. I cannot even articulate how bad it is at medicine. So many random diagnoses and blatant wrong information. I'm not surprised the same is true for law

    • @catastrophicblues13 1 year ago +265

      Not surprised. I don't know what data it was trained on, since I'm not in the field, but it does not appear to have been fed research.

    • @chickensalad3535 1 year ago +362

      @@rickallen9099 Why are you copy-pasting this everywhere?

    • @I_am_Toro 1 year ago

      @@chickensalad3535 it's a bot

    • @lilyeves892 1 year ago

      @@chickensalad3535 dude's trying to look good for our inevitable AI overlords

    • @universe1879 1 year ago

      @@rickallen9099 yes, but it ain't here for like at least 5-10 years

  • @TyphinHoofbun 1 year ago +3320

    Having ChatGPT write the argument with the fake citations was incompetence.
    Having ChatGPT generate the cases and submitting them as if they were real was malice.
    I say they should both be heavily sanctioned, if not outright disbarred.

    • @dracos24 1 year ago +260

      It doesn't matter *how* the papers were generated. What matters is that the information was verifiably false, they signed it, and submitted them to the court.

    • @LiveWire937 1 year ago +93

      Maybe malice was the point, and their whole goal was to martyr themselves to set the precedent on how using AI to prepare a legal argument will be treated. Honestly, one could probably do a halfway decent job of using GPT 4 to speed up legal research, and potentially even have it fact check itself, but it would involve heavy utilization of API calls, the creation of a custom trained model that's basically been put through the LLM equivalent to law school, application of your own vector databases to keep track of everything, and of course, a competent approach to prompting backed by the current and best research papers in the field... not just asking it via the web interface "is this real?"
      In short, their approach to using ChatGPT in this case is to prompt engineering what a kindergartener playing house is to home economics. All they really proved here was that they're bad lawyers and even worse computer scientists, but now that this is the first thing that comes to mind when "AI" and "lawyer" are used in the same sentence, what good lawyer would be caught dead hiring an actual computer scientist to do real LLM-augmented paralegal work? What judge would even be willing to hear arguments made in "consultation" with a language model?
      I realize this thought doesn't get past Hanlon's Razor, of course. It's far more likely that a bad lawyer who doesn't understand much of anything about neural networks just legitimately, vastly overestimated ChatGPT's capabilities, compared to a good lawyer deciding to voluntarily scuttle their own career in order to protect the jobs of every other law professional in the country for a few more years... but it's an entertaining notion.

    • @a2falcone 1 year ago +144

      @@dracos24 It does matter. It's wrong to submit information provided by a third party (to LoDuca by Schwartz, and to Schwartz by ChatGPT) without having verified it. It's much worse to fabricate that information yourself when you're being ordered by the judge to explain yourself. At first it was severe negligence, but then they were outright lying.

    • @ShireNomad 1 year ago +38

      Welcome to the 2020s, in which lawyers, finding themselves in self-constructed holes, just. Keep. Digging.

    • @larrywest42 1 year ago +74

      If clear evidence of intentionally misleading a federal court, after being put on notice (show cause order), isn't sufficient for disbarment, what is?
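
The long reply in this thread gestures at what a more defensible setup would look like: retrieve real documents first, then let the model draft only from what was retrieved, with a human still checking the result. Below is a minimal retrieval sketch under those assumptions, using bag-of-words cosine similarity over a tiny in-memory corpus; the corpus text, the "illustrative citation" labels, and the idea of pasting retrieved passages into a prompt are all illustrative, not a description of any real product or the lawyers' actual workflow.

```python
import math
from collections import Counter

# Tiny stand-in corpus: in a real setup these would be full opinions pulled from a
# trusted database, each stored under its verified citation.
corpus = {
    "Case A (illustrative citation)": "tolling of limitations period bankruptcy stay",
    "Case B (illustrative citation)": "montreal convention carrier liability injury",
}

def bow(text):
    # Bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    q = bow(query)
    scored = sorted(corpus.items(), key=lambda kv: cosine(q, bow(kv[1])), reverse=True)
    return scored[:k]

question = "does a bankruptcy stay toll the limitations period"
hits = retrieve(question)
# Only retrieved, human-verifiable passages go into the prompt; the model would be told
# it may cite these and nothing else, and a human still reads the sources and the draft.
prompt = "Answer using only these passages:\n" + "\n".join(f"{c}: {t}" for c, t in hits)
print(prompt)
```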

  • @dilfpickler 1 year ago +54

    The fact that at 18:00 you yell with a straight face, yet you can feel every bit of the emotion behind it: excellent. This is such a great channel!

  • @chrismcdonald2947 1 year ago +2329

    Being asked as not only an adult but an adult lawyer if something is a book is embarrassing at the highest level

    • @TheRuthPo 1 year ago +141

      under oath

    • @ptorq 1 year ago +44

      Honestly I don't know the answer to that question. My gut feeling would be to say "no, it's A LOT OF books", but IANAL and maybe technically/legally the entire compendium is regarded as a single "book" even though it apparently has enough pages to justify being bound into at least 925 volumes.

    • @Krahazik 1 year ago +49

      That's the point where you know the judge is done with them...

    • @shieldgenerator7 1 year ago +1

      LOL

    • @negative6442 7 months ago +2

      As opposed to a child lawyer?

  • @TeamDreamhunter 1 year ago +3180

    It's not just that CGPT *can* make stuff up, it's that that's *all* it's designed to do. It's a predictive text algorithm. It looks at its data set and feeds you the highest match for what you're asking, and literally nothing else. It looks at the sort of data that goes in a particular slot, fills that slot with data, and presents it to you. It can't lie to you because it also can't tell you the truth, it just puts words together in an algorithmic order.

    • @Thetarget1 1 year ago +317

      Chat GPT is trained to generate text which humans see as looking real. That's it. There's no implementation of truthfulness in its training, at least not originally.

    • @小鹿-p8f 1 year ago +227

      it's truly mind boggling how many people don't understand the basics of how these models work. "It'S LyInG!!" no mate, the predictive language model doesn't have an intention, it's just stringing words together based on an algorithm...

    • @hannahk1306 1 year ago +112

      ​@@ApexJanitor It can't lie, because it can't think or have intent. Nobody fully understands how these models produce their results, but they do understand the kinds of things that are happening and what its limitations are.

    • @Twisted_Code 1 year ago +61

      @@ApexJanitor there's a difference between not fully understanding something and having no idea what's going on. I don't think this model is close enough to sentient to be able to "lie" in the moral sense or "want" anything (though it certainly does a good job passing the Turing test, so I can understand the confusion). Its utility function is essentially a fill-in-the-blank algorithm, so of course if you ask it subjective questions, as the idiot lawyer did, it's going to seem to lie.
      Also, what's with the tone of your message? Seems kinda hostile, and the "Hahaha"s make me feel like The Joker has had a hand in writing this; why not LOL?

    • @jonathanrichards593 1 year ago

      @@ApexJanitor I see what you're driving at, but the fact that a neural network of this scale is not comprehensible does not mean that we don't know what it is doing. It's predicting words, nothing more and nothing less. It's not some new and unfathomable way of thinking and responding to the world, it's just mimicking human language (and not very well, at that). You wrote "... it lies if it wants" but that assumes some sort of mind that "wants". ChatGPT and its ilk don't have minds.
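
To make the "predictive text" point in this thread concrete: at each step a model like this scores candidate next tokens and samples one, and nothing in that loop consults a record of real cases or facts. A toy sketch follows; the vocabulary and probabilities are invented purely for illustration (a real model has tens of thousands of tokens, learned weights, and recomputes the distribution after every token).

```python
import random

# Invented next-word probabilities for one context; a real LLM produces a
# distribution like this from billions of learned parameters, for every context.
next_word_probs = {
    "v.": 0.40,        # a citation-shaped continuation
    "held": 0.25,
    "Airlines": 0.20,
    "Co.,": 0.15,
}

def sample_next(probs):
    # Pick a continuation in proportion to its probability. No step here ever
    # checks whether the resulting sentence describes something real.
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

context = "Varghese"
for _ in range(3):
    # (A real model would recompute the distribution after each new token.)
    context += " " + sample_next(next_word_probs)
print(context)  # plausible-looking, but plausibility is the only criterion applied
```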

  • @TheBoxyBear 1 year ago +7921

    Asking Chat GPT to validate its own text is like asking a child if they're lying. What do you expect?

    • @justherbirdy 1 year ago +584

      That's seriously the best bit, "are you sure this is all true?" "of course! check anywhere!"
      And then they DIDN'T CHECK. Because how could anything on the internet be false?

    • @genericname2747 1 year ago +280

      The source is literally "I made it up"

    • @snowball_from_earth 1 year ago +199

      ​@@genericname2747source: trust me, bro

    • @alex_zetsu 1 year ago +112

      Honestly this is particularly bizarre. If they had unquestioning faith in AI and didn't think they needed to validate it, well, that's bad, but I can understand the train of thought. So imagine one of them called an expert witness, he sounded good, and they decided he didn't need to be vetted. But maybe the so-called expert seems a bit shady, or his documents don't seem to be in order. If you decided to vet that expert, would you ask _the expert himself_ about his own work?

    • @PetyrC90 1 year ago +54

      This could be said for literally every human. It is an extremely bad argument against AI. The person creating the fact can't be the one validating it. That's exactly why there is something called "peer review" in academics.

  • @Willow_Sky 1 year ago +220

    A recent survey of ChatGPT's performance on math was published, and it really illustrates why you shouldn't rely on these things to answer questions for you. It went from answering the test question correctly more than 98% of the time to barely 2% in a matter of months. Not only that, it has in some cases started to refuse to show its work (aka why it is giving you the answer it is giving you).

    • @MegaBlair007 10 months ago +24

      So it turned into a 5th grader?

    • @miickydeath12 9 months ago +9

      I've noticed this; it's like they dumbed it down on purpose to stop people from doing this. What happened to ChatGPT being capable of passing medical and law classes?

    • @Willow_Sky 9 months ago +42

      @@miickydeath12 it doesn't seem like it was intentional. The engineers seemed pretty baffled by that survey. If I had to guess it has more to do with people intentionally inputting incorrect information to mess with the AI

    • @TMilla0 9 months ago +11

      @@Willow_Sky Probably similar to what happened to Tay when she released.. wow 8 years ago now. I remember Internet Historian doing a great video on it. Going to have to go watch it again.

    • @bydlokun 7 months ago +7

      @@Willow_Sky AI is very dependent on its training material: worse-quality training data means worse-quality results. GPT-4 has a much bigger quantity of training data than GPT-3.5, but its quality is in question.
      Also, in cases where GPT-3.5 would return "no data found", GPT-4 generates random gibberish.

  • @Bazil496 1 year ago +5078

    As a Machine Learning Engineer, seeing Devin explain Chatbots better than 99% of the people in the world who think it's magic or something made me tear up

    • @jooleebilly 1 year ago +340

      It's because he's smart and he and his team do their research. That's why he's in The Bigs. P.S. Congrats on being a Machine Learning Engineer, that's amazing! Please help keep us safe from them? Or at least keep it obvious when someone is being an idiot when they use it. Thanks, Your Friendly Content Writer and IT Specialist -

    • @Bazil496 1 year ago +91

      @@jooleebilly Thanks 😊

    • @eudstersgamersquad6738 1 year ago +97

      While Julie made that really nice comment, I just have to say that at first I read your name as Brazil.

    • @gavros9636 1 year ago +115

      He understands it better than these two lawyers did.
      As a hobbyist programmer, I knew where this was going from the very start. I use ChatGPT to help me learn and write code; I ask it how to perform a specific action in Python and it tells me the answer, but I am always double-checking it just to make sure it's not bullshitting me. I simply do not trust it, since I know it's just predicting text. This is one area where it is very good, but I am still completely suspicious of it, since I am very aware of the chatbot's habit of making things up.

    • @mubeensgh 1 year ago +18

      It’s because he is a very good lawyer that does his research and doesn’t make up citations.
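
One concrete way to do the double-checking described in this thread is to treat any generated function as untrusted until it passes tests you wrote yourself. A minimal sketch follows: the `parse_docket_date` function stands in for something a chatbot might hand you (its name and behavior are invented for illustration), and the asserts are the human-authored check.

```python
from datetime import date

def parse_docket_date(text: str) -> date:
    # Pretend this body came from a chatbot; treat it as unverified until tested.
    month, day, year = text.split("/")
    return date(int(year), int(month), int(day))

# Human-written tests: decide the expected answers yourself, then run them.
assert parse_docket_date("3/5/2023") == date(2023, 3, 5)
assert parse_docket_date("12/31/1999") == date(1999, 12, 31)
print("tests passed; now review the code itself before relying on it")
```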

  • @rossjennings4755 1 year ago +1018

    This story just supports my opinion that the biggest problem with ChatGPT is that people trust it despite having no real basis for that trust. It's exposing the degree to which people rely on how authoritative something sounds when deciding whether to trust it, rather than bothering to do any kind of cross-referencing or comparison.

    • @aoeu256 1 year ago +18

      There are prompt-engineering techniques that get chatGPT to do cross-referencing on itself that might improve it a bit, but you still have to find the sources in the end and do your own research.

    • @spacebassist 1 year ago +16

      ​@@aoeu256I was literally thinking about this today because I have no imagination for bing's AI search and I thought "I can't look up facts since I'm better off doing that the normal way, so what do I use this for?"
      Not to impose but if you have any ideas I'm all for them lmao, AI advancements are wasted on me until it's an AGI

    • @sjs9698 1 year ago +6

      @@spacebassist have you tried asking gpt what it could usefully do for you?

    • @spacebassist 1 year ago +18

      @@sjs9698 we're both finding out just how bad I am at this lmfao. No, I did not think of that
      I've been fixated on the fact that it can't provide unbiased fact or act like a person, that it's "just a language model that can kinda trick you"

    • @panagea2007 1 year ago +20

      Sounds like ChatGPT is a Republican.

  • @juliav.mcclelland2415 1 year ago +929

    As a legal assistant, watching this feels EXACTLY like watching a horror movie. No, I did NOT guess the cited cases didn't exist because that means nobody in this law firm checked the chat bot's writing for accuracy! You have to do that even when humans write it! They did NO shepardizing, no double-checking AT ALL?! How? Just... how?! And, oh Mylanta, that response to the show cause order... Dude, that... doesn't comply with the order. At all. What kind of lawyers were these guys?!

    • @TheVallin 1 year ago +56

      Bad ones, obviously. And a little more than just plain lazy.

    • @flamingspinach 1 year ago +107

      TIL a new word - Shepardizing: "The verb Shepardizing (sometimes written lower-case) refers to the process of consulting Shepard's Citations [a citator used in United States legal research that provides a list of all the authorities citing a particular case, statute, or other legal authority] to see if a case has been overturned, reaffirmed, questioned, or cited by later cases."

    • @juliav.mcclelland2415 1 year ago +79

      @@flamingspinach And you are now smarter than these 2 lawyers!

    • @503leafy 1 year ago +31

      The fact that they didn't double check it at all astounds me.

    • @SoManyRandomRamblings 1 year ago +68

      The fact they didn't double-check anything tells me these guys haven't done any work themselves in ages. They have grown so used to passing off the work and having others do it, and haven't been double-checking that work for such a long time, that they didn't even bother to double-check the "new hire" (doesn't matter if it is AI or human... for them to not bother verifying reveals they have a pattern).

  • @trishitatiwari4264 1 year ago +72

    I am a PhD student currently working on building models like ChatGPT, and this is hilarious! Really enjoy all your videos!!!
    But this completely makes sense, since these pre-trained models are typically trained on web text so that they can learn how English (or any other human language) functions and how to converse in human languages. But these models are not trained on any sort of specialized data for a given field, so they won't do well when used for these purposes.

  • @m0L3ify 1 year ago +2255

    Doing this in Federal court was bold (or just plain stupid.) The rules and standards are SO much stricter in Federal court!

  • @jsalsman 1 year ago +3153

    This is the first time in my life I've seen a lawyer sitting in front of a bookcase full of law books, AND ACTUALLY PULL ONE OUT. (edit: 25:30)

    • @joemck85 1 year ago +167

      I have to assume they do research when they aren't in the middle of a consultation. They mostly wouldn't use a physical book anyway since electronic databases can find things instantly and are always up to date with the latest info.

    • @parry3439 1 year ago +28

      @@joemck85 then what are the books there for? the branding?

    • @lesboobas 1 year ago +189

      ​@@parry3439 just for style

    • @MekamiEye 1 year ago +229

      @@parry3439 Before online databases became as thorough as they are (probably only in the last 10 years or so), people did have to have physical books, especially if they were going to use them often. I think Devin has been practicing long enough that he probably had physical copies before online databases. Notice how he stated the book in hand was a 2nd edition, which, looking it up, covers 1925 to 1993: long before things got scanned and put into binary. Devin himself gained his JD in 2008 from UCLA (wiki'd LegalEagle).
      Meaning, yeah, he probably keeps them as a memento of his early career and/or his university days. Lawyers needed LOTS of books, mostly cases and laws in their area of practice.

    • @kunegund9690 1 year ago +76

      @@MekamiEye There is a huge gap between 1993 and 2008 in computers and data storage. For example, 1993 is the game Doom on PC with floppy disks, and 2008 is Metal Gear Solid 4 on PS3.
      In 2003, most big journals were moving to the internet, and there were probably databases you could buy offline. That's probably why those books look so pristine! I thought it was a Zoom background or something.

  • @andruchuk 1 year ago +633

    I'm not a lawyer, but I used to work with the local government with some quasi-judicial hearings where some appellants would retain lawyers to argue for them. One of the funniest cases I had dealing with lawyers, the lawyer quoted a particular case in a written brief which was old enough that it wasn't in the legal databases and he didn't have the full case to provide for review. I walked down to my local library, grabbed the book with the decision, and actually read the decision. The lawyer was then surprised when I forwarded the scanned copy of the case on to him, and I had to point out that it would appear the quote was out of context, and that the decision actually supported the Crown's position. The appeal was then abandoned shortly thereafter.

    • @jeanmoke1 1 year ago +24

      Begs the question though, how did he find said case? Also, clearly a number of lawyers are not reading the cases they cite, very concerning.

    • @williamharris8367 1 year ago +83

      ​@@jeanmoke1 The original decision was probably cited in a later decision or a secondary source.
      That is a legitimate way to do legal research, but, as noted, it is necessary to actually _read_ a decision before citing it.
      I did legal research for government lawyers for more than a decade. I would summarize the salient case law and provide excerpts as applicable, but I always attached the full text of the decisions as well. I know that some (but not all) of the lawyers carefully reviewed my work.

    • @yuki-sakurakawa 1 year ago +3

      @@jeanmoke1
      Good lawyers can argue a ruling to make it appear that it supports their client. 🫡

    • @KingLarbear 1 year ago +2

      On a list of things that never happened

    • @carlodave9 1 year ago +10

      I’m not a lawyer, but I think a judge’s order that repeats the word “bogus” three times in one sentence in response to your legal filing is probably not good.

  • @flickcentergaming680 11 months ago +1454

    The fact that "bogus" is apparently a legal term makes me very happy.

    • @Lili-ey1nd 10 months ago +58

      Life is just silly sometimes 😂 we want it to be deep down but we don’t actually know life IS that silly , study human history in terms of the silly

    • @unclesam8862 10 months ago +6

      Why? I've only ever heard that word used in a professional setting. What's so funny about it?

    • @bobthegamingtaco6073 9 months ago

      @@unclesam8862 there are two groups of people that use "bogus": serious business people and carefree surfers lol. I imagine neither group is happy to have something in common with the other

    • @Stargate-over-starwars 9 months ago +33

      That's bogus, mann @unclesam8862

    • @-alovelygaycat- 9 months ago +74

      @@unclesam8862
      Bogus is a way to say "nonsense" that's usually associated with '80s and '90s slang. That's why it's funny.

  • @wurdnurd1 1 year ago +4132

    Public service announcement from your friendly librarian: DO NOT ASK FOR CITATIONS FROM CHATGPT. The citations are likely imaginary, and you will only waste your time and the librarian's. And you WILL be made fun of among the staff. (Worse than this happening in legal settings is that it happens in medical settings 😑)

    • @zuccero23 1 year ago +196

      Honestly ChatGPT has given me some good references (mostly what one would call "classic" papers, the ones that are old and cited a lot in other work), but obviously, google every single one before you use it anywhere. In my experience, it's about a 50% chance whether a citation is real, and then another 50% whether its summary is actually accurate to what's in the paper.

    • @KeraR432 1 year ago +90

      Even before things like Chat GPT we had people requesting fake citations, just another reason why librarians can never be fully replaced by AI

    • @rickallen9099 1 year ago +7

      Mock it now, but the technology is only going to get better with each iteration. Lawyers aren't safe from AI either. Nor are librarians.

    • @wurdnurd1 1 year ago +240

      @@rickallen9099 We don't mock AI, we mock the attempt to submit nonexistent citations without verifying that they're real.

    • @RubyRedDances 1 year ago +37

      It's crazy that a large language model is not able to cite the sources of its information.
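
For academic references specifically, "google every single one" can be partly automated: a DOI either resolves or it doesn't. A sketch along those lines, assuming the chatbot-supplied references came with DOIs. It uses the public Crossref works endpoint (https://api.crossref.org/works/<DOI>), which in my experience returns 404 for unregistered DOIs, but treat the endpoint behavior and the placeholder DOIs below as assumptions to verify yourself; and note that a resolving DOI still doesn't prove the paper says what the chatbot claims it says.

```python
# Sketch only: needs the third-party "requests" package and network access.
import requests

# Placeholder DOIs a chatbot might hand you; replace with the ones you were given.
candidate_dois = ["10.1000/xyz123", "10.1234/definitely.made.up"]

for doi in candidate_dois:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 200:
        title = (resp.json()["message"].get("title") or ["<no title>"])[0]
        print(f"{doi}: registered, title = {title!r} (still read the paper itself!)")
    else:
        print(f"{doi}: not found in Crossref (status {resp.status_code}); do not cite")
```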

  • @walls_of_skulls6061 1 year ago +2270

    Got to love how everyone is like "ChatGPT is going to take over everything", and then every time you apply it to something real, like this, it consistently comes up short

    • @AYVYN 1 year ago +337

      If you’re an expert in your field, ChatGPT is like a very smart freshman college student. Impressive to everyone else, but you see the issues.

    • @warlockd 1 year ago +260

      @@AYVYN At least a freshman knows to verify sources.

    • @walls_of_skulls6061 1 year ago +153

      @@warlockd not even that; ChatGPT has been known to lie! It tries to complete satisfying sentences, and then like half the time it just says stuff that sounds right.

    • @shadenox8164 1 year ago +73

      @@AYVYN If you're an expert in your field you'd be able to tell it doesn't understand what its saying.

    • @mangoalias608 1 year ago +115

      @@AYVYN its not even a student. its like taking all the books from your college library and putting them in a blender, and then getting a random person off the street to rearrange the pieces

  • @joshuawhitman8254 1 year ago +454

    What's clear to me is that this judge did his research. He very clearly understands that they didn't just ask ChatGPT to explain the relevant law but instead asked ChatGPT to prove their losing argument. ChatGPT only knows what words sound good together. It does not know why they sound good together.

    • @myself248 1 year ago +67

      That's the salient bit here -- the judge was able, not just to call their bluff, but to call two or three nested levels of bluff, by recognizing the kind of bullshittery that ChatGPT engages in, and HOW that crept into the process at each step along the way.

    • @omni42 1 year ago +38

      Right? That caught my ear too, the judge knew how this would've happened and was savvy enough to get the line of logic that would have produced these results. They were screwed.

    • @ildarion3367 1 year ago

      That's a bit of a simplification. A simplification we can make about most people when they speak or write too. If you use Bing you can do very fast legal work and it will give you the references. If the data is not available online, you can use GPT4's API and load your data.
      I trust GPT's level of reasoning more than I trust the average Joe.

    • @princess_gurchi 1 year ago +21

      @@ildarion3367 Average Joe doesn't know anything about anything, be it law, tech, economics, logistics or nuclear power plant design. That's kinda the point of how modern society works: no single person can learn everything there is to know about every topic. That's why we have specialization. You choose a field and over time become proficient with it, while completely disregarding other fields and relying on other people for their specialized knowledge through cooperation. While your claim is probably correct, it's not meaningful. Sure, chatGPT can form a more coherent response to a legal question than me, someone who never had any interactions with legal system in their life, but it still doesn't change the fact that neither of us are specialists in this field. And therefore both of our opinions are equally useless when compared to a real specialist.

    • @xponen 1 year ago +14

      @@ildarion3367 trust based only on charisma and fluent speech is a recipe for disaster.

  • @LeCommieBoi 1 year ago +140

    There was a test conducted in Quebec where the Bar examiners gave the bar examination to ChatGPT. TL;DR: it failed miserably

    • @KnakuanaRka 10 months ago +1

      Interesting; where did you hear about that?

    • @LeCommieBoi 10 months ago +4

      @@KnakuanaRka Local newspaper or TV news, I don't remember

    • @snmnmidld6203 10 months ago +2

      Source: trust me bro

    • @andynct 10 months ago +9

      Yeah. It only got 12%
      "ChatGPT obtains 12% on the Quebec Bar Exam"

    • @miickydeath12 9 months ago +1

      It's weird, because just last year ChatGPT achieved much higher scores on bar exams. It seems like ChatGPT has been dumbed down over time to prevent people from using it to cheat; you can see this when you just ask the model some math questions. I could've sworn it was way better at solving math last year.

  • @lesigh3410 1 year ago +3416

    The realization that Devin is actually sitting in a library in all his recordings and isn't just using a green screen was by far the biggest plot twist in this video
    Edit: Why are people arguing about whether it was real or edited?
    Why would he go through all that effort getting a book that looked identical to one in his green screen if that was what he was using?

    • @bertilhatt 1 year ago +320

      And he waited… Not the first, or the second time he mentions case books, but the *Third*. The storytelling in those videos…

    • @swilsonmc2 1 year ago +45

      It's a green screen.

    • @lesigh3410 1 year ago +238

      @swilsonmc he picked up the book bruh, off the bookshelf behind him

    • @swilsonmc2 1 year ago +102

      I looked at it again and you're right.

    • @lz345 1 year ago +97

      Glad I am not alone in this. I almost jumped when he pulled out the book.

  • @mundzine 1 year ago +3010

    They got off with just a $5000 fine....and the firm is still deciding whether to appeal or not. It's crazy that they knowingly fabricated cases only to get away with a slap on the wrist

    • @sillybob9689 1 year ago +139

      For real? Just $5k?

    • @mundzine 1 year ago +240

      @@sillybob9689 yup, and the judge apparently would've let it go if they came clean in the first place

    • @Steamrick 1 year ago +210

      $5k plus however much he's gonna lose from torpedoing his own career...

    • @Matt-cr4vv 1 year ago +235

      Meh, you'd be surprised about the career torpedoing. Lots of lawyers have been sanctioned and carried on fine. Almost all of those things take some deeper research that clients often don't ever do. But the judge saying he would've just moved past it had they come clean is common. The cover-up is almost always worse than the crime.

    • @jimlthor 1 year ago +62

      I think it was more to scare the hell out of them and embarrass them so they wouldn't make the same mistake of wasting everyone else's time and money

  • @krazzeeaj 1 year ago +866

    As a paralegal, this whole case got under my skin in the worst way. From the unverified citations, to the fact that he didn't know what the Federal Register is, to lying to the judge. If I did even one of the things they did on this case, I would throw myself at the mercy of my boss, because there's no way in hell I would even let him sign something that wasn't perfect, I sure as shit wouldn't file it.

    • @treebeaver3921 9 months ago +40

      I just cannot imagine the embarrassment. I mean how do you even survive the level of embarrassment from using Chat GPT to write your documents and it getting everything wrong lol

    • @ulalaFrugilega 5 months ago +4

      Maybe this Schwartz guy is an imposter?

    • @levayv 2 months ago +2

      The best part was about F.3d:
      it's not a department, it's a book

  • @detritusofseattle 1 year ago +25

    Seeing this a second time, it's even worse! I was just telling a coworker about this last night and he was blown away that a lawyer did this.
    The judge was straight up savage.

  • @therranolleo468 1 year ago +676

    Props to the judge for keeping calm while asking these clearly mental lawyers for confirmation and not just bonking them on the head with the case book they didn't know about

    • @silentdrew7636 1 year ago +82

      As a judge, you're supposed to bonk them with the gavel.

    • @AndrewBlechinger 1 year ago +45

      ​@@silentdrew7636I guess "throwing the book at them" was never literal, huh?

    • @Lodinn 1 year ago +44

      I was unsure why judges are treated with some kind of reverence in lawyer circles until I've seen/heard some of their interactions and opinions.
      They sure are very composed, tactful and professional, yet absolutely brutal when it comes to scathing remarks.

    • @warlockd 1 year ago +36

      @@Lodinn It feels like the judge was more dumbfounded than anything. I mean, the responses were so idiotic it makes you wonder how he even passed the bar.

    • @Lodinn 1 year ago +19

      @@warlockd Not sure I agree - by the time they've produced these made-up cases using ChatGPT, the damage was already done. Coming clean was probably the least dumb decision overall in that situation.
      ...granted, the F.3d moment sounds like a really, really bad knowledge gap, but IANAL. The rest didn't particularly stand out to me, they were pretty screwed by then already anyway.

  • @Superdavo0001 1 year ago +334

    One thing I love about legal drama like this is how passive-aggressive everything needs to be as it must be kept professional. A judge isn't gonna erupt on someone but if they make a motion to politely ask what you were thinking, you know you're in one heck of a mess.

    • @gavros9636 1 year ago +2

      @@cat-le1hf Ah yes the trial of Chicago seven.

    • @vylbird8014 3 months ago +5

      You should see British parliamentary debates. There are strict rules of conduct which dictate how to address people and forbid, among other things, accusing another MP of lying. Even if they are blatantly speaking utter falsehoods, it's forbidden to accuse them of it, because MPs, being the highest and most honourable of society, are surely above such things and it would be an insult to the institution to so much as suggest the possibility of deception. This has led to a lot of passive-aggressive implications. An MP can't accuse another of intentional lying, so they will instead suggest "The right honourable gentleman appears to be mistaken", giving the most respectful and formal of words while making it clear in their tone that the intended meaning is more "liar, liar, pants on fire."

  • @andreaski100 1 year ago +2412

    The most galling thing is LoDuca's refusal to take any responsibility. He blames everyone and anyone else. A competent paralegal would be an asset to this team.

    • @richardgarrett2792 1 year ago +288

      With all this public humiliation, any competent paralegal would be looking elsewhere.

    • @acat6145 1 year ago +106

      They should just hand in their bar cards; they ain't recovering from this

    • @andreaski100 1 year ago +53

      @@richardgarrett2792 you're absolutely correct! 😂 I'm sure they're insufferable to work for

    • @lesigh3410 1 year ago

      For real, as idiotic as Schwartz was, LoDuca was just completely in "it's never my fault" mode. What an arrogant idiot.

    • @markbeames7852 1 year ago +35

      sounds like a former POTUS

  • @killerzer0x74 1 year ago +28

    Meanwhile I happen to know that if this serving cart were to be pushed with such a force that it quote "incapacitated him"...the damn cart would have broken before any actual harm was done

  • @nrs_207 1 year ago +1126

    Taking the bar exam next month, this either makes me more confident that I should pass bc they did; or if I don’t, I’m going to cry bc they did

    • @yqyqyq1 1 year ago +62

      all the best ❤

    • @maryhales4595 1 year ago +40

      Good luck!!!

    • @09jcoc 1 year ago +34

      good luck with your exam! if these idiots can pass, you’ve got this!!

    • @somedragonbastard 1 year ago +16

      Good luck on your exam!!

    • @ombricshalazar3869 1 year ago

      judging by these idiots i'd say the *bar* is pretty low

  • @myself248 1 year ago +361

    I would love to have been a fly on the wall in Avianca's lawyers' office when they were first searching for the bogus cases and coming up empty-handed. Did they immediately recognize that it was all bunk, or did they second-guess themselves? How long until they floated the idea that opposing counsel simply made it all up? Did they hesitate to file a response calling the bluff?
    I want an interview with those folks!

    • @the_undead 1 year ago +58

      I honestly wouldn't be surprised if it was actually the judge who realized this first, because the judge would also need to have read those cases to make sure he fully understood the argument being made. Then none of the clerks were able to find any case mentioned by these attorneys, and the judge was probably like: hmm, one clerk struggling to find a particular case is abnormal, but five clerks struggling to find any case is very unlikely; I wonder if these are even real. And from there, he just went and destroyed the careers of these attorneys.

    • @SuperSimputer 1 year ago +50

      I listened to the podcast this video mentioned, and they were joking about feeling bad for whatever first-year doing the grunt work had to tell a senior partner they couldn't find six cases. That fly on the wall would've been getting an earful.

    • @warlockd 1 year ago +21

      ​@@SuperSimputerI want to know what that extra week "being on vacation" would have bought them. It makes me wonder how often they used that excuse on other court cases.

    • @themorebeer3072 1 year ago +12

      From the discussion on this by Leonard French (another YouTube legal educator), any lawyer reading the citations would very quickly realize they're bogus before even searching them out. Several of the citations don't even match the format used in legal cases, and an experienced lawyer should know this at a glance. The judge would not have needed to be the first one to spot this, and chances are the defense lawyers only searched out the citations to give themselves a better chance of the lawsuit being thrown out and themselves awarded fees and costs. It's hard to imagine them having to do any research into the cited cases before realizing something's screwy.

  • @jodi_kreiner
    @jodi_kreiner Рік тому +323

    As an engineer, "if your name is on it, you're responsible for it" is a HUGE concept. There's a lot of red tape in working for companies who deal with government contracts, and a lot of specific record-keeping programs you have to use. It's important for process cycle tracking, but if you're actually on the development/build side, it can seem pretty tedious. Typically you need to be trained on this software, so it isn't uncommon for only one or two people on your team to actually have the authorization to use it. Instead of training everyone else, that person's name is just put down as the RE (responsible engineer), and then they're the one who has to sign off on it. For my current program, that ends up being me a lot of the time.
    In most cases, it isn't a problem to just go in and sign off on something, seeing as there's an entire team of people who need to approve before it gets to you. But there's always the chance that everyone in the upline has that same perspective, and my failure to thoroughly review a document before signing off could make or break a multimillion-dollar defense contract. And even if it wasn't my design, so any failures weren't technically my fault, guess what? If my name is on it, I'm the one who has to deal with the fallout.
    The abundance of approvals and review stages may seem overbearing and unnecessary at times, but that's how we avoid catastrophic engineering disasters like we've seen so many times before. Those checks and balances are there for a reason, and if your name is on it, you BETTER have taken the time to complete your check!!

    • @supersonic7605
      @supersonic7605 Рік тому +49

      Computer engineer here. It is very smart for you to assume that a screw-up could still slip through the cracks, because it absolutely can. I know because I was once responsible for one. Back when I had just moved up to lead developer, a piece of software my team developed and tested hard-crashed while we were demoing it to management. As it turns out, one of the new guys submitted the component he worked on without verifying that it worked. Since I was new to leading a dev team, I unfortunately just assumed that he had verified it, so we went ahead and put it together with the rest of the software, and it passed our tests. That component dealt with installing the software, so when we tried to demo it to management on a computer that used a different OS, it wasn't properly installed.
      I got in A LOT of trouble for this (I got yelled at by everyone in management) because they had planned official deadlines after I stated in an official document that the software was ready to demonstrate to management, when it clearly wasn't, which meant they had to further delay a multimillion-dollar asset. This gave me the worst job-related scare of my life, because they said that they had grounds to not just demote me but to "let me go" (their words) because of the amount of money involved. I assume their superiors expressed to them how "unhappy" they were about the delay. Thankfully, I only got a warning because the problem was fixed quickly, but since then I've been too paranoid not to make sure that every word I write in official documents is 100% confirmed as true beyond a reasonable doubt. So it blows my mind how these lawyers did every single little thing you could do to do the complete opposite.

    • @ezioauditore7636
      @ezioauditore7636 Рік тому +5

      I think legally it's (usually) the fault of the company rather than the individual. Or at least based on the cases I've heard. The reasoning being that the company processes should've caught it in the first place, and so they're equally liable.

    • @EndoftheBeginning17
      @EndoftheBeginning17 Рік тому +6

      @@supersonic7605 I am assuming, if only because the one lawyer asked if it was lying, that these lawyers didn't understand what a GPT model is. I think they assumed it was an ACTUAL artificial intelligence, an artificial mind that could actually think on its own and not just generate answers from its input.
      I think, given that none of these lawyers did any actual lawyering, they thought that the GPT could do all of their research for them because it would collect data from various sources, read it, understand it, and synthesize a legal document for them.
      The law firm itself, at the very least, should have terminated these guys, just for the sheer embarrassment. This has certainly cost that law firm millions in revenue. They should also be disbarred for failing to actually act as lawyers. I wonder if the judge actually imposed a sanction on the lawyers as well. Hopefully they have to pay all the legal fees out of pocket for everyone involved and not take any pay, and perhaps get disbarred or something.

  • @AboveTheHeavens
    @AboveTheHeavens Рік тому +15

    While there were several miscalculations, I think the worst is the different font. I'm no stranger to the copy-paste method when turning in assignments, but for a federal judge? How could you forget Ctrl+Shift+V?

  • @Tyrim
    @Tyrim Рік тому +529

    I am a mechanical engineer, and ran into this situation recently. I was trying to use ChatGPT to shorten my initial research into a topic, and it gave me the equations, everything. But since they were sloppy and missing pieces, I asked it to give me the sources for these equations so I could go to the original articles and collect the missing parts. Oh boy, I was in for a big surprise. It just kept apologizing and making up new article titles, authors, even DOIs. It was eye-opening to say the least.

    • @shahmirzahid9551
      @shahmirzahid9551 Рік тому +19

      As a fellow ML engineer, I am surprised you are relying on the chatbot for anything related to research. It may help shorten and make pre-existing concepts more concise, but it is merely a tool for research, not the spearhead of said research.

    • @Tyrim
      @Tyrim Рік тому +73

      @@shahmirzahid9551 well, "relying" is a bit misleading of a term. It was a low-priority topic which I was going to take on based on whether it was feasible to do in a short timeline, and I decided to try out ChatGPT on an "if it works, it works" basis. It didn't work, and I haven't used it for this purpose since.

    • @Videogamer-555
      @Videogamer-555 Рік тому +1

      What is a DOI?

    • @Uhohlisa
      @Uhohlisa Рік тому +20

      ChatGPT is NOT a search engine!! You cannot use it as such

    • @shahmirzahid9551
      @shahmirzahid9551 Рік тому +3

      @@Tyrim ah, I see. I did the same for some calculus theory study, but I just engineered a prompt for it to give a detailed explanation of things, and it works like a charm. I had my doubts too, but yeah, I still wouldn't blindly believe everything it said, as it could be outdated or completely wrong.

  • @jakehallam2113
    @jakehallam2113 Рік тому +381

    I love the fact that even some lawyers can't be fussed with reading the Terms of Service for websites. They should have realised that this could happen when even the TOS states, under Section 3 (Content):
    "use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts."

    • @shai5651
      @shai5651 Рік тому +53

      I mean, "we are unreliable" is practically the motto of ChatGPT 3

    • @6023barath
      @6023barath Рік тому +52

      Lol they don't even need the Terms of Service. ChatGPT itself tells them point-blank that it can be wrong on the main screen!

    • @Twisted_Code
      @Twisted_Code Рік тому +2

      It is for this reason, and others, that I am reluctant to take any TOS, EULA, or other routine contract seriously unless I am either given a summary of the terms, somewhere, or a reasonable ability to contact the lawyers that drew it up (so I can get clarification). I still tend to read as much as I can of them, particularly if it's a completely new relation, but I'm only one non-lawyer human, and I don't have a team of lawyers to translate for me. Expecting more than my best effort to understand is a little bit unreasonable.

    • @Twisted_Code
      @Twisted_Code Рік тому +23

      @@6023barath "May occasionally generate incorrect information. May occasionally produce harmful instructions or biased content. Limited knowledge of world and events after 2021". Any one of these should have been enough for them to reconsider using it as a source, but all three? It wasn't correct, it was biased toward their biased questions, and it wasn't up to date.

    • @tiffm3110
      @tiffm3110 Рік тому +2

      Even my 5th grader used ChatGPT to help with a presentation, and she spent several hours fact-checking each statement before including it in her PowerPoint

  • @stischer47
    @stischer47 Рік тому +532

    I must admit that when the lawyer admitted, under oath, that he lied to the judge about going on vacation, I had to get up and walk around I was so stunned. Lying to a Federal judge? Sheesh! How did that lawyer ever pass the bar?

    • @Patrick-vv3ig
      @Patrick-vv3ig Рік тому +45

      Because the US system allows pay to win for literally everything

    • @CatOnACell
      @CatOnACell Рік тому +67

      Also, humans can know the information contained in an ethics class and answer questions based around it. Without actually understanding or agreeing with the information.

    • @darwinfinche9959
      @darwinfinche9959 Рік тому +25

      Passing the bar has nothing to do with practicing law

    • @Sorcerers_Apprentice
      @Sorcerers_Apprentice Рік тому +42

      Passing the bar shows you know how to write a really hard test. That's kind of a separate skillset from learning how to navigate court without angering a judge.

    • @transsnack
      @transsnack Рік тому +5

      There's always the people who pass at the bottom of their class.

  • @PilotAdventurer
    @PilotAdventurer Рік тому +4

    I feel like the jury every time I watch your videos. I know absolutely nothing about law databases, but now I've got a basic understanding.
    Takes me back to my only jury duty service.

  • @BadTakesMMA
    @BadTakesMMA Рік тому +178

    The best way I have heard ChatGPT described is "ChatGPT knows what a correct answer looks like." At a surface level, it looks like a legitimate answer until you dive into the details, as in this case.

    • @cmmosher8035
      @cmmosher8035 Рік тому +1

      My understanding is ChatGPT will give you the answer YOU are looking for. That's what it did for these guys.

  • @nickybcrazy97
    @nickybcrazy97 Рік тому +296

    I have some minor sympathy for the lawyer claiming he thought ChatGPT was a search engine, given all the hubbub and publicity about Google and Microsoft introducing so-called "AI search engines" a while ago. But the fact that he simply did not check *any* of the information provided is absolutely mind-boggling. He didn't even understand what the citations meant! It seems likely to me that he's been merrily citing cases without reading them for years, and this is just how he got caught. What a mess.

    • @KindredBrujah
      @KindredBrujah Рік тому +20

      By the sounds of the description Devin gave, Mr Schwartz was not admitted to practice in federal court, hence getting Mr LoDuca to file on his behalf. It is plausible (though given he's apparently practiced law for 30 years, something of a stretch to believe) that he simply wasn't aware of the federal nomenclature.

    • @KayDizzelVids
      @KayDizzelVids Рік тому +20

      I have none for those lawyers. They should have checked to see if the cases were real if they couldn’t find what they were looking for in other places. I got a lot of sympathy for the guy who hired these morons though.

    • @Jehty_
      @Jehty_ Рік тому +3

      @@KayDizzelVids you have sympathy for a guy suing an airline three (!!) years after he got bonked with a serving cart? Really?

    • @Lodinn
      @Lodinn Рік тому +3

      @@KindredBrujah Maybe his law practice never really extended much into the courts, and he was perma-stuck in the ghostwriter position, signing papers for the firm and the like?

    • @Wertercat
      @Wertercat Рік тому +13

      @@Jehty_ Even the dumbest parties deserve proper legal counsel. A better lawyer would have told him not to bother.

  • @jooleebilly
    @jooleebilly Рік тому +228

    After working for the Sacramento County Superior Court of California, it's crazy that attorneys would try to lie to a Judge. Judges are like gods of their court. NEVER mess with them. They're smart enough to figure it out. They started out as attorneys themselves. I got this from nine months of working as an IT specialist for the Court. Judges can be very nice people, but don't try to mess with them. They are not amused by legal shenanigans.
    I even overheard one Judge in chambers who was speaking with a woman suing due to being injured in a car crash. He actually went out of his way to tell her that "he didn't want to speak ill of her attorneys, but it seems to me that your settlement should be far higher based on the photographs of your injuries. This is not legal advice, so if I were you, I'd consider making sure your attorneys have these pictures and are taking them into consideration." Okay, I'm paraphrasing, but he was oh-so-slyly suggesting that this woman get better lawyers. He was also one of the smartest, no-nonsense Judges I'd ever met. And he didn't suffer fools gladly. But the fact that he went out of his way to help this woman was incredibly good of him. Considering how short he could be, for example, when his computer wasn't working the way he expected, I was surprised to find out how generous and gentle he was with helping plaintiffs out.

    • @cparks1000000
      @cparks1000000 Рік тому +3

      It sounds unethical to me that the judge offered such "not legal advice".

    • @terryjones573
      @terryjones573 Рік тому +7

      @Jack You’re absolutely right. I wouldn’t say it’s “usual” at all for judges to be attorneys first. On the other hand, he was a federal appointment.
      Upon wiki-ing him, he did practice privately in NYC for 26 years.

    • @sempressfi
      @sempressfi Рік тому +4

      This is what I'm most concerned about with our judicial system given the political climate and the way judges were selected in the last administration. Judges are human and fallible, yes, but generally speaking the system has honed itself so that most judges are like vigilant guards watching over those symbolic scales. Sometimes it's out of personal interest that they are VERY not okay with someone/a group tipping those scales whether through bias, incompetence, ideology, etc and sometimes it's genuinely caring and taking their role in democracy seriously but whatever the motivation it plays a critical part in our lives.
      Hoping that at least now many more people recognize how important this branch of government is

    • @amicaaranearum
      @amicaaranearum Рік тому +4

      The first rule of practicing law is “don’t piss off the judge that is hearing your case.”

    • @artsyscrub3226
      @artsyscrub3226 Рік тому

      ​@@cparks1000000
      If it's unethical to tell someone they deserve more money for their injuries than what their hack lawyers are trying to get them, then I don't want an ethical judge who will let me get screwed over.

  • @meredithlucas7156
    @meredithlucas7156 9 місяців тому +19

    This is how one of those "He never went to law school but he's practicing law like a pro" TV shows would actually go

  • @scottywan82
    @scottywan82 Рік тому +464

    As an accountant, this video caused me physical pain. This sounds like a literal nightmare anyone in a legal or finance profession could have. I am genuinely surprised neither of these men broke down sobbing on the stand.

    • @lilymarinovic1644
      @lilymarinovic1644 Рік тому +19

      Who says they didn't?

    • @hawkeye5955
      @hawkeye5955 Рік тому +33

      I imagine it's not any better when the entire legal community is pointing and saying "Ha ha!"

    • @TheGreatSquark
      @TheGreatSquark Рік тому +26

      *shudder* dealing with a client's lousy OCR system is bad enough. I cannot imagine the disaster that would ensue if someone let a generative AI near financial records or reports.

    • @katrinabryce
      @katrinabryce Рік тому +3

      @@TheGreatSquark You will likely see it first in the investment side of things.

    • @storage9578
      @storage9578 Рік тому +21

      @@TheGreatSquark I imagine "the AI made a mistake" could be a nice excuse for fabricating numbers. At least I'd expect less trouble than "yeah, we lied to mislead investors".

  • @lauragiletti
    @lauragiletti Рік тому +194

    I can only imagine the shock, laughter and amazement in the offices of the defending lawyers and in the judge's chambers. Laughter, and also a good portion of anger.

    • @NineSun001
      @NineSun001 Рік тому +25

      I can't imagine the faces of the defending lawyers after they actually realized wtf just happened. Before that, they must've been confused to hell and back again.
      I would've paid to see that ass-whooping in the court.

    • @clyne8835
      @clyne8835 Рік тому +2

      They were popping open champagne realising the case was gonna be thrown out in no time

  • @seantlewis376
    @seantlewis376 Рік тому +381

    I want to see a follow-up to this story. For 14 years, I worked in the IT department of a prominent law firm. How these attorneys are not disbarred already is beyond me. As with most professions, attorneys are very protective of their profession, and get upset with people who disgrace it. Rightfully so. I feel the same way when I hear about a dishonest IT person. I have been hired by lawyers to investigate a situation with an unscrupulous network administrator, for example. I was happy to do the work, and delighted to see the person destroyed in civil court.

    • @tailsofpearls
      @tailsofpearls Рік тому +29

      They will absolutely be disbarred, it's just been two days since the last update on the case.

    • @Roccondil
      @Roccondil Рік тому +7

      It would also be interesting to see an analysis of their case history as well...

    • @KanuckStreams
      @KanuckStreams Рік тому +6

      @@Roccondil Yeah, if I were a judge or legal body, this bombshell would make me want to shine a very bright light on their prior cases, into every single uncomfortable hole, to see whether this was a one-off idiocy or in fact just the first time they palmed the lying off onto an AI rather than hand-crafting the lies themselves.

    • @Cariol
      @Cariol Рік тому +1

      Search Mr. Liebowitz, a "copyright attorney" - sometimes it takes an unbelievable amount of wrongdoing to get disbarred

    • @the_undead
      @the_undead Рік тому

      "Ah yes let's screw over some lawyers, that sounds like a great idea"

  • @VeracityLH
    @VeracityLH 3 місяці тому +3

    This also reminds me of my work as a medical transcriptionist. When voice recognition programs came out, one of my doctors went to a convention where the software was introduced. He came back and gleefully told me it would put me out of work. (He wasn't malicious. He knew I was knowledgeable about computers and wanted my opinion.) I told him it would never happen because the software required editing by the user, especially in the beginning while the software adapted to the user's accent and use of language. I said doctors either would not or could not take the time for proper proofreading. And they still don't use it.
    New software sounds magical until you read the fine print.

  • @toadeightyfive
    @toadeightyfive Рік тому +270

    I went to check for myself what 925 F.3d 1339 actually was; it's a page within a decision by the U.S. Court of Appeals for the D.C. Circuit (the full case actually starts on page 1291) called J.D. v. Azar, one that had to do with the constitutionality of a Trump-era restriction preventing immigrant minors in government detention from obtaining abortion services. It was actually kinda interesting to skim through, if completely irrelevant to airline law.

    • @MekamiEye
      @MekamiEye Рік тому +27

      thank you for looking it up and sharing a quick summary with us! Was curious to see if someone looked it up or not.

    • @Native_Creation
      @Native_Creation Рік тому +5

      It may be relevant when these minors are transported via chartered airlines. Human trafficking itself is a major issue that airlines look out for, so there seems to be relevance.

    • @phineas81707
      @phineas81707 Рік тому +9

      The fact it's not actually a real case, just a page in a case starting from an earlier page, helps explain why a cursory glance didn't raise the red flags you get when you actually read the page in front of you.

    • @webbowser8834
      @webbowser8834 Рік тому +7

      Tbf the biggest surprise to me is that it is indeed a valid citation, and not some hilariously out-of-bounds non-existent thing.

    • @TEverettReynolds
      @TEverettReynolds Рік тому

      Has anybody offered an explanation of WHY ChatGPT gave the false reference and was so adamant that it was a real source? Could ChatGPT be pulling from a fake law source itself? Did the programmers do this on purpose? I use ChatGPT regularly for work, and while not perfect, it's about 80% accurate in the IT space. So why would it be so far off in the legal space? It has been used successfully in the academic space too, to the point that some teachers and professors can't tell a real paper and a ChatGPT paper apart.

  • @willegan1823
    @willegan1823 Рік тому +599

    Update: Judge Castel dismissed the case due to the statute of limitations issue and fined LoDuca, Schwartz, and their law firm $5000 each. They’re very lucky to have gotten off that lightly.

    • @panda4247
      @panda4247 Рік тому +58

      Wtf. I'd expect disbarment plus a large fine.
      Plus Mr. Mata suing them for mishandling his case.
      Plus an investigation into the law firm, and how their processes are written and adhered to.
      I would expect a reasonable law firm to have standards of conduct that specify which tools to use for case searches, or whatever

    • @sambmortimer
      @sambmortimer Рік тому +69

      They definitely should have gotten slapped much harder, but on the plus side, they can't hide from this, and will never be taken seriously as lawyers ever again.

    • @nurlindafsihotang49
      @nurlindafsihotang49 Рік тому +46

      I think Judge Castel *saw* that they had already gotten their careers destroyed, and figured that was punishment enough.
      If an attorney from a wee country in Southeast Asia has already heard about the mayhem of their blunders, oh boy... they and their firm are toast.

    • @brawler5760
      @brawler5760 Рік тому +30

      Tbf the law community is clowning on LoDuca and Schwartz, so it’s safe to say that their careers have been ruined.

    • @alex_zetsu
      @alex_zetsu Рік тому +17

      Not quite: he wrote an angry letter to the bar (he can't actually disbar them himself; the bar is the one that does that), so while that's all they are being punished with by the legal system, the bar association might suspend them for a few years.

  • @mads_in_zero
    @mads_in_zero Рік тому +349

    I feel like describing language AI models like chatGPT as having "hallucinations" where they "make stuff up sometimes" is far too generous to what they actually do. These chatbots don't know what's true and what's false, they don't actually _know_ anything. They're _always_ making stuff up - guessing what sequence of words is probable in response to any given input - and it's more accurate to say that they get things _right_ sometimes.
    Chatbots will confidently lie to you, but actually calling it a "lie" is a mistake, because lying requires knowing you're spreading a mistruth, which they simply don't. Because they don't "know" things the way we do. That predictive text output gets to be called "AI" is a huge framing mistake that only makes people misunderstand and anthropomorphise these things.
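
    To make that point concrete, here is a minimal, purely illustrative sketch of the same idea: a toy bigram "language model" that only learns which word tends to follow which in its training text, with no notion of truth at all. The two training sentences and the sample output are made up for the example; real models like GPT are vastly larger and work on tokens rather than whole words, but the underlying principle of "predict a plausible next word" is the same.

        import random
        from collections import defaultdict

        # Toy "language model": learn which word tends to follow which (bigrams).
        # It has no concept of facts -- only of word sequences it has seen.
        training_text = (
            "the court held that the airline was liable "
            "the court held that the claim was time-barred"
        )

        bigrams = defaultdict(list)
        words = training_text.split()
        for current_word, next_word in zip(words, words[1:]):
            bigrams[current_word].append(next_word)

        def generate(start_word, length=8):
            """Generate text by repeatedly picking a plausible next word."""
            output = [start_word]
            for _ in range(length):
                candidates = bigrams.get(output[-1])
                if not candidates:
                    break
                # pick any word that has followed the current word before
                output.append(random.choice(candidates))
            return " ".join(output)

        print(generate("the"))
        # Might print either training sentence, or a blend of the two that was
        # never in the training text and is not a fact -- just a statistically
        # plausible word sequence.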

    • @suntiger745
      @suntiger745 Рік тому +35

      Good point. Spelling out what the GPT actually stands for gives a much clearer picture of what it is and isn’t. But hey, news articles have to get those clicks, and AI news is hot stuff…

    • @lordhoden
      @lordhoden Рік тому +33

      "At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving." IBMs definition for artificial intelligence.
      ChatGPT relies on a robust dataset to solve problems, using GPUs. I'd say it's an AI. So I don't think calling it AI is a framing mistake. People just don't know the definition of AI and assume AI means human intelligence produced by a computer. This, it very clearly isn't.

    • @Jhfisibejoso8pkabrvo2is8
      @Jhfisibejoso8pkabrvo2is8 Рік тому +2

      @@lordhoden Yep

    • @Linkfan001
      @Linkfan001 Рік тому +12

      Exactly! We are all far too ready to cede our intelligence and lives to these mechanized marionettes, and most don't have the first clue what they do or how they work.
      These robots cannot and should not ever be trusted. They don't understand context, nuance, intent, or even the most basic concepts like "just" or "true". We should all agree not to entertain the notion that these things are anything more than mere tools, and to leave them to the scientists and model makers, not to language, poems, art, law, history, etc.

    • @michaelkenner3289
      @michaelkenner3289 Рік тому +9

      So technically it's the human user that is hallucinating.
      Honestly, I do think at least a small proportion of AI's abilities says less about computing and more about psychology. The computer isn't good at making an answer; we're good at interpreting the answer to apply to the situation.

  • @jaydavis4764
    @jaydavis4764 11 місяців тому +442

    "That's not how humans, let alone lawyers, talk."
    I love the implication that lawyers may not, in fact, be humans.

    • @TheLewistownTrainspotter8102
      @TheLewistownTrainspotter8102 11 місяців тому +17

      It's true. The difference between lawyers and humans is in their blood. Most lawyers' blood is laced with increased intelligence.

    • @westein1282
      @westein1282 10 місяців тому +7

      Well they aren't lawyers either

    • @grmpf
      @grmpf 10 місяців тому +47

      That's not how the expression "let alone" works.

    • @micahwright5901
      @micahwright5901 10 місяців тому

      @@grmpf it can grammatically work in both scenarios depending on the context
      “That’s not how a dog- let alone a person- would react”
      I’m actually not fully convinced I’m correct here, but it seems it can be used to contrast subjects as I see it currently. Feel free to set me straight or if I’m right agree 🫡

    • @Cinnaschticks
      @Cinnaschticks 10 місяців тому +14

      That's not what that means, but it would be a funny comment if it was.

  • @Twisted_Code
    @Twisted_Code Рік тому +911

    I love the term "unprecedented circumstance" at 14:46. It sounds very professional, but has a very clear hint, in this context, at how utterly insane the judge must think the plaintiff is for citing something he couldn't have read.

    • @housellama
      @housellama Рік тому +102

      Oh my god, by the end of that trainwreck the judge must have been utterly BAFFLED at how this whole thing went. He was beyond furious. That court transcript was rough.

    • @devinward461
      @devinward461 Рік тому +6

      It's such a powerful phrase

    • @Matt-cr4vv
      @Matt-cr4vv Рік тому

      In the end they actually got off pretty easy. They were fined $5,000 which is much lighter than it could have been. But the judge would absolutely be pissed because it wasn’t just one small issue that was quickly corrected and noted as being an error. Before the case was even brought the attorneys should have done research regarding the SOL issue and at minimum had an argument for it. But they didn’t and these fake cases were brought only after the opposition noted the SOL had run. But the cases being fake was brought up months before it even got to the judge by the defense team and the plaintiffs kept their ground on it. The judge was less pissed about the cases at first as much as he was pissed that it continued on for months and that so many steps to prevent this were ignored. But even so through all of it they still weren’t punished too badly (as of now). Will be interesting to see if the state bar steps in and what they do if anything. The malpractice and incompetence of the cases at the start was an issue but not immediately correcting it and carrying out the ruse for a bit is more of an issue.

    • @rilasolo113
      @rilasolo113 Рік тому +23

      Not particularly. It literally means unprecedented, i.e. there is no legal precedent, because this is the first time this particular legal problem has been encountered in a court.
      Precedent is what oils the courts. When there isn’t any is when courtrooms get exciting.

    • @tymondabrowski12
      @tymondabrowski12 Рік тому +10

      @@rilasolo113 How is there no precedent for making stuff up tho? They can't be the first people who submitted documents filled with nonsense, even if there was no ChatGPT before.

  • @Asethet
    @Asethet Рік тому +215

    I remember the actual Zicherman v. Korean Airlines case, it was 1996 not 2008 like ChatGPT cited. A Korean Airlines flight entered Soviet airspace in 1983 and was shot down killing all 269 on board. It's a poor case to cite even if they'd gotten the citation correct and would have only hurt their case.

    • @madeniquevanwyk
      @madeniquevanwyk Рік тому +18

      Jeez that's rough. Those poor people. Their last few moments must have been spent terrified and angry...

    • @jan_Masewin
      @jan_Masewin Рік тому +27

      And they were citing that to make an argument about someone's knee injury... 🤦‍♀

    • @andrewli8900
      @andrewli8900 Рік тому +4

      Didn't the Soviet Union dissolve in 1991? Was it still considered Soviet airspace back then?

    • @Asethet
      @Asethet Рік тому +27

      @@andrewli8900 the shoot-down was in 1983; the court case against the airline happened in 1996, which would be why ChatGPT chose to reference it. It was more than 2 years after the event, but the 2-year limit doesn't apply to willful misconduct. That's why it was a terrible case to cite: it didn't apply in the current case and would have only served to further support the airline's position.

  • @MrLegendra
    @MrLegendra Рік тому +619

    Lol… as a medical student, the amount of confidence ChatGPT has while explaining disease pathologies completely wrong is concerning. It does a good job of coming up with an answer that sounds right but isn't.

    • @sponge1234ify
      @sponge1234ify Рік тому +76

      That's because, technically, it is. There's a reason that most actual AI researchers call it a Language Model and not an AI, because that's all it is.
      It knows the language of law books, or the language of medical opinions. It does not have the facts, let alone up-to-date ones.

    • @neruneri
      @neruneri Рік тому

      Yes actually, blaming that is completely fine, because the simple fact that it is capable of lying to you with confidence means that its work is literally useless. In your own last example, the scope of what you're suggesting it should be used for doesn't even make sense. You would get the AI to do 5 minutes worth of work, just so that you can spend 50 minutes fact-checking it. Do it right the first time instead lmao @@A-wy5zm

    • @KelMonstah
      @KelMonstah Рік тому +38

      Someone on the Gardening subreddit recently used ChatGPT to try and answer someone's question about pet-friendly plants and was SO CONFIDENTLY WRONG the mods actually had to step in, because the advice from ChatGPT could've literally killed this guy's pets. I had to go on a rant about Language Model hallucinations and the demonstrably failing accuracy of the output from these systems.
      It's really validating when the mods leave your factually correct, even if angry and spitefully written, comments and delete the moron's 😅

    • @bobthegamingtaco6073
      @bobthegamingtaco6073 9 місяців тому

      In essence, chatGPT is your drunk uncle. It hears half of what you said, and spins off a long story based on something a friend told it 20 years ago with "facts" sprinkled in to support the argument it wants to make

    • @anetkajerabkova19
      @anetkajerabkova19 9 місяців тому +1

      It's become a bit of a meme in the crochet community to ask chat gpt to write a pattern (usually a plushie because they're small and quick) and laugh at the mess it produces. It only looks like a pattern if you've never seen a pattern before and think crochet is done with needles.

  • @pryvisee
    @pryvisee 2 місяці тому +3

    25:38 when you reach behind you, it BLEW MY MIND. I thought it was a green screen for the LONGEST time. 🤣

  • @_somerandomguyontheinternet_
    @_somerandomguyontheinternet_ Рік тому +678

    Me, seeing a Legal Eagle video: An analysis of the Trump indictment already?
    Me, watching the Legal Eagle video: *Never mind this is so much better.*

    • @Thund3rDrag0n12
      @Thund3rDrag0n12 Рік тому +33

      He 100% should cover the Trump stuff, but it's nice he sprinkles in these sillier stories between them

    • @josephrion3514
      @josephrion3514 Рік тому +14

      I agree. He will get it done he's just taking time to get it right.

    • @Firgof
      @Firgof Рік тому +5

      I wouldn't be surprised if it's already up on Devin's Nebula. He often says that his videos go up first there and there's a delay before they go down to YouTube

    • @bertilhatt
      @bertilhatt Рік тому +10

      Imagine you had to film this, and you're barely done reviewing the edits, when the Trump thing comes out…
      Wouldn't you just have a spa day before swimming in the… what's the German word again?

    • @spoopyvirgil4944
      @spoopyvirgil4944 Рік тому +3

      @@bertilhatt Schadenfreude?

  • @johnbockman6078
    @johnbockman6078 Рік тому +670

    As a retired writing teacher, I took a great interest in this case because this is EXACTLY the sort of BS I had to put up with when it came to lazy students. And when after 17:00 the Schwartz affidavit admitted that the work was done in consultation with ChatGPT, I thought, Lord have mercy, did they really think ChatGPT could do their research for them?! Mind. Totally. Boggled.

    • @anarchy_79
      @anarchy_79 Рік тому +61

      Hahaha faking citations in high school papers, I had that shit down to a SCIENCE. Kids these days, they can just ask their fancy robot to lie for them. When I was their age, I had to walk uphill both ways to come up with believable lies!

    • @rafflesiadeathcscent3507
      @rafflesiadeathcscent3507 Рік тому +12

      @@anarchy_79 hey boomer stop calling us out, I had my ChatGPT do my homework just fine, copy-paste here and there and boom, the hours-long homework was done in under 30 minutes, the future is now old man

    • @commandrogyne
      @commandrogyne Рік тому +54

      @@rafflesiadeathcscent3507 good luck keeping up with that lol

    • @milanek1527
      @milanek1527 Рік тому +4

      @@commandrogyne nah bro it does work. I don't use it for homework cuz it's easy, but for some presentations I ask it to create a sample that I then edit into my own style. Basically I don't waste a lot of time just researching some facts.

  • @Jack_Stones
    @Jack_Stones Рік тому +407

    I think the most eye opening thing in this whole video, is discovering that the book shelves are actually real, and not just a green screen lol

    • @minisnakali
      @minisnakali Рік тому +2

      Ong

    • @jamiefrontiera1671
      @jamiefrontiera1671 Рік тому +9

      same

    • @barryfraser831
      @barryfraser831 Рік тому +19

      I didn't even notice. I just assumed he had it as a prop ready for this moment.

    • @jeffkiska
      @jeffkiska Рік тому +9

      Came here hoping to see that I wasn't the only one who thought this!

    • @emilyrln
      @emilyrln Рік тому +1

      Same 😂

  • @Meili-q9x
    @Meili-q9x Рік тому +7

    I was so pleasantly surprised to find 80,000 Hours sponsoring this channel! It's a great resource and all free, and I have genuinely been telling my fellow young and lost graduates to get on it

  • @keilanl1784
    @keilanl1784 Рік тому +467

    Never knew how easy it was to pull all federal court cases in their entirety. I guess that space librarian was right when she said "If it's not in the archives, it doesn't exist."

    • @jasonbell8515
      @jasonbell8515 Рік тому +56

      And then these bozos suggest that the archives are incomplete. What is this, some sort of space opera prequel?

    • @trianglemoebius
      @trianglemoebius Рік тому +36

      Well, except that the context of that scene was that the existence of a planet (which is what was being searched for) HAD been intentionally removed from their database as part of an intergalactic conspiracy. So, despite not being in their databases, Kamino DID exist.

    • @lunaticpathos
      @lunaticpathos Рік тому +15

      If it wasn't, then it would be impossible to defend yourself in court which would be a gross violation of our rights. Granted, you really need a lawyer to do it for you, but it is at least theoretically possible.

    • @IgnatiaWildsmith1227
      @IgnatiaWildsmith1227 Рік тому +10

      madame jocasta nu

    • @LesleyMVA
      @LesleyMVA Рік тому +12

      I thought it did exist but it was removed from the archives making it appear to never have existed. I could be wrong I forget things sometimes.

  • @PlasticBuddha88
    @PlasticBuddha88 Рік тому +198

    My dad was a litigator. He stopped being a litigator in the mid 90’s. I was able to find one of his cases from the mid 80’s entirely by accident using a basic Google search of his name once. Wow, these lawyers are stupid.

  • @lunarumbreon7699
    @lunarumbreon7699 Рік тому +928

    I love that we have legal documents with the term “bogus” in them

    • @shai5651
      @shai5651 Рік тому +61

      It's not as uncommon as you might think.

    • @shai5651
      @shai5651 Рік тому +60

      Legal Gibberish was the new low.

    • @DMSunderland
      @DMSunderland Рік тому +73

      "We would like it entered into the record that we're straight up not having a good time, your honor"

    • @logicisuseful
      @logicisuseful Рік тому +3

      Not the worst I’ve seen. Not even close.

    • @princesssookeh
      @princesssookeh Рік тому +23

      Why not. It was a legit term long before it got adopted as slang.

  • @NishaPerson
    @NishaPerson Рік тому +7

    Listening to the judge just absolutely grilling the lawyers is possibly the funniest thing I've ever heard.

  • @RusselSprouts1
    @RusselSprouts1 Рік тому +928

    When he reached back and grabbed a book, I gasped. I always assumed the background was a green screen. I’m sorry for selling you short, Devin! Your content is great!

    • @ShabeRaven
      @ShabeRaven Рік тому +51

      1000% same.

    • @GLUBSCHI
      @GLUBSCHI Рік тому +47

      It looked too good to be a green screen

    • @msguineapigsrus
      @msguineapigsrus Рік тому +20

      i cant believe those are real books XD

    • @temi19
      @temi19 Рік тому +44

      It actually is still a green screen, but he had the book available within arm's reach. You can tell by how he reaches for the book and how it's angled when he pulls it out, as well as by the off lighting on the background compared to his face.

    • @GLUBSCHI
      @GLUBSCHI Рік тому +7

      @@temi19 i don't think so, but i haven't watched the part where he takes the book because i just skimmed through, do you have the timestamp

  • @Torgo224
    @Torgo224 Рік тому +148

    Always read the caselaw cited at you in a brief, my friends. In many cases, when responding to motions, I discovered the authority being cited at me said the exact opposite of what opposing counsel was using it for. Nothing is more satisfying than going into a hearing and throwing opposing counsel's caselaw back at them.

    • @nurlindafsihotang49
      @nurlindafsihotang49 Рік тому +7

      Hear, hear. Not only opposing counsel; district attorneys are often guilty of this too. (Even some judges, but you did not hear this from me.)

  • @MariaVosa
    @MariaVosa Рік тому +765

    This case will be cited in every Law School from now until the Terminators rise to annihilate us.

    • @scottywan82
      @scottywan82 Рік тому +16

      As it should be.

    • @theomegajuice8660
      @theomegajuice8660 Рік тому +41

      So at least a year or two then

    • @marcusaaronliaogo9158
      @marcusaaronliaogo9158 Рік тому +14

      Tbf, the chatbots are not sentient, and don't even show signs of it.

    • @writer4life724
      @writer4life724 Рік тому +53

      Oh, it's funnier than that! I'm in the education field, and there's talk of using this case as Exhibit A for doing your own research and actually reading/citing your sources properly, lest you possibly lose your job.

    • @MariaVosa
      @MariaVosa Рік тому +13

      @@writer4life724 I honestly cannot think of a better example of why you shouldn't leave your homework to AI!

  • @AngelofGrace96
    @AngelofGrace96 6 місяців тому +3

    As a librarian in training, there is so much access to law databases in public, academic, and law libraries. The idea of not being able to find a case (A CASE YOU CITED SO YOU SHOULD HAVE BEEN ABLE TO FIND IT IN THE FIRST PLACE) is so stupid and so suspicious.

  • @auroraasleep
    @auroraasleep Рік тому +170

    Chat GPT: great for generating plot ideas for my 9 yr. old's D&D games.
    Chat GPT: not great for actual legal court cases.

  • @Timey254
    @Timey254 Рік тому +120

    Remember folks, what ChatGPT can and can't do is literally in its name: it's a CHAT bot.
    All it does is... keep up a conversation it thinks you want to have. That's where it starts and ends. It makes up the "facts" that you want to know because it doesn't really know anything; feeding it the "knowledge" just tells it how these facts are linguistically structured, so it can create a text that RESEMBLES what you are looking for to keep up the conversation.

    • @seventhslayer6935
      @seventhslayer6935 Рік тому +23

      In a basic sense, it is trained on how a correct response "should sound". It doesn't comprehend language and information like we do; it doesn't have an abstract understanding of why the documents it's trained on are structured the way they are. It just knows that they are, and frames a response accordingly. That's why, as he said in a previous video on AI lawyering, GPT is known for "eloquent bs". It sounds right, but it doesn't have the ability to understand "this sounds right because it contains factual information".

    • @kingofhearts3185
      @kingofhearts3185 Рік тому +4

      It's basically a slightly more coherent version of repeatedly hitting the autocorrect suggestions and using Grammarly to check for mistakes.
      Nothing of substance will be said, and it will fall apart the longer it goes on.

    • @Allycat101010
      @Allycat101010 Рік тому +3

      LITERALLY. The creators' (wildly dishonest) marketing hype didn't help, but I'm still amazed that people apparently just need to see a "style" or "sound" of typing to immediately think "wow, this thing must be factually correct". Bro

    • @carlwalker7560
      @carlwalker7560 Рік тому +3

      It seems to me that the use of the term AI is too loose, when applied to these types of program, at least to me as a layman. AI implies that there is some sort of reasoning going on, whereas in fact it is just language modelling.

    • @bzuidgeest
      @bzuidgeest Рік тому +4

      It does do programming though. And programming syntax is an exact business.
      With the right question and understanding of its limitations it can do some excellent work for you.
      You have to verify everything, but even then it can save a lot of time.
      Basic boilerplate, examples of how to use a new library.
      It's a great tool, but if you are a poor programmer, it won't make you a great programmer. If you don't understand what it gives you, you are likely to fail just like these lawyers.

  • @avengingblowfish9653
    @avengingblowfish9653 Рік тому +222

    I work at an academic press and received an email from a PhD student who couldn’t find a book we supposedly published that they cited in a paper.
    It turns out the book was made up by ChatGPT and the student ended up facing a disciplinary board for academic dishonesty…

    • @ratholin
      @ratholin Рік тому +27

      Well, he was. He cited a source he had no knowledge of. If you don't know it, don't cite it. Academia has become such a game of finding other people who agree with you that there are plenty of dishonest books that will say whatever you want; you don't need a robot if you're willing to do the legwork. A decade ago there was a whole ring in India that just worked at creating false consensus in order to keep the grants coming in.

    • @fachriranu1041
      @fachriranu1041 Рік тому +2

      That's so stupid. If I want a language-model AI to be a tool in my work, I can copy-paste a section of a book and have it give me an excerpt or answer a specific question in accordance with the text of that section.
      The entire logical line is still mine; I'm just using the language AI to basically paraphrase text.

    • @MushookieMan
      @MushookieMan Рік тому +15

      @@fachriranu1041 Nobody said you can't use AI as a tool. The student didn't verify the accuracy of the output. That's a different thing entirely.

    • @fachriranu1041
      @fachriranu1041 Рік тому +2

      @@MushookieMan that's exactly what I mean. The PhD student was so stupid to use an AI language model that way

    • @RFTL
      @RFTL Рік тому +7

      @@MushookieMan ChatGPT is the new Wikipedia trap. You can use it as a tool, but there is a reason you can't use it as a source.

  • @TheLewistownTrainspotter8102
    @TheLewistownTrainspotter8102 8 місяців тому +5

    13:10 Just reading the fake cases is enough to leave me busting a gut with laughter.
    "Miller v. United Airlines" claims that United filed for bankruptcy in 1992 after the United Airlines Flight 585 crash, and had a former U.S. Attorney General as their legal counsel.
    "Martinez v. Delta Air Lines" has too many logical fallicies.
    "Petersen v. Iran Air" somehow confuses Washington DC with Washington State.
    "Durden vs. KLM Royal Dutch Airlines" cites itself as a precedent.
    "Varghese vs. China Southern Airlines" starts off as the wrongful death suit of Susan Varghese, personal representative of the estate of George Scaria Varghese (deceased). But then abruptly turns into Anish Varghese's lawsuit for breach of contract.

  • @Sakuraclone99k
    @Sakuraclone99k Рік тому +208

    THE NOTARY BEING FAKED HITS SO HARD FOR ME.
    As a Notary, I know how delicate court documents are and the fact that the Date was mismatched?!?! WHATT

    • @pierrecurie
      @pierrecurie Рік тому +25

      The document was notarized 3 months before it was written😂😂😂

    • @ronjohnson6916
      @ronjohnson6916 Рік тому +18

      @@pierrecurie Not what I use my time machine for, but everybody's different.

    • @gfox9295
      @gfox9295 Рік тому +2

      Yeah. No horribly silly time machine usage shaming, please.

    • @andrewshandle
      @andrewshandle Рік тому +3

      So was the notary "faked' or given how incompetent these guys are, did it just have the wrong month by the signature?

    • @pierrecurie
      @pierrecurie Рік тому +1

      @@andrewshandle I'm guessing incompetence, but given that these are official seals, the penalties are likely to be rather significant.

  • @pdfads
    @pdfads Рік тому +223

    It's not just law. When "discussing" scientific issues, ChatGPT creates references to scientific papers and books which do not exist.

    • @Sugarman96
      @Sugarman96 Рік тому

      As a current engineering student, I feel like I'm going insane seeing so many other students rely so blindly on this stupid thing, it's gonna produce so many morons

    • @EWSwot
      @EWSwot Рік тому +68

      Chat GPT doesn't actually "know" anything, it just produces things that sound realistic.
      The language model has a concept of what realistic sounds like based on its input data; it has no concept of what is real or how reality works.
      It is a very good parrot with no internal understanding of what it says.

    • @GoldenPantaloons
      @GoldenPantaloons Рік тому +27

      ​​​@@EWSwot Yep... In a sense it's like those "How English sounds to non-English speakers" videos: It _sounds like_ it's answering the prompt - but that's all. Which may sometimes overlap with a sensible response, or other times make no sense at all.
      As someone who was following ChatGPT's development, witnessing its sudden arrival into public consciousness has been... what's that word for secondhand embarrassment?

    • @MartynWilkinson45
      @MartynWilkinson45 Рік тому +23

      People keep trying to get a clever program to do things it was never designed to do, couldn't do if it was programmed to, and would be questionably legal if they could. Seriously, if AI is still struggling with how many fingers humans have, how do you expect it to understand legal issues?

    • @RocLobo358
      @RocLobo358 Рік тому +3

      Yeah it is excellent at metafiction

  • @jackalovski1
    @jackalovski1 Рік тому +275

    No matter how much of a fraud you feel when doing a task, always remember there’s someone out there doing something they have no clue about with confidence that can only come from ignorance.

    • @NeuroNinjaAlexander
      @NeuroNinjaAlexander Рік тому +24

      I always liked this one: if you ever feel incompetent just remember that there's a country out there that has gone to war with birds... and lost
      Although to be fair, those birds are like tanks lmao

    • @inconnu4961
      @inconnu4961 Рік тому +5

      @@NeuroNinjaAlexander We won't mention the name of the country (Oz-trail-ya) so as to not embarrass an otherwise good ally.

    • @anarchy_79
      @anarchy_79 Рік тому +3

      You are right. I shouldn't feel bad for being a fraud. There are bigger fraudsters out there, so I'm technically on the moral side of life.

    • @NeuroNinjaAlexander
      @NeuroNinjaAlexander Рік тому +2

      @@anarchy_79 That's the spirit! Lol
      There's a song out there (pretty fly for a white guy) with a line I love: he may not have style, but everything he lacks he makes up with denial

  • @miritallstag336
    @miritallstag336 29 днів тому

    Dude, your editor needs a raise. That bit where you threw the case law book? Perfect. I laughed at the explosion and then again at the car alarm.

  • @JohnDoe-bq9tq
    @JohnDoe-bq9tq Рік тому +254

    This shit needs to be dealt with in the harshest way possible.
    Imagine if the defendant didn't have or could not afford adequate legal representation.
    This case might have gone straight to a default verdict, without anyone checking anything.

    • @bertilhatt
      @bertilhatt Рік тому +50

      I think the more reasonable take on this is that it's a good thing that someone mucking things up with ChatGPT happened in a case clearly without merit. The judge can fairly eviscerate counsel without depriving the plaintiff.

    • @uncreative5766
      @uncreative5766 Рік тому

      I'm not a lawyer, but this definitely qualifies as malpractice. LoDuca and Schwartz are FFFFF'd. They submitted everything under penalty of perjury, so they are definitely getting hit hard.

    • @randomwerewolf1099
      @randomwerewolf1099 Рік тому +25

      And in this case - LE seems pretty sure that the plaintiff's case is nonsense but imagine if they had had a legitimate case. They would've lost because their lawyers screwed up.

    • @janakakumara3836
      @janakakumara3836 Рік тому

      Well he could always copy-paste everything into ChatGPT and ask it to check if the content was AI-generated.

    • @scifino1
      @scifino1 Рік тому +6

      @@randomwerewolf1099 I'm no legal expert, but aren't plaintiff's counsel here liable for something like fraud or malpractice to the detriment of the plaintiff?

  • @arjc5714
    @arjc5714 Рік тому +182

    When I was reading the tweets on my own, the part that made me cringe the hardest as a non-lawyer was absolutely the “Were you on vacation?” “No, Judge” exchange. Wanted to be swallowed by the earth. Wanted to disappear from existence. He lied about LITERALLY EVERYTHING holy shit.

    • @theclanguagedeveloper5309
      @theclanguagedeveloper5309 Рік тому +19

      Basically a career-ending answer.

    • @tbotalpha8133
      @tbotalpha8133 Рік тому +5

      I took psychological damage listening to that exchange.

    • @korbell1089
      @korbell1089 Рік тому +1

      @@tbotalpha8133 I took emotional damage.😁 But yeah, when even complete strangers wince when he stated "no", it was not looking good.

  • @elijahdage5523
    @elijahdage5523 Рік тому +164

    People see GPT talking like a real person and immediately believe that it is just as good as, if not better than, a human when, in reality, it's just really really good at putting words together that sound convincing.

    • @thork6974
      @thork6974 Рік тому +9

      That sounds like a lot of real people I know, actually.

    • @genericname2747
      @genericname2747 Рік тому +14

      ​@@thork6974ChatGPT is just like us...
      It's stupid

    • @anarchy_79
      @anarchy_79 Рік тому

      It's better at putting words together than 75% of the currently living human population. The other 25% are dead.

  • @cheefussmith9380
    @cheefussmith9380 29 днів тому +1

    I’m an in house attorney at a midsized tech company. I have people regularly sending me documents to review that they “ran through chat gpt and think it’s fine”
    My boss always jokes that chat gpt is going to put us out of a job. He only does that because he doesn’t see the emails people send me

  • @TauGDS
    @TauGDS Рік тому +398

    I am a lecturer in computer science at a British university, and it is frightening how many of my students think they can just use ChatGPT to write their assignments. One of them even asked us how they should cite "AI" using Harvard referencing. (I'd also like to point out that of the papers or sections of papers we've flagged for AI, none got higher than a 3rd and several actually failed.)
    I'll say it loud for the people at the back: it's a chat bot! It makes sentences that *look* right, and for common knowledge it probably is right, because we'd spot it if it said "dogs have wings" or "the sun is made of camembert". It's like watching sci-fi and taking that as accurate physics: it's written to sound plausible to a layperson, that's it.

    • @googleoogle
      @googleoogle Рік тому +48

      Exactly, the amount of people that think it's a research tool and not a language model is astounding. Its ONLY job is to make a sentence that looks "right"; nothing about accuracy.
      For personal use I wanted a recipe from ChatGPT, and ended up finding it so interesting I asked for a source. It straight up fabricated a source: fake website, book, page number, everything. When I asked for clarification after not being able to find it, the bot basically said "yea it's not real, sorry lol."
      As someone who usually tries to go to the original source if one is cited, the more fake citations that get through these papers, whether personal or even academic, the more of a nightmare it's going to be in the future sifting through all the junk that a word generator has tricked people into citing. Artificial "intelligence" is a farce sometimes. Really enjoyed your comment, hope you're well.

    • @klisterklister2367
      @klisterklister2367 Рік тому +28

      Unrelated, but shout-out to that time a chatbot suggested you can eat a poisonous plant and gave tips on recipes

    • @kosaciecsyberyjski
      @kosaciecsyberyjski Рік тому +28

      It's literally just better cleverbot. Idk why people treat it like a shortcut for assignments lol

    • @gfox9295
      @gfox9295 Рік тому +8

      It's MegaHAL (a chat-bot parody of HAL from 2001: A Space Odyssey) from the late 90s but with better coding. What's old is new again.

    • @knightwalkr
      @knightwalkr Рік тому +9

      I know people who use it to help them revise and find mistakes in their novels and such.
      Also, I know someone who used it to help them write their bibliography page. They tracked down all the appropriate information, fed that into ChatGPT, and asked it to format the information into the proper citation style. Then they read over it to verify that it had in fact done its job correctly.

  • @unwashedotaku
    @unwashedotaku Рік тому +279

    I don't think I've ever seen Devin this apoplectic. Not only is he ashamed for these clowns, he's visibly angry that they tried this crap.

    • @bertilhatt
      @bertilhatt Рік тому +16

      Wait a couple of days so he recovers from the latest Trump thing…

    • @ianb9028
      @ianb9028 Рік тому +13

      Partly because this is the thin edge of the wedge. ChatGPT will be used more often to "improve" writing, and these fake references will become common.

    • @retrosean199
      @retrosean199 Рік тому +12

      It's not just Fremdschämen, it's that this makes the legal profession look stupid 🤣

    • @SA-bc6jw
      @SA-bc6jw Рік тому +2

      As he should be, for a whole ton of reasons.

    • @patheddles4004
      @patheddles4004 Рік тому

      @@ianb9028 It's easy enough to automate basic verification of references, though: a program parses a set of citations, queries a case-law database for each one, and reports which ones it couldn't find. A human then goes looking for those references to see if they exist (a rough sketch of that kind of checker follows this comment).
      I mean, this wouldn't be good enough for verifying your own references, but it's absolutely good enough to catch most fakes from opposing counsel. And I can't see judges ever deciding that fake references are acceptable.
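
      A minimal sketch, in Python, of the checker the comment above describes. It is an illustration only, not what any court or firm actually runs: SEARCH_URL, the "cite" query parameter, and the "count" field in the response are placeholders standing in for whichever case-law database you can actually query, and the citation regex is deliberately rough.

          import re
          import requests

          # Placeholder endpoint -- stands in for a real case-law search service.
          SEARCH_URL = "https://example-caselaw-db.test/api/search"

          # Rough pattern for reporter citations like "925 F.3d 1339" or "11 F. Supp. 2d 404".
          CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z.\s]{0,20}\d?[a-z]{0,2}\s+\d{1,5}\b")

          def extract_citations(text: str) -> list[str]:
              """Pull anything that looks like a reporter citation out of a filing."""
              return [match.group(0).strip() for match in CITATION_RE.finditer(text)]

          def citation_exists(citation: str) -> bool:
              """Ask the (placeholder) database whether the citation resolves to a real case."""
              response = requests.get(SEARCH_URL, params={"cite": citation}, timeout=10)
              response.raise_for_status()
              return response.json().get("count", 0) > 0

          def report_unverified(filing_text: str) -> list[str]:
              """Return the citations the database could not find, for a human to double-check."""
              return [cite for cite in extract_citations(filing_text) if not citation_exists(cite)]

          if __name__ == "__main__":
              with open("filing.txt", encoding="utf-8") as f:
                  for cite in report_unverified(f.read()):
                      print(f"Could not verify: {cite}")

      Anything the script flags still needs a human to confirm, as the comment says; the automation only narrows the pile.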

  • @MrT3a
    @MrT3a Рік тому +46

    "If you want to get technical, they make stuff up"
    This one got me in stitches.
    Truly a technical way of putting it.

  • @lanakeni5003
    @lanakeni5003 25 днів тому +1

    As a history student who had to read court documents for an entire semester, I could recognize a discrepancy a mile away (and I have bad eyesight).

  • @DiscountDeity
    @DiscountDeity Рік тому +817

    Superintendent Chalmers: “Six cases, none found on Google, at this time of year, in this part of the country, localized entirely within your court filings.”
    Principal Skinner: “Yes.”
    Superintendent Chalmers: “May I see them?”
    Principal Skinner: “…no.”

    • @aurea.
      @aurea. Рік тому +17

      Thanks for the laugh!

    • @durdleduc8520
      @durdleduc8520 Рік тому +8

      i could hear their voices

    • @ThePkmnYPerson
      @ThePkmnYPerson Рік тому +23

      Seymour! Your career as a lawyer's on fire!

    • @KarmikCykle
      @KarmikCykle Рік тому +17

      @@ThePkmnYPerson No, Mother! That's just the Northern Lights!

    • @vituperation
      @vituperation Рік тому +9

      Well, Loduca, I'll be overseeing this case _despite_ the statute of limitations.
      Ah! Judge Castel, welcome! I hope you're prepared for an unforgettable docket!
      Egh.
      (Opens up Fastcase to find legal citations only to find the subscription has expired)
      Oh, egads! My case is ruined! ... But what if... I were to use ChatGPT and disguise it as my own filing? Hohohohoho, delightfully devilish, Loduca.

  • @lilia-ai
    @lilia-ai Рік тому +222

    As a programmer, all I can say is: most people still don't realize how stupid AI is; it just sounds smart because it is confident. For a basic task, general knowledge, or maybe a bit of trivia, you could use an AI like ChatGPT, but for anything more complex it's usually just spouting BS. I learned this from my experience using AI to help me code.

    • @Ellie-rx3jt
      @Ellie-rx3jt Рік тому +24

      People are easily deceived by other humans who sound smart because they're confident. Add in the (completely wrong but generally held) perceptions that computers always tell the truth and are unbiased...

    • @afrovarangian
      @afrovarangian Рік тому +19

      I've given ChatGPT a simple substitution sum, and it gave the wrong answer, used the wrong formula, and tried to gaslight me about why it was correct and I was wrong.

    • @DWlsh43
      @DWlsh43 Рік тому +2

      Yep, I realized this when I tried to test it against a router that couldn't talk to its neighbour because it wasn't broadcasting itself in OSPF, and ChatGPT was spouting complete nonsense; it was comically wrong at times.

    • @livelovelife32
      @livelovelife32 Рік тому +8

      It's funny how many times I have to tell ChatGPT to re-examine what it just said and check whether it actually answered the question I asked, lol. I find it a good study tool, though. I copy and paste my study notes into it and tell it to ask me five/ten/twenty questions based on the information given; it's very good at that kind of thing. I'll give it a topic I'm interested in and tell it to name a couple of websites that cover that topic in detail. I'll ignore any link given because they're normally wrong. You have to know how to use it and understand its limitations. For example, don't ask it for the code of anything unless it's very, very basic. What it can do, though, is examine the code and explain why something isn't working. It's not always right, but most times it is. It's also very good for language learning. I have used it to explain the grammar of a sentence I was struggling with.

    • @Alyssa-bi7pe
      @Alyssa-bi7pe Рік тому +1

      @@livelovelife32 Yes, I agree, it's surprisingly helpful for learning a language, although it has given me contradictory answers, which I then had to ask it to clarify to find out which one was actually correct. But it is pretty good at it for an AI.

  • @chameleonfoot
    @chameleonfoot Рік тому +53

    Finally, something I actually can talk about, because I'm fascinated by the topic: people saying AI lies.
    I still don't really believe in calling it lying, because it's a language model; the computer literally has no idea what it's saying.
    Take the thought experiment "The Chinese Room", for example. A person is trapped in a room with books of Chinese symbols and is told to write appropriate responses to the slips of paper slid under their door. This person doesn't speak or write Chinese, but all the slips of paper are written in Chinese. So they look up those symbols in their books and write the responses they see.
    But obviously they don't know what they're saying. And the only way the people outside would know they're not fluent in Chinese is by knowing what is going on inside the room or seeing that their responses are odd.
    ChatGPT and other bots are the person inside the room, albeit they go through their books much quicker and will make up new sentences based on all the data they have. But they just don't know what they're saying. So it feels wrong to call it lying. If I meowed at my cat and he thought that meant I was about to feed him when I wasn't, it's not really lying, because I didn't know what I said. It's on the shoulders of the consumer to understand that the program has no way to differentiate fact from fiction. (For a toy illustration of "plausible but uncomprehending" text, see the sketch at the end of this thread.)

    • @katarh
      @katarh Рік тому +6

      There is also a chapter in Asimov's _I, Robot_ with a robot who interpreted "you must not harm humans" as also meaning to not cause emotional harm. So it lied. It lied with the best of intentions, because it didn't want to break a human's heart. But of course, the lies it told led to much worse problems.

    • @BigGomer
      @BigGomer Рік тому +1

      Never heard of that thought experiment; pretty neat. It seems like a good way to explain AI to people who are struggling to grasp it.
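
    A toy illustration of the Chinese Room point above, sketched in Python. This is not how GPT actually works under the hood; it is just a tiny bigram model (the corpus, function names, and starting word are made up for the example) that strings together statistically plausible words while "knowing" nothing about what any of them mean.

      import random
      from collections import defaultdict

      def train_bigrams(text: str) -> dict[str, list[str]]:
          """Record which word has followed which -- the model stores nothing else."""
          words = text.split()
          table: dict[str, list[str]] = defaultdict(list)
          for current, following in zip(words, words[1:]):
              table[current].append(following)
          return dict(table)

      def generate(table: dict[str, list[str]], start: str, length: int = 12) -> str:
          """Emit a fluent-looking string of words, with no notion of truth behind it."""
          word, output = start, [start]
          for _ in range(length):
              candidates = table.get(word)
              if not candidates:
                  break
              word = random.choice(candidates)  # any word that has followed before will do
              output.append(word)
          return " ".join(output)

      if __name__ == "__main__":
          corpus = ("the court held that the motion was denied and the court held "
                    "that the appeal was granted and the motion was timely filed")
          print(generate(train_bigrams(corpus), "the"))

    The output reads like legal prose ("the court held that the appeal was denied...") even though the program has no concept of courts, appeals, or truth, which is the commenter's point in miniature.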

  • @michaellohmeier6427
    @michaellohmeier6427 Рік тому +3

    Okay, as a German I have to admit that your pronunciation was quite good. Just one tip: Fremdschämen has an 'ä'. The closest thing to this is the 'a' in apple. The rest was spot on.

  • @WessonSnyder
    @WessonSnyder Рік тому +114

    As the only one in a family of lawyers who hasn't become a lawyer, these videos are how I show interest in their profession. Always fun to send them over and talk about it.

  • @cleverlilvixen
    @cleverlilvixen Рік тому +28

    24:26 The thing that really gets me here is, if you read through ChatGPT’s responses thoroughly, not only does it say that it doesn’t have access to current legal precedent, it encourages the “user” to consult legal databases, do their own legal research and consult with an attorney for proper legal analysis and guidance… I’m not a lawyer, but I think I would have taken that as a hint.

  • @zappababe8577
    @zappababe8577 6 днів тому

    9:05 I like the fact that the judges give their answer to the court document in front of them by writing on the document itself. That's very environmentally friendly, since it doesn't use more paper to give an answer; plus, there can be no ambiguity as to which document the judge is responding to, and it can't get lost, because it's actually on the document itself.