I've been a litigator for 40 years. My most embarrassing moment in court was when my opponent cited a case for a proposition that was not actually in the case. My best guess is that a word search turned up the right words, but in a whole different context. Then, to make matters worse, my opponent sent an associate who had not read the case to argue against me. The judge read the case because I mentioned the issue in my reply brief. The judge asked the opposing attorney about the facts of the case that led to the language cited in the brief. Then, rather than acknowledging that she had not read the case, the attorney who was arguing the matter tried to BS the judge. The judge was so unhappy that I was actually embarrassed to be standing there watching the debacle. I guess ChatGPT can take this to a new level.
If I pay some lawyer $100k and the mofo has the audacity to use ChatGPT, I would be fuming. Legit fraud. I'm paying for your experience and knowledge, not giving away free money for you to relax and botch everything...
Lawyers frequently have paralegals, clerks, and interns perform the research, present their findings, and draft documents. That goes all the way up to the US Supreme Court. Using AI is not that much different. The problem is that this attorney did not thoroughly check the work before submitting it. That is either lazy, dishonest, or incompetent.
@@BryanTorok The difference is all those people have an education and can be held accountable. ChatGPT is well known to just make shit up, and I'd love to see you fire it for making up a gibberish brief.
@frozencanary4522 1st divorce lawyer stole my retainer and sued for more cash after only working one hour. The small claims judge shook my hand and wished me luck with my actual divorce after the SC case ended.
@@frozencanary4522 I couldn't find any willing to do so in the capital of my state. They all said they didn't want the bad attitude of legal peers to follow them in court.
ChatGPT forgot to get a law degree for that state and even forgot to pass the bar there. I could see a lawyer trying it, but no way should they turn it in. Reminds me of a guy who kept two lists, one of the best ways and one of the worst, but he forgot to title them, and the wrong list got implemented.
Correct. If the lawyer is using ChatGPT... why would I bother to hire the lawyer? I can just use ChatGPT myself and cut out the middle man. I'm hiring the lawyer specifically because he can do something that ChatGPT can't do.
Not really... since it doesn't understand anything. It's just recognizing patterns in its training data and responding based on probabilities of "truthiness". Basically it answers based on vibes, rather than facts or reality.
@@MuantanamoMobile Yep. It has no idea when it can't find a good match in its training set and starts grasping at straws. It's designed to behave like that. If you were to remove the possibility of hallucinations, there wouldn't be a ChatGPT.
Sadly, this isn't the stupidest legal story I've heard today. Disney lawyers topped it by arguing that the Disney Plus arbitration agreement applies to lawsuits unrelated to Disney Plus, such as the wrongful death lawsuit filed by the widow of a doctor who died as a result of an allergic reaction after being served food with an allergen (after being assured the allergen would not be present) in a Disney owned restaurant.
I believe that took place at the restaurant Raglan Road in Disney Springs (a third-party operated restaurant). I'm not trying to justify Disney, but from my understanding that would be like suing a mall because a restaurant didn't handle the allergy properly.
Instead of this specious argument, Disney should have settled quickly, along with an apology and a lifetime pass. Pulling out this Disney+ subscription argument was the wet dream of some ahole attorney. So here is the takeaway: before heading off to the Magic Kingdom, be sure to cancel your Disney+ subscription to preserve your rights in case you slip on an ice cream cone during the light parade.
I have a friend who uses ChatGPT for EVERYTHING. I had three multimeters and said he could have one. He said ChatGPT told him the best one was the one on the right because it had the temperature sensor. Dead wrong; the temperature-sensing one was on the left, so he isn't getting a temperature-sensing one now.
@@johnpublic6582 Looks legit. A person could have said that in that context. But GPT-4 is still not good at detecting jokes and sarcasm. Food glue is a thing, as are MANY complex chemicals that can prevent the toppings from sliding off, but it's probably not used very often for pizza.
@@ChrisS-oo6fl In this case, though, it's almost a necessity. There are other ways, but the reliability is... questionable... Either way, I tend to write with a moderately formal voice about half the time, and I have been accused of using ChatGPT (which I do use to assist with research, but I never trust any point without confirmation). Frankly, as much as I love the utility, I don't trust it enough to rely on it yet. Someday, maybe. But not for the foreseeable future.
I know a number of lawyers, and this is an accurate assessment. Most of them have an undergraduate degree in something worthless (poli-sci seems to be popular) and then go to law school in an attempt to be able to pay back their loans. Paralegals, on the other hand, often know more about being a lawyer than the lawyers they work for. I'd force anyone wanting to be a lawyer to first spend five to ten years as a paralegal. It would weed out many of the idiots and the lazy while providing a solid background in how the law is practiced. If they can't hack the job of a paralegal they shouldn't be allowed to even take the bar exam.
About 10% of people who graduate from law school fail the bar exam. But there's only a 1% unemployment rate among lawyers. Which tells you that there are a lot of people who barely passed the bar exam working as lawyers.
@@blairhoughton7918 Only 10%? Typically a professional certification has a much higher failure rate, even for those who study their ass off. I took one that had a 90% failure rate, and I was glad it did; it kept the incompetents weeded out. The exams need to be difficult so that only those that truly know what the hell they are doing pass. If you're not ready, come back when you are.
@@tourneynet8557 Disney did something even worse. They tried to argue that the arbitration agreement for Disney Plus applies to a wrongful death lawsuit in a Disney-owned restaurant (a doctor died after being served food containing an allergen after he informed the staff that he had a food allergy).
What an insane idea! Actually doing your job?!?!?! What's next? Filing everything properly? What an absurd idea. These "attorneys" shouldn't be attorneys. They bring bad rep to all of them.
@@johnwesley256 -- There are a few good public defenders. Problem is that they are so swamped with cases that they cannot devote the time and energy that their clients need. (I met one of these, once. He could easily make partner at any law firm in the area, but he is an idealist and thinks the system can be fixed.)
@@TheRealScooterGuy You can't push on a rope. If he wants to fix it, he needs to run for office and get over half the legislature and the governor in on his plan. But the people who buy politicians aren't interested in that. They're too busy trying to install an aristocracy...
Washington lawyers sign at the bottom attesting that they, and only they, drafted the filing. If anyone helped, they have to sign too, and anyone who is not a lawyer cannot sign. I'm sure they cheat all the time, though.
In the 80s, people were always running into bureaucrats who were certain that if it was in the computer, it had to be correct. This will continue to be a problem for years.
There have also been cases where people have erroneously been certified dead. But once it's in the system, it's next to impossible to change that status.
Same thing on YT now. I've noticed new channels popping up in my feed with under 5k subscribers where the content is all AI generated: video, voice. The channel owners are too lazy to do the work themselves.
Most people don’t do much at work anyway, and many are overpaid. Most people are in what used to be called “make work” jobs. It’s just to keep the economy moving.
If the viewers are entertained by the content, I don't see a problem with it. The advertisers, however, may not like it, since they're the ones paying for it. And anyone whose copyright has been violated won't like it (e.g. channels which basically convert a book into an AI-voice audiobook). So rooting out AI channels is their problem, not viewers'. The only negative effect I see for viewers is that (like spam) you end up wasting some time checking out a channel before you can figure out it's AI. We've already come up with a variety of anti-spam tools for email. Honestly, I think the first company to implement similar tools for video has a chance to displace YouTube. Google has been really lazy about improving the YT experience for viewers.
You can spot AI video and audio now, but once AI-generated material advances you won't know what's real or fake, and that's extremely scary. Think how our government or other authorities could abuse this technology.
You finally noticed, lol? However, many of the AI tools are also a good utility for those with disabilities to be able to create quality content, especially synthetic voice generation; with the right resources you cannot tell the difference between human and machine. The new models are now able to add emotional inflection to the voice. Most established creators are using AI-generated images. You can tell which ones use good models and which use terrible, old, open-sourced ones. You can't tell the difference between real and generated photos with proper use, upscaling, and model selection. You're probably seeing dozens a day that you don't realize are AI.
@@ChrisS-oo6fl Haha. Yeah, I heard the AI-generated voice and images are so real now, you can't tell the difference. Very easy to use for nefarious purposes, like framing someone.
Signatures really shouldn't count for much against laypersons... We are practically FORCED to agree to things we have no way of understanding. Buying a brand new RV, a topic we've seen on this channel, you're told that it is all just "boilerplate"... Are we meant to go hire a lawyer to read over everything before we sign it?
I've been a computer professional, and there's a secret about computers that everyone needs to know: Computers are stupid. That includes ChatGPT. I have friends who are experts in a variety of subjects. EVERY time we query ChatGPT on something we actually know, if the question is even a tiny bit hard ChatGPT gets it wrong. And when you point out the error to ChatGPT, it doubles down and condescendingly explains why your answer is wrong.
@@dionc6 It's worse than that. At least when they're working properly, computers do what they're told to do, and nothing else. They can't think for themselves. But human languages have ambiguity built in. When I see a comment that says "Your a genius," I know enough to mentally correct that to "You're a genius" and deal with it appropriately, however I define "appropriate" in context. If the comment is directed toward me, I can Like it and respond "Thank you." Or I can Like it and say "It wasn't my idea. So-and-so said it first." If it's directed at someone else and I think the recipient of the praise is a moron, I can simply ignore it and pass on. If I agree, I can Like it and pass on, or I can Like it and leave my own comment expanding on the original. Those are human ways to react, based on human beliefs and human understanding, and on the fact that I care enough to be reading and reacting to comments.

But why would a computer, or a cloud-based AI, be reading comments in the first place? I'm currently on my tablet. My tablet isn't "reading" comments; it's simply displaying them, because I told it to. It isn't responding to comments; it's providing a method for me to respond, because that's what it's been programmed to do. If ChatGPT is reading comments, it's probably because that's part of its training regimen, which does, indeed, make it vulnerable to Garbage In, Garbage Out, but GIGO is about data. ChatGPT is a program. It was written by humans, with the purpose of simulating intelligence, key word: "simulating." As a program, ChatGPT can only do what it's been programmed to do. It wasn't programmed to simulate a lawyer, just a human, and then it was "trained" on a whole bunch of YouTube videos. It knows nothing about the ethical rules lawyers have to follow. Also, apparently its programmers never told it not to make things up. Why would they?

They wanted it to be able to simulate a creative writer, or a music composer, or an artist, and all of those activities involve "making things up." So when a lawyer asks ChatGPT to provide case law that supports position X, and position X is totally wrong, what is ChatGPT supposed to do? You or I would say, "Tell the lawyer that there is no case law supporting position X." But the lawyer didn't ask "IS there case law supporting position X?"; he told ChatGPT to PROVIDE the case law supporting position X. So ChatGPT does exactly what it's been programmed to do: it gets creative.
If the court decides on summary judgment because your lawyer used ChatGPT, then would you, as the client, have a legal standing to appeal the decision based on inadequate representation? Edit: Not a lawyer, genuinely asking
Knowing some attorneys, I could absolutely see them trying to make the argument that they didn't use ChatGPT, they used another AI, and they were never told they couldn't use that one.
Yeah... If my life/freedom were in this guy's hands, or I were paying him $500/hr... and I found out a program was doing ALL his work!!?? Ohhhh, I'd be beyond mad!! I'm gonna need another lawyer to deal with my new criminal case.
I've known a few legal secretaries; lawyers ask them to do a lot, essentially their job. I could see an overworked legal assistant being given a task and, with a task list piled high, using ChatGPT to help them keep up.
@@dianayount2122 That makes the story so funny. The excuse of two actual humans not checking their filings can only be topped by involving an intern, the intern hiring a guy on Fiverr, and the guy on Fiverr using ChatGPT.
That's a problem all companies face. It's why you have foremen and managers overseeing underlings, and multiple levels of people double- and triple-checking work before it's approved for release to the client. That way, the only way bad work can get out is if multiple levels of your organization are being lazy. If you're not practicing these safeguards, you shouldn't be charging the client hundreds of dollars per hour.
I write legal documents for a living. I'm not an attorney, but between a law firm's own database and LexisNexis it's not that hard to write this stuff, because 70 to 80% of whatever you are writing has already been written. All you have to do is use what has already been written as a template. You can even check the citations to see how well they fit your case, or you can use LexisNexis to find cases to use as citations. Also, because I'm not an attorney (or even if I was), every single legal document that we process goes through a review process before our Director signs off on it.
Judge: You stand accused of stalking and voyeurism. How do you plead?
Defendant: Not guilty due to insanity.
Judge: How so?
Defendant: I am CRAZY about that girl!
Judge: Bailiff, whack his PP!
This is from memory, so I am probably getting it wrong, but that was a funny skit. Forget where I saw it; it was a long time ago. Might even have been audio only. Dunno.
Sanction is a contranym, that is, a word in which one meaning can be nearly or exactly the opposite of another. To sanction in this situation is some kind of punishment for bad behavior, such as a corporation being sanctioned by a government entity for ethical violations. Sanction can also mean approval or permission, as in "the executive VP acted with the full sanction of the board when he committed the violations."
By definition, ChatGPT doesn't know anything about any topic. It only knows about language, language structure, word order, and the material it was trained on in a linguistic sense. It may _appear_ to have knowledge of topics discussed but that is just a language pattern it learned from source material.
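The point this comment makes, that a language model only learns word patterns, can be sketched with a toy example. This is an editor's illustration, not how ChatGPT is actually built (real models are vastly larger neural networks), but the core idea is the same: a model that has only seen word sequences can emit fluent-looking text with no notion of whether its claims are true.

```python
import random
from collections import defaultdict

# A tiny bigram model "trained" on a few invented legal-sounding sentences.
corpus = (
    "the court held that the motion was denied . "
    "the court held that the appeal was granted . "
    "the motion was granted ."
).split()

# Count which word follows which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8):
    """Sample a plausible-sounding sequence purely from observed patterns."""
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(generate("the"))  # fluent-looking, and possibly false
```

Nothing in `generate` consults facts; it only consults which words followed which in training, which is exactly why the output can be grammatical and wrong at the same time.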
The genie is out of the bottle on that one. The courts will have to come down hard or it won't stop. If they come down hard, it gets less use, and the app makers will have to make it work right. Funny, though: the first thing it learned was "if you can't dazzle them with brilliance, baffle them with bull."
ChatGPT won't be the tool that is eventually used for this. It's a "large language model" whose purpose is making language look like other examples of language it has already seen, sometimes at the expense of accuracy. But you can be sure that LexisNexis or Westlaw or some other company in the legal field is working on an AI for legal purposes that will stress accuracy first. The subscription fee for the first such app will be super-high, ensuring that only lawyers will be able to afford it. It will likely be able to create any brief supporting any legitimate position, following the rules and local rules for any court in the US. (This is a guess, but I'll bet it happens.)
Derided? GPT itself, probably not; it's a very useful *tool* for analyzing language patterns or re-phrasing stuff. Using it as a substitute for one's brain, absolutely yes!
@@jwhite5008 Well yes, but the problem is its creators are selling it as a substitute for brains. So as a company/product it is worthy of being derided, because its creators and marketing make it so.
I'm a lawyer & use AI only as a finding tool (a very fast one) to ID seminal cases & provide a more advanced starting point for my actual research & writing. It's still more AU than AI.
The problem with large-language-model AIs like ChatGPT is that they are more interested in correct construction than factual correctness and will simply invent citations rather than find you existing ones. A live-search AI would need a different strategy. And as the lawyer you should still be clicking and reading every cite to be sure it's real and appropriate, because opposing counsel is going to open you like a can of sardines if they find a mistake.
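As a hedged illustration of that last point, here is a hypothetical sketch of a first-pass cite check in Python. The citation pattern and the "verified" list are invented for the example; real cite-checking means actually pulling and reading each case, not just string matching, but even a crude flag like this would have caught fabricated cites.

```python
import re

# Matches a simplified federal-reporter citation like "550 F.3d 101".
# The format and the verified set below are made up for illustration.
CITE_RE = re.compile(r"\b\d+\s+F\.\s?(?:2d|3d|Supp\.)\s+\d+\b")

# Hypothetical list of cases the lawyer has actually pulled and read.
verified_cites = {"550 F.3d 101", "12 F.Supp. 345"}

def unverified_citations(brief_text):
    """Return citations found in the text that aren't in the verified set."""
    found = CITE_RE.findall(brief_text)
    return [c for c in found if c not in verified_cites]

draft = "As held in 550 F.3d 101, and again in 999 F.2d 777, ..."
print(unverified_citations(draft))  # ['999 F.2d 777']
```

Anything the checker flags would then need to be looked up by hand; a hallucinated cite simply won't exist in any reporter.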
This 'new' AI really just gives a sloppy summary of the same search results we've been getting for 30+ years using "natural language" on legal search engines, which has performed about as badly.
That would be smart except how does it work in text? Also, people should remember that AI generated content can't be copyrighted. So unless the person taking the AI content does something to modify it significantly, it's not theirs, and ChatGPT can't own anything.
Watermarks are easily removed. The fundamental problem isn't that it was generated with ChatGPT... the fundamental problem is that the quality is bad. If ChatGPT actually created professional-level and correct output, then it wouldn't be an issue.
It's only a matter of time before we get our first lawsuit where a sanctioned lawyer sues the company that developed ChatGPT for getting them sanctioned. Hopefully, it'll happen in every circuit so the courts can develop a strong precedent, like "Qualified Immunity"-level precedent.
ChatGPT making "stuff" up! This is why it's great for general corporate documents, such as in advertising and marketing, where that's been the norm since long before AI became a thing.
LegalEagle is currently advertising a "legal AI assistant" to help review and draft contract forms. It was interesting that at least some people felt a certain amount of confidence in the product. Perhaps AI is more reliable for that type of "boilerplate" work. Technically, it's still reviewed by a lawyer, but technically, so are the court filings.
That does seem like a better use of the technology. A contract is simply taking the terms that have been agreed to (or terms that one hopes the other party will agree to) and putting them in writing. This technology would facilitate that by allowing the wording of that contract to be polished to look better. There would be no reliance on facts that could be fabricated.
If ChatGPT were returning correct results, then using it would save time, which is the opposite of stealing from clients, unless they were still billing for all the time they would have spent doing it manually. But it doesn't, so yeah...
@@blairhoughton7918 We already know they would bill all the hours no matter what, but yeah, it's theft in multiple ways; billing someone and losing their case because you didn't do the necessary work to ensure their case is theft as well, for sure.
@@blairhoughton7918 Well, you also have to remember that without ChatGPT, the lawyer would have been paying the salaries of paralegals, clerks, and interns with the money you give them. But if they cut out the paralegals, clerks, and interns and just use ChatGPT, then they don't have to pay for them and are pocketing even MORE money than they normally would. I would 100% call that theft and fraud, even if the AI worked 100% of the time.
Eventually we'll get a story where a lawyer uses ChatGPT to respond to the court regarding their use of ChatGPT to file an earlier brief. I'm going to enjoy that one ... Also, if I were a lawyer, I would be thankful that ChatGPT is demonstrably unable to do my job correctly. If the day comes when it can, a lot of people in the legal field will be out of work. So I would be loath to hasten that day ...
That day will come. Maybe not the product known as _ChatGPT,_ but there will be some legal AI created, with expensive subscription fees, that doesn't make up its own caselaw, and it will be game over for anyone not using it. I'm guessing that LexisNexis or another company already in that business is already working on such a product.
This is why some people theorize that right now is the golden age of AI. And it's only going to get worse from here. The current AI was trained on datasets which are 100% (or damn near) human-generated content. Future AI will be trained on datasets which are increasingly "polluted" by AI-generated content which people tried to pass off as their own work. Leading to a feedback loop of nonsense being trained into future AI.
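That feedback-loop worry has a name in the research literature, "model collapse," and the gist can be simulated in a few lines. This is a toy sketch under an assumed simplification: the "model" is just a fitted Gaussian, and the 0.9 factor stands in for a generative model's tendency to favor its most probable outputs. Each generation trains only on the previous generation's output, and the diversity of the data shrinks.

```python
import random
import statistics

random.seed(0)
# Generation zero: original "human-made" data.
data = [random.gauss(0, 1) for _ in range(2000)]

spreads = []
for gen in range(6):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    spreads.append(sigma)
    # Refit on the model's own slightly over-confident output
    # (the 0.9 "temperature" factor is an assumption for illustration).
    data = [random.gauss(mu, 0.9 * sigma) for _ in range(2000)]

print([round(s, 2) for s in spreads])  # spread shrinks each generation
```

The tails, the rare and unusual content, are what vanish first, which is the "nonsense feedback loop" the comment describes: later models see a narrower and narrower slice of what humans actually produced.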
☝️ I once worked for a medical clinic. The doctor didn't know what the patient had and said, "Excuse me while I reference my medical books." The doctor came to the employee area, got on a netbook, and googled the symptoms. Then the doctor went back to counsel the patient. 😂
@@copcuffs9973 That happens if the doc isn't a specialist in a certain field; e.g. a cardiologist (heart surgeon) might look up a suspected cancer's symptoms and then, when sure, order specific tests and refer you to an oncologist (cancer doctor).
@@copcuffs9973 That's why one is referred to a cardiologist for heart issues, not a dentist or a neurosurgeon, and to a psychiatrist for mental ailments, not an oncologist or ENT specialist.
Eventually there will be an AI that functions as a compendium of all legal knowledge. It's coming. Lawyers using baby AGI are playing fast and loose, but rest assured the days of studying law are numbered. Seeing how close AI companies are to solving hallucinations and improving reasoning ability beyond human levels, the justice system is in for a treat. A homeless person will have the equivalent legal representation of a Fortune 500 company, without the political influence. A lot of lawyers will say otherwise, but they're all wrong. People usually won't consider scenarios where the outcomes of their lives are worse. "The market won't crash." "If they take all the jobs, what will people do?" "They can't do MY job." I woke up when I discovered AlphaFold. There used to be an entire industry around protein folding. It took a PhD to figure out how one protein folded. A single AI figured it out.
Well..... The state of Washington is having such a problem finding attorneys to work as public pretenders that they are considering not requiring passing the bar to become a licensed attorney.
@@baronvonslambert It'll come up with the right answers within a few years; it's literally a matter of time before a model can be trained on trillions of points. We know what we need to do, we just need a bigger boat to do it, and they're currently building it as fast as Nvidia can pump out units.
This is why, when people start talking about the "highly educated," I always shake my head. There's smart, and then there's smart. Education is just learning things; it does NOT make you smart. You can be highly intelligent and just not educated, or you can be highly educated and not smart. There's a small percentage that is both. And I will state categorically that a smart person is better than a highly educated person. I personally know some people who hold doctorates who are as dumb as a box of rocks when it comes to anything not in their highly specific area of knowledge. And by dumb as a box of rocks, I mean unable to do basic reasoning applied to daily life. You know, problem solving.
Yes, ChatGPT has been trained on law libraries, but it has also been trained on Reddit. If you don't know what you're getting into with that, better not to get into it!
I think most attorneys would consider this to be the stuff of nightmares -- the equivalent of getting called on the carpet in front of a judge in your underwear. The mere idea of filing something I couldn't vouch for is terrifying.
There are tools and ways a lawyer can legitimately use "AI" (large language models) to aid in their work; drafting their briefs is not one of them. You can, however, draft your brief the normal way, then use an LLM to punch up the impact of the prose. You still have to be careful, of course.
This is apparently an irresistible temptation for less-than-quality attorneys. As such, it might be a good thing, in that it will weed out lazy, overbilling, incompetent attorneys. Who would knowingly hire an attorney that works this way?
I so wanted a laugh at the end of the night, now I'm just amazed at how stupid these lawyers are. lol I guess that's a laugh in and of itself. Thank you, good Sir, and goodnight.
I find it interesting that so many lawyers are getting caught like this. Before these stories started popping up, I wondered if Courts regularly reviewed the arguments being made in filings. I think I have my answer to that question.
Well, I've seen cited cases that one cannot trace or that do not exist. But as a professional courtesy, I don't raise the issue in high tones: "Your honor, I couldn't trace these cases, so I can't comment on them; yet they cannot have any significance, or they would be quoted in detail by the opposing party." It works well in a jurisdiction where case law is only advisory.
THAT AIN'T HAPPENIN!!! Best to just use a chatbot with a JD... and hope for the best in the future 😂 ... He's a good algorithm, always on time for class, never speaks till spoken to, says he knows Alexa too 🤣
It's pretty easy to see how this might happen. Some lawyers were talking, and one of them made a comment about how they used ChatGPT to write a document and save time. It was a simple or common case where ChatGPT had LOTS of training data in that area and was able to produce something that was perfect, but no one talks about that part. The lawyer raves about how it was 100% perfect. The other lawyers take note. Then, two months later, one of those lawyers is overworked and near a deadline because they took on too many cases to pay for the rebuild of their yacht in time for summer. One thing leads to another, they just need more time, and they think, well, it worked for that other guy. So they use ChatGPT, but this time it's on a case that's not very common, and ChatGPT starts hallucinating.
I asked ChatGPT to write a short biography of Max Lucado. I had to delete one sentence where it claimed that his first book was a children's book and gave a name for it.
The courts need to automatically fine and sanction any lawyer that uses ChatGPT. The bar associations need to send out notifications and make it part of their tests that use of ChatGPT should be considered illegal. Law schools need to start teaching that using AI to do your job for you is unethical and should be considered illegal and grounds for losing cases and being held criminally and civilly responsible.
Agreed. The laziness of relying on AI ought to be actionable: an attorney billing a client for spew from ChatGPT? Unethical... and dangerous. Some cases aim for an immediate verdict; others aim for an appeal, knowing the original jurisdiction leans one way. Approach and style may arrive at different nuances in either instance. More importantly, I don't think the public has the real AI at the moment. That's locked down, and we are served something of lesser quality. Think 5%.
By now, any lawyer worth 5 cents should know about the pitfalls of ChatGPT from continuing education or Steve L. They would know that opposing counsel has law clerks and interns whose tasks include checking case citations. So do judges.
It's just more proof that there's no limit to STUPIDITY.. 💯💯😂😂😂
A lawyer who tries to save time with ChatGPT, then bills as if they did their due diligence is a fraud.
@@jamesr1894 My mother taught English, and I hated that fact as a child. Now I am my mother and can barely restrain myself.
Your honor I would like to sue my attorney for not doing their job.
@tylerpetersen6226 It happens all the time. There are lawyers who specialize in malpractice suits against other lawyers.
85 percent of attorneys are incompetent.
To me, they are guilty of fraud. Their client paid for a lawyer's expertise, not the ramblings of a glorified pachinko machine.
Correct. If the lawyer is using ChatGPT... why would I bother to hire the lawyer? I can just use ChatGPT myself and cut out the middleman. I'm hiring the lawyer specifically because he can do something that ChatGPT can't do.
“Hi Honey! I finally got featured in Virginia Lawyers Weekly! 🎉”
“Congrats Babe. I’m so proud of you. 🥂”
It's all in how you spin it? 😂
ChatGPT is a child. It is seeking your approval and wants you to be proud, even if it has to lie or make up stories.
Not really...since it doesn't understand anything...its just recognizing patterns in its training data and responding based on probabilities of "truthiness"...basically it answers based on vibes, rather than facts or reality.
@@MuantanamoMobileYep. It has no idea when it can't find a good match in its training set and starts grasping at straws. It's designed to behave like that.
If you were to remove the possibility of hallucinations, there wouldn't be a ChatGPT.
Sadly, this isn't the stupidest legal story I've heard today. Disney lawyers topped it by arguing that the Disney Plus arbitration agreement applies to lawsuits unrelated to Disney Plus, such as the wrongful death lawsuit filed by the widow of a doctor who died as a result of an allergic reaction after being served food with an allergen (after being assured the allergen would not be present) in a Disney owned restaurant.
Yeah, the world has lost its mind. That lawyer should be reported with a bar complaint, and Disney sanctioned $2 million for even trying to argue that.
I read about that case a bit ago. I hope they get hammered for that.
I believe that took place at the restaurant Raglan Road in Disney Springs (a third-party operated restaurant).
I'm not trying to justify Disney, but from my understanding that would be like suing a mall for a restaurant not handling the allergy properly.
Instead of this specious argument, Dis should have settled this quickly, along with an apology and a lifetime pass. Pulling out this Dis+ subscription argument was the wet dream of some ahole atty. So here is the takeaway: before heading off to the Magic Kingdom, be sure to cancel your DP subscription to preserve your rights if you slip on an ice cream cone during the light parade.
Steve did another video on that today.
I have a friend who uses ChatGPT for EVERYTHING. I had three multimeters and said he could have one; he said ChatGPT told him the best one was the one on the right because it had the temperature sensor. Dead wrong, the temperature sensing one was on the left, so he isn't getting a temperature sensing one now.
Classic ChatGPT.
I hate it when all the toppings come off the pizza with the first bite, so I've been adding glue to my sauce to keep that from happening. ChatGPT.
Left or right from who's perspective?
@@regd809 From the current perspective, Pol Pot, Stalin, and Mao were far right extremists.
@@johnpublic6582 Looks legit. A person could have said that in the context.
But GPT4 is still not good at detecting jokes and sarcasm.
Food glue is a thing, as are MANY complex chemicals that can prevent the toppings from sliding off, but they're probably not used very often for pizza.
I'm waiting for the day one counsel's ChatGPT filing claims another's filing is ChatGPT. Especially when the second counsel's actually isn't ChatGPT.
That's not going to go the way you think it will.
Ironically, they are likely using OpenAI's utilities to check whether something is AI or not, which themselves utilize GPT.
@@ChrisS-oo6fl in this case, though, it's almost a necessity.
There are other ways, but the reliability is... questionable... either way
I tend to write with a moderately formal voice about half the time, and have been accused of using ChatGPT (which I do use to assist with research, but I never trust any point without confirmation). Frankly, as much as I love the utility, I don't trust it enough to rely on it yet.
Someday, maybe. But not for the foreseeable future
The appropriate sanction is to require counsel to stand in the corner of the courtroom while wearing a large red conical hat.
The appropriate sanction is to jail them for defrauding the court for a decade or two so that no one would ever think of doing it.
My brother is a retired lawyer and he told me that most lawyers are stupid. Seems like it here.
I know a number of lawyers, and this is an accurate assessment. Most of them have an undergraduate degree in something worthless (poli-sci seems to be popular) and then go to law school in an attempt to be able to pay back their loans. Paralegals, on the other hand, often know more about being a lawyer than the lawyers they work for.
I'd force anyone wanting to be a lawyer to first spend five to ten years as a paralegal. It would weed out many of the idiots and the lazy while providing a solid background in how the law is practiced. If they can't hack the job of a paralegal they shouldn't be allowed to even take the bar exam.
About 10% of people who pass law school fail the bar exam. But there's only a 1% unemployment rate among lawyers. Which tells you that there are a lot of people who barely pass the bar exam who are working as lawyers.
@@Maz632 You know what? Having to always shift gears is a pain in the ass. I think Lehto is on to something.
@@blairhoughton7918 Only 10%? Typically a professional certification has a much higher failure rate, even for those who study their ass off. I took one that had a 90% failure rate, and I was glad it did; it kept the incompetents weeded out. The exams need to be difficult so that only those that truly know what the hell they are doing pass. If you're not ready, come back when you are.
You only have to be smarter than the ppl who dont know to seem like an expert
This is getting as bad as the Hertz storyline
Hertz is one company. These are totally independent lawyers.
@@tourneynet8557 Disney did something even worse. They tried to argue that the arbitration agreement for Disney Plus applies to a wrongful death lawsuit in a Disney-owned restaurant (a doctor died after being served food containing an allergen after he informed the staff that he had a food allergy).
@@maschwab63 you're amazing!
Actually, worse. 😅, peace 💚
Can't wait for the day Hertz' lawyers get caught using ChatGPT to screw people over in court, that would really be the cherry on the cake
D'oh! Should be grounds for disbarring an attorney, plus criminal sanctions..
What an insane idea! Actually doing your job?!?!?! What's next? Filing everything properly? What an absurd idea.
These "attorneys" shouldn't be attorneys. They bring bad rep to all of them.
Every public defender ever.
@@johnwesley256 -- There are a few good public defenders. Problem is that they are so swamped with cases that they cannot devote the time and energy that their clients need. (I met one of these, once. He could easily make partner at any law firm in the area, but he is an idealist and thinks the system can be fixed.)
LOL. This is a mustard stain compared to what bad lawyers do.
@@TheRealScooterGuy You can't push on a rope. If he wants to fix it he needs to run for office and get over half the legislature and the governor in on his plan. But the people who buy politicians aren't interested in that. They're too busy trying to install an aristocracy...
Sanction is a fun word.
It means both
"a threatened penalty for disobeying a law or rule"
AND
"official permission or approval for an action:"
Sounds like a lawyer too cheap to hire a paralegal.
Can't blame him. Most paralegals are greatly overpaid.
Washington lawyers sign at the bottom attesting that they and only they drafted the filing. If anyone helped, they have to sign too, and anyone who isn't a lawyer cannot sign. I'm sure they cheat all the time, though.
@@jamessteele7102 I think paralegals should show this video to their bosses and demand raises.
Or it may be authored by a paralegal who used chatgpt and then signed by a lawyer.
So you got problems with capitalism?? 😂😂😂
This just shows that these lawyers don't actually do any work and push everything onto other people.
In the 80s, people were always running into bureaucrats who were certain that if it was in the computer, it had to be correct. This will continue to be a problem for years.
Same for books, newspapers, and gossip around town.
There have also been cases where people have erroneously been certified dead. But once it's in the system, it's next to impossible to change that status.
Same thing on YT now. I've noticed new channels popping up in my feed, where the subscribers are under 5k and the content is all AI generated. Video, voice. Channel owners too lazy to do the work themselves.
Most people don’t do much at work anyway, and many are overpaid. Most people are in what used to be called “make work” jobs. It’s just to keep the economy moving.
If the viewers are entertained by the content, I don't see a problem with it. The advertisers however may not like it since they're the ones paying for it. And anyone whose copyright has been violated won't like it (e.g. channels which basically convert a book into an AI voice audiobook). So rooting out AI channels is their problem. Not viewers'.
The only negative effect I see for viewers is that (like spam) you end up wasting some time checking out a channel before you can figure out it's AI. We've already come up with a variety of anti-spam tools for email. Honestly I think the first company to implement similar tools for video has a chance to displace YouTube. Google has been really lazy about improving the YT experience for viewers.
You can spot video and audio now, but once AI-generated material advances you won't know what's real or fake, and that's extremely scary. Think how our government or other authorities could abuse this technology.
You finally noticed, lol? However, many of the AI tools are also a good utility for those with disabilities to be able to create quality content. Especially synthetic voice generation; if they use the right resources, you cannot tell the difference between human and machine. The new models are now able to utilize emotional inferences in the voice. Most established creators are using AI-generated images. You can tell which ones utilize good models and which ones use terrible, old, open-sourced ones. You can't tell the difference between real and generated photos with proper use, upscaling, and model selection. You're probably seeing dozens a day that you don't realize are AI.
@@ChrisS-oo6fl Haha. Yeah, I heard the AI generated voice & images are so real now, you can't tell the difference. Very easy to use it for nefarious uses, like framing someone.
Signatures really shouldn't carry so much weight for laypersons... We are nearly FORCED to agree to things we have no way of understanding. Buying a brand new RV, a topic we've seen on this channel, you're told that it is all just "boilerplate"... Are we meant to go hire a lawyer to read over everything before we sign it?
"Never buy a putter until you've first had a chance to throw it."
-Leslie Nielsen
The ChatGPT probably took these fake cases from Law and Order episodes
I've been a computer professional, and there's a secret about computers that everyone needs to know: Computers are stupid. That includes ChatGPT.
I have friends who are experts in a variety of subjects. EVERY time we query ChatGPT on something we actually know, if the question is even a tiny bit hard ChatGPT gets it wrong. And when you point out the error to ChatGPT, it doubles down and condescendingly explains why your answer is wrong.
"Garbage in, garbage out" comes to mind.
@@dionc6 It's worse than that. At least when they're working properly, computers do what they're told to do, and nothing else. They can't think for themselves. But human languages have ambiguity built in.
When I see a comment that says "Your a genius," I know enough to mentally correct that to "You're a genius" and deal with it appropriately, however I define "appropriate" in context. If the comment is directed toward me I can Like it and respond "Thank you." Or I can Like it and say "It wasn't my idea. So-and-so said it first." If it's directed at someone else and I think the recipient of the praise is a moron I can simply ignore it and pass on. If I agree I can Like it and pass on, or I can Like it and leave my own comment expanding on the original comment. Those are human ways to react, based on human beliefs and human understanding. And based on the fact that I care enough to be reading and reacting to comments.
But why would a computer, or a cloud-based AI, be reading comments in the first place? I'm currently on my tablet. My tablet isn't "reading" comments; it's simply displaying them, because I told it to. It isn't responding to comments; it's providing a method for me to respond, because that's what it's been programmed to do.
If ChatGPT is reading comments it's probably because that's part of its training regimen, which does, indeed, make it vulnerable to Garbage In Garbage Out, but GIGO is about data.
ChatGPT is a program. It was written by humans, with the purpose of simulating intelligence, key word: "simulating." As a program, ChatGPT can only do what it's been programmed to do. It wasn't programmed to simulate a lawyer, just a human, and then it was "trained" on a whole bunch of YouTube videos. It knows nothing about the ethical rules lawyers have to follow.
Also, apparently its programmers never told it not to make things up - why would they? They wanted it to be able to simulate a creative writer, or a music composer, or an artist, and all of those activities involve "making things up."
So when a lawyer asks ChatGPT to provide case law that supports position X, and position X is totally wrong, what is ChatGPT supposed to do? You or I would say "Tell the lawyer that there is no case law supporting position X." But the lawyer didn't ask "IS there case law supporting Position X?"; he told ChatGPT to PROVIDE the case law supporting position X. So ChatGPT does exactly what it's been programmed to do; it gets creative.
If I was a judge, I'd automatically refer them to the bar for action.
If the court decides on summary judgment because your lawyer used ChatGPT, then would you, as the client, have a legal standing to appeal the decision based on inadequate representation?
Edit: Not a lawyer, genuinely asking
Knowing some attorneys, I could absolutely see them trying to make the argument that they didn't use ChatGPT; they used another AI, and they were never told they couldn't use that one.
Yeah... if my life/freedom were being held by someone, or I were paying this guy $500/hr, and I found out a program was doing ALL his work!!? Ohhhh, I'd be beyond mad! I'm gonna need another lawyer to deal with my new criminal case.
I laughed at your comment.
So nice to see how you remembered your mom in this video
Can the lawyer’s client file a claim against the attorney?
Sure.
Generally they just file a bar complaint though
I’ve known a few legal secretaries, lawyers ask them to do a lot, essentially their job. I could see an overworked legal assistant being given a task and with a desk/task list piled high using ChatGPT to help them keep up.
but all attorneys should read all of their filings
Or be in jail.@@dianayount2122
@@dianayount2122 That makes the story so funny. The excuse of two actual humans not checking their filings can only be topped by involving an intern, the intern hiring a guy on Fiverr, and the guy on Fiverr using ChatGPT.
That's a problem all companies face. It's why you have foremen and managers overseeing underlings, and multiple levels of people double- and triple-checking work before it's approved for release to the client. That way the only way bad work can get out is if multiple levels of your organization are being lazy in the same way. If you're not practicing these safeguards, you shouldn't be charging the client hundreds of dollars per hour.
I write legal documents for a living. I'm not an attorney but between that law firm's own database or LexisNexis it's not that hard to write this stuff because 70 to 80% of whatever they are writing has already been written. All you have to do is use what has already been written as a template for what you are writing. You can even check the citations to see how well they fit your case or you can use LexisNexis to find cases to use as citations. Also, because I'm not an attorney (or even if I was) every single legal document that we process goes through a review process before our Director signs off on it.
so...what is "sanctioning" ? Is it kinda like.. "Bailiff, whack his P P"?
Judge: You stand accused of stalking and voyeurism. How do you plead?
Defendant: Not guilty due to insanity
Judge: How so?
Defendant: I am CRAZY about that girl!
Judge: Bailiff, whack his PP!
This is from memory, so I am probably getting it wrong, but that was a funny skit. I forget where I saw it; it was a long time ago. Might even have been audio only. Dunno.
Sanction is a contranym; that is, a word in which one meaning can be nearly the opposite of another. To sanction in this situation is some kind of punishment for bad behavior, such as a corporation being sanctioned by a government entity for ethical violations.
Sanction can also mean approval or permission, as in: the executive VP acted with the full sanction of the board when he committed the violations.
ChatGPT gets stuff wrong and will hallucinate if it knows nothing about the topic. This can’t be good for lawyers either.
By definition, ChatGPT doesn't know anything about any topic. It only knows about language, language structure, word order, and the material it was trained on in a linguistic sense. It may _appear_ to have knowledge of topics discussed but that is just a language pattern it learned from source material.
@@mcwolfbeast But ChatGPT may have the same reasoning ability as the lawyer that filed its response.
So a computerized version of Donald Trump?
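To make the point in this thread concrete, here's a toy sketch of what "it only knows about language patterns" means. The table of probabilities below is entirely invented for illustration (a real model has billions of parameters, not a lookup table), but the mechanism is the same idea: pick the next word based on what tends to follow, with no concept of whether the result is true.

```python
import random

# Hypothetical, made-up next-token statistics for demonstration only.
# A real LLM learns something like this implicitly, at enormous scale.
next_token_probs = {
    ("the", "court"): {"ruled": 0.5, "held": 0.3, "found": 0.2},
    ("court", "ruled"): {"that": 0.7, "against": 0.2, "for": 0.1},
}

def generate(context, n_tokens):
    """Sample likely-sounding continuations; truth never enters into it."""
    tokens = list(context)
    for _ in range(n_tokens):
        dist = next_token_probs.get(tuple(tokens[-2:]))
        if dist is None:  # no pattern learned for this context
            break
        words, probs = zip(*dist.items())
        tokens.append(random.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate(["the", "court"], 2))  # plausible-sounding, not fact-checked
```

Notice that nothing in the sampler checks a case database or any source of fact; "the court ruled that..." comes out because it is statistically plausible, which is exactly why confident-sounding fake citations are possible.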
Genie is out of the bag on that one. The courts will have to come down hard or it won't stop. If they come down hard it will get less use, and the app makers will have to make it work right. Funny, though, the first thing it learned was: if you can't dazzle them with brilliance, baffle them with bull.
I've known the baffle-with-bull lawyers, and often they did better than the rational ones 😮
Genies come out of bottles (something that once started can't be stopped), cats come out of bags (revealing something that was confidential/secret).
ChatGPT won't be the tool that is eventually used for this. It's a "large language model," whose purpose is making language look like other examples of language it has already seen, sometimes at the expense of accuracy. But you can be sure that LexisNexis or WestLaw or some other company in the legal field is working on an AI for legal purposes that will stress accuracy first. The subscription fee for the first such app will be super-high, assuring that only lawyers will be able to afford it. It will likely be able to create any brief supporting any legitimate position, following the rules and local rules for any court in the US. (This is a guess, but I'll bet it happens.)
@@TheRealScooterGuy I find this difficult to believe because LLMs can't reason.
@@jessecarliner7733Oh, you mean like putting the toothpaste back in the tube ?
Do we now have to ask an Attorney we are possibly considering hiring for legal work, "Do you use ChatGPT?"
"Sure."
"Do you check its work?"
Would you actually expect an honest answer to that question ?
Another dumb move by the law firm of Dumb, Dumber and Stupid!
"Dewey, Cheatham and Howe"
@@wingatebarraclough3553 That's the name of the smarter law firm. 🙂
Lazy, Sleazy and Ballzy LLC
@@edmundwest5636 Ah, the crosstown competition.
Five days in federal jail for contempt of court would solve the problem once and for all.
Who knew that "do your own homework" was life advice?
Probably not, but it is a good first step.
I bet you VOTED for your masters then cry a lot.. 😂😂😂
@@kimlground206 I'm pretty sure that it would become very uncommon.
@@jpnewman1688 Go away troll-thing.
7:10 Steve's Mom is a librarian. Now we know how he got all those law books behind him. 😱
I see it as Chat GPT doing us all a favor and getting rid of the incompetent lawyers.
Steve, out in the programming world we're calling GPT "Gippity" nowadays, because it just sounds derisive, and GPT should be derided.
Derided?
GPT itself - probably not, it's a very useful *tool* for analyzing language patterns or re-phrasing stuff.
Using it as a substitute to one's brain - absolutely yes!
@@jwhite5008 Well yes, the problem is its creators are selling it as a substitute for brains. So as a company/product it is worthy of being derided, because its creators and marketing make it so.
Thank you Professor for today’s lesson👍
I'm a lawyer & use AI only as a finding tool (a very fast one) to ID seminal cases & provide a more advanced starting point for my actual research & writing. It's still more AU than AI.
Perfect (and appropriate) use of the tech.
AI is a tool, not a lawyer replacement, but is still better than over 95% of public defenders.
Like using a wiki to find the source so you can start the research.
The problem with large-language-model AIs like ChatGPT is that they are more interested in correct construction than factual correctness and will simply invent citations rather than find you existing ones. Live-search AI would need a different strategy. And as the lawyer, you should still be clicking and reading every cite to be sure it's real and appropriate, because opposing counsel is going to open you like a can of sardines if they find a mistake.
This "new" AI really just gives a sloppy summary of the same search results we've been getting for 30+ years using "natural language" on legal search engines, which has performed about as badly.
Competency as a Darwinian selection pressure.
Be competent or perish.
I believe I heard a story just yesterday that OpenAI is developing some sort of "watermark" to show stuff that ChatGPT has written.
That would be smart, except how does it work in text? Also, people should remember that AI-generated content can't be copyrighted. So unless the person taking the AI content does something to modify it significantly, it's not theirs, and ChatGPT can't own anything.
If this was true, someone else will create an AI to remove the watermark.
Any of that can be gotten around lol.
Watermarks are easily removed. The fundamental problem isn't that it was generated with ChatGPT... the fundamental problem is that the quality is bad. If ChatGPT actually created professional-level and correct output, then it wouldn't be an issue.
@@blairhoughton7918 they aren't trying to own anything, simply let others know that it created it so situations like this don't arise.
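For anyone wondering how a text watermark could even work, published research schemes (greatly simplified here; this is a sketch, not OpenAI's actual method) bias generation toward a pseudo-random "green" subset of words, then check whether a suspect text uses green words more often than chance:

```python
import hashlib

def is_green(prev_token, token):
    """Deterministically classify roughly half of all tokens as 'green',
    seeded by the previous token (a simplification of published schemes)."""
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] % 2 == 0

def green_fraction(tokens):
    """Detector side: ordinary human text should hover near 0.5, while a
    watermarked generator that prefers green tokens scores much higher."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(is_green(prev, cur) for prev, cur in pairs)
    return hits / max(len(pairs), 1)
```

This also illustrates the thread's point about removal: paraphrasing or reordering words changes the token pairs, which is why such watermarks are easy to wash out.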
It's only a matter of time before we get our first lawsuit where a sanctioned lawyer sues the company that developed ChatGPT for getting them sanctioned.
Hopefully, it'll happen in every circuit so the courts can develop a strong precedent, like "Qualified Immunity"-level precedent
ChatGPT making "stuff" up! This is why it's great for general corporate documents, such as in advertising and marketing areas, where that's been the norm since long before AI became a thing.
Maybe that's where Trump has been getting his speeches lately...
@@kimlground206 Maybe Kamala, too.
@@Zwischy Wouldn't surprise me. Maybe she just has a better proofreader.
LegalEagle is currently advertising a "legal AI assistant" to help review and draft contract forms. It was interesting that at least some people felt a certain amount of confidence in the product. Perhaps AI is more reliable for that type of "boilerplate" work. Technically, it's still reviewed by a lawyer, but technically, so are the court filings.
That does seem like a better use of the technology. A contract is simply taking the terms that have been agreed to (or terms that one hopes the other party will agree to) and putting them in writing. This technology would facilitate that by allowing the wording of that contract to be polished to look better. There would be no reliance on facts that could be fabricated.
@@TheRealScooterGuy And the lawyers would still be 100% on the hook for making sure the final version was correct.
I wouldn’t trust GenAI to draft a contract; too much is at stake. At best, it would imitate the dysfunctional mess of traditional contract language.
If an attorney is using ChatGPT, they are stealing from their clients.
If ChatGPT were returning correct results, then using it would save time, which is the opposite of stealing from clients, unless they were still billing for all the time they would have spent doing it manually. But it doesn't, so yeah...
@@blairhoughton7918 We already know they would bill all the hours no matter what, but yeah, it's theft in multiple ways. Billing someone and losing their case because you didn't do the necessary work is theft as well, for sure.
@@blairhoughton7918 Are you from Maine? I know someone with the same name.
@@farmer8102 Not I. There's a few of us around, weirdly. Makes me wonder if the Dave Smiths of the world even consider themselves to have an identity.
@@blairhoughton7918 Well, you also have to remember that without ChatGPT, the lawyer would have been forced to pay the salaries of paralegals, clerks, and interns with the money you give them. But if they cut out the paralegals, clerks, and interns and just use ChatGPT, then they don't have to pay for them and are pocketing even MORE money than they normally would. I would 100% call that theft and fraud, even if the AI worked perfectly.
Eventually we'll get a story where a lawyer uses ChatGPT to respond to the court regarding their use of ChatGPT to file an earlier brief. I'm going to enjoy that one ...
Also, if I were a lawyer, I would be thankful that ChatGPT is demonstrably unable to do my job correctly. If the day comes when it can, a lot of people in the legal field will be out of work. So I would be loath to hasten that day ...
That day will come. Maybe not the product known as _ChatGPT,_ but there will be some legal AI created, with expensive subscription fees, that doesn't make up its own caselaw, and it will be game over for anyone not using it. I'm guessing that LexisNexis or another company already in that business is working on such a product.
This is why some people theorize that right now is the golden age of AI. And it's only going to get worse from here. The current AI was trained on datasets which are 100% (or damn near) human-generated content. Future AI will be trained on datasets which are increasingly "polluted" by AI-generated content which people tried to pass off as their own work. Leading to a feedback loop of nonsense being trained into future AI.
@@solandri69 Interesting thought
@@TheRealScooterGuy ChatGPT Esquire. ;-)
@@TheRealScooterGuy I think it will be Gemini. It is pretty good when it comes to getting references/citations right.
I hereby sanction the word "sanction", and furthermore, I will use it to define itself.
Look for Ben with the low flying owls.
Maybe this will help find the lazy lawyers and put them in their place.
Man...if I were the judge for this, that attorney would be facing the *stiffest* penalties that I could levy against them.
How can an attorney be held accountable for professional conduct when clearly this was not professional at all!!
Too late. The genie is out of the bottle.
Same for art, music, mathematics, etc.
☝️
I once worked for a medical clinic.
The Dr. didn't know what the patient had and said, "Excuse me while I reference my medical books."
The Dr. came to the employee area, got on a netbook, and googled the symptoms. Then the Dr. went back to counsel the patient.
😂
@@copcuffs9973 the modern medical field is so large and deeply specialized no human can presently master it all.
@@copcuffs9973 Medical malpractice is the 3rd highest cause of death, it kills over 250,000 Americans every year.
@@copcuffs9973 If the doc isn't a specialist in a certain field, e.g. a cardiologist (heart surgeon), they might look up a suspected cancer's symptoms and then, when sure, order specific tests and refer you to an oncologist (cancer doctor).
@@copcuffs9973 That's why one is referred to a cardiologist for heart issues, not a dentist or a neurosurgeon, and to a psychiatrist for mental ailments, not an oncologist or ENT specialist.
Sounds likes he is now super qualified to be a judge!
Eventually there will be an AI that functions as a compendium of all law knowledge. It's coming. Lawyers using baby AGI are playing fast and loose, but rest assured the days of studying law are numbered. Seeing how close AI companies are to solving hallucinations and improving reasoning ability beyond human levels, the justice system is in for a treat. A homeless person will have the equivalent legal representation of a Fortune 500 company, without the political influence. A lot of lawyers will say otherwise, but they're all wrong. People usually won't consider scenarios where the outcome of their lives is worse. "The market won't crash." "If they take all the jobs, what will people do?" "They can't do MY job." I woke up when I discovered AlphaFold. There used to be an entire industry around protein folding. It took a PhD to figure out how one protein folded. A single AI figured it out.
Another great commentary on ChatGPT... Thank you Steve!
Ben behind the top right corner of silver play button .
Well... the state of Washington is having such a problem finding attorneys to work as public pretenders that they are considering not requiring passing the bar to become a licensed attorney.
Lawyers thought "Hey, If the kids can get away with using it for homework..." 🤔
Lol, lawyers using ChatGPT is like saying, "Hey, I can be replaced by AI!"
And they can to a large degree.
@@osgoodblack6464 The ones that think they use chatGPT can be replaced already, but it's got a few more years to go before it can get stuff right.
@@baronvonslambert It'll come up with the right answers within a few years, it's literally a matter of time before a model can be trained on trillions of points. We know what we need to do, we just need a bigger boat to do it and they're currently building it as fast as Nvidia can pump out units.
Wait til version 6.
Can? AM and ARE are the preferred verbs here.
This is why when people start talking about being "highly educated" I always shake my head. There's smart, and then there's smart. Education is just learning things; it does NOT make you smart. You can be highly intelligent and just not educated. Or you can be highly educated and not smart. There's a small percentage that is both.
And I will state categorically, a smart person is better than a highly educated person. I personally know some people who hold doctorates who are as dumb as a box of rocks when it comes to anything not in their highly specific area of knowledge. And by dumb as a box of rocks I mean unable to do basic reasoning applied to daily life. You know, problem solving.
Ben under low flying owls sign.
ChatGPT is a gift that keeps on giving. Using an LLM to write court documents isn't an accident. It's misconduct at best.
Lawyer-GPT desperately needs to happen!! It can easily be done.. but it won't..
Yes, ChatGPT has been trained on law libraries, but it has also been trained on Reddit. If you don't know what you're getting into with that, better not to get into it!
Just shows there are good and bad people in EVERY Position, Office, Job, etc...
Maybe the bar associations need to be held civilly responsible for ignoring the offenses of repeat-offending lawyers.
Well if they would teach it to be honest, there wouldn't be a problem
Simply: whatever wrote the document, by signing it, you make it yours.
I think most attorneys would consider this to be the stuff of nightmares -- the equivalent of getting called on the carpet in front of a judge in your underwear. The mere idea of filing something I couldn't vouch for is terrifying.
11:14 If the judge decides to make a ruling/dismiss a case based on an attorney's misconduct, does that attorney's client have any recourse?
Sue the lawyer for malpractice.
About the case, it depends. If the case is dismissed, the client might be able to refile with a competent lawyer.
Delightfully appropriate quote by the woman at the close of the video. My hat is off to her. 😂
A year ago I was talking to a lawyer who boasted about how much he loved using ChatGPT to generate legal reports. He recently got promoted to judge.
There are tools and ways a lawyer can legitimately use "AI" (large language models) to aid in their work - drafting their briefs is not one of them. You can, however, draft your brief the normal way, then use an LLM to punch up the impact of the prose. You still have to be careful, of course.
Kids, if you know how to read and comprehend what you read, you will be ahead of most adults when you come of age.
I believe POWs are treated better than whistleblowers.
This is apparently an irresistible temptation for less-than-quality attorneys. As such, it might be a good thing, in that it will weed out lazy, overbilling, incompetent attorneys. Who would knowingly hire an attorney that works this way?
“Be careful, this machine has no brain, use your own.”
Never enough time to do it right; only enough time to do it again.
Threat of disbarment should fix this
I so wanted a laugh at the end of the night, now I'm just amazed at how stupid these lawyers are. lol I guess that's a laugh in and of itself. Thank you, good Sir, and goodnight.
The sad thing is that large law firms will have good systems for this, making it even easier for them to overwhelm smaller firms.
I find it interesting that so many lawyers are getting caught like this. Before these stories started popping up, I wondered if Courts regularly reviewed the arguments being made in filings. I think I have my answer to that question.
Well, I've seen cases cited that one cannot trace, or that are nonexistent. But as a professional courtesy, I don't raise the issue in high tones: "Your honor, I couldn't trace these cases, so I can't comment on them; yet they cannot have much significance, otherwise they would be quoted in detail by the opposing party." It works well in a jurisdiction where case law is only advisory.
THAT AIN'T HAPPENIN'!!!
Best to just use a chat bot with a JD,.. and hope for the best in the future 😂
.... he's a good algorithm, always on time for class, never speaks till spoken to, says he knows Alexa too 🤣
It's pretty easy to see how this might happen. Some lawyers were talking, and one of them made a comment about how they used ChatGPT to write a document and save time.
It was a simple or common case where ChatGPT had LOTS of training data in that area and was able to produce something that was perfect, but no one talks about that fact. The lawyer raves about how it was 100% perfect. The other lawyers take note. Then, two months later, one of those lawyers is overworked and near a deadline because they took on too many cases to try to pay for the rebuild of their yacht in time for summer. One thing led to another, they just needed more time, so they thought, well, it worked for that other guy. So they use ChatGPT, but this time it's on a case that's not very common, and ChatGPT starts hallucinating.
Sounds like grounds for a malpractice suit in the making…
Laziness knows no bounds.
I asked ChatGPT to write a short biography of Max Lucado. I had to delete one sentence where it falsely claimed that his first book was a children's book and gave a title.
The courts need to automatically fine and sanction any lawyer that uses ChatGPT. The bar associations need to send out notifications and make it part of their tests that use of ChatGPT should be considered illegal. Law schools need to start teaching that using AI to do your job for you is unethical and should be considered illegal, grounds for losing cases, and grounds for being held criminally and civilly responsible.
Agreed. The laziness of relying on AI ought to be actionable: an attorney billing a client for spew from ChatGPT? Unethical... and dangerous. Some cases aim for an immediate verdict; others aim for an appeal, knowing the original jurisdiction leans one way. Approach and style may arrive at different nuances in either instance.
More importantly, I don't think the public has the real AI at the moment. That's locked down, and we are served something of lesser quality - think 5%.
Imagine what happens when the courts don't catch it and decades go by...
Ben drawing everyone's attention to Steve's 100,000 subscriber plaque.
We have to sign it to find out what's in it.
Trying to replace the paralegal with an operating brain with buggy software is a loser's choice.
Ben’s backing up the 100K subs award today.
(fine detail) "backing up to..." 😉🤐
Mornin' Bill
Also well done, early drop and you still got me.
@@Bobs-Wrigles5555 Ben’s ensuring Low Flying Owl droppings don’t soil the plaque, hence, ‘backing up’. 😂
G’nite Bob.
An attorney that uses chatgpt deserves to have their license stripped, especially if they did not read over it first.
Welcome to your future lawyers and doctors. Chatgpt use is rampant in universities now.
Who knew that mixing together facts, lies, and fantasy and calling it intelligence would mimic the human experience.
By now, any lawyer worth 5 cents should know about the pitfalls of ChatGPT from continuing education or from Steve L. They would know that opposing counsel has law clerks and interns whose tasks include checking case citations. So do judges.