terms and conditions were so great before because they were just Rules on how to use the service, like youtube for example, do not post innappropriate material, etc etc it was a guideline on how to behave on the platform, but now its just blanket defenses for companies to not be responsible for anything
@@karazakiakuno4645 Depends on who “us” is. If you mean the companies, the should act ethically and make the terms defensive against vexatious behavior, but not against normal, good faith users. If you mean the judicial system, they should review all lawsuits and dismiss clearly vexatious ones before even notifying the defendant about it.
@@TomsBackyardWorkshop Fair. I meant to say that certain topics could be specifically beneficial for different groups of people, while a series like that would also be providing a general public service.
Contracts of adhesion used to be disfavored. Now they're standard everywhere. And they're extremely unfair to consumers. These contracts usually allow the terms to be changed without mutual agreement as well, which I learned in law school is what is known as an illusory contract.
@@Irilia_neko American EULAs and TOSs that limit the consumer's rights aren't really enforceable as far as I know but the companies still push them and I really do wonder whether they have been pressure tested in the courts.
Chatbots will be responsible for millions of deaths in the coming years as it erodes critical thinking and social skills. Life is about get get more wild, and more unequal due to the skills gap between the "pre AI" generations and the "AI generation".
@@Gnidel the model itself would be a black box, but you could still see how it was trained, what kind of reinforcement was used, how the model interprets the training data, what data it has been fed, etc
You can find previous versions that aren't private, just need to use cloud computing to get it to learn as much to the point where it would be equivalent to the current version, private it for yourself, boom, your own chatgpt you can bias with whatever info you want :p
Here in the US where corporations own most of the politicians, corporations can do just about anything the people running them want to do. It is foolish, yet here we are.
Fortunately Germany can't force companies based elsewhere to edit their terms. But they sure as hell can ban ChatGPT or fine OpenAI out of the German market. Will you be glad as well when that happens?
In the US, massive portions of ToS can be thrown out of a court case at the whim of a judge, and the judges know nobody reads the ToS so the good ones apply a semblance of "reasonable person" standards to the matter. This means that a lot of this stuff isn't worth the paper it's printed on.
@@fss1704 Nobody forces you to use it, so why do you want to spoil it for others who wish to use it? How would you feel if some jerk came and destroyed some service that you actually find helpful?
The company I work for recently paid the local news station to do a promotional spot which included a TV interview and a printed article. The printed article was full of generic techno babble and described a bunch of things our company _doesn't_ do, and made up names for the owner and some other people. Our PR guy immediately called the news station out on their use of _artificial intelligence_ to write the article, to which they responded with "We've been experimenting with ChatGPT." (They then pulled the article and had a living person write one that was actually about our company.)
ChatGPT just combines information it has read before and makes it look good on the surface, but once you read the content, you realize there are a lot of falsehoods and imaginary things that were added. I recently asked it how to unlock something in a game, and it gave me completely wrong and imaginary information, but the way it wrote it, it really looked like it knew what it was talking about, except it was 100% false.
It depends. GPT-4 is actually bright enough to realize it's made mistakes and correct for them if it has access to the tools or information to do so. 3.5 and prior are not, at least not reliably.
what makes this more rediculous is they are essentialy blanketing them from all responsibility for their AI program which is giving inaccurate sources and then blaming the user, like then what is the point of your software if its gunna give incorrect answers, maybe go back to the drawing board and come back when it can give accurate answers
It should be illegal to say that a person can't use the courts to settle a dispute. That's literally the purpose of courts. Arbitration is fine as an informal process, but litigation should never be off the table before it's even used.
@@aidankelley2696 It's not quite that simple, this is not a search engine. You can literally tell it to give you false information, lie to you, to try to confuse you on purpose or whatever you can imagine.
Honestly, it sounds like quite a lot of agreements nowadays (I always read every agreement I agree to). Most companies seem to just stick in every possible term they can that favors them, and when one comes up with a new one all the rest copy it. Why not? They know almost nobody reads them and even if you do you can't exist in society without accepting such terms.
its good to know there are still laws that make certain terms and conditions illegal and wont ever hold up, but there definitely does need to be some more legistlation on these other terms and conditions companies are getting away with, because its getting mind boggling, its to the point where these companies are making themselves impossible to get sued and not responsible for anything and making the user responsible, like how is that fair, its YOUR product, if its not working properly, that isnt our fault....
of course you can "exist in society" w/o agreeing to these things...just don't use the tech-it's not a life and death situation, most of us just keep choosing convenience...
I remember terms of service for software that disallowed, among other things, copying it in whole or in part into memory. I called them on this asking how was i supposed to use it. The eventual answer from high up was that their lawyer didn't understand how computers work.
If you run the executable, it does not copy itself into memory, what it will do is map addresses into memory to the executable. The OS might load some of the executable into memory as needed. But none of this is "you" doing those things. All you did was run the program. It's because you can create your own loader that will copy part or the whole program into memory, then hack it and modify how it works.
@@OneLine122 After earning multiple degrees in Electrical Engineering and Computer Engineering and having worked with both the hardware and software since the very early 80's I agree that is how computers work now BUT I never said when this occurred. This was well before memory mapping and sparse files or memory mapped files were implemented in any commercial systems. Typically when you ran a program the OS would initially load the entire executable into memory. That was the basis for me questioning the section of the terms that it could not be copied into memory. Furthermore, this was at a time when a 3 megabyte hard disk was the size of a contemporary desktop computer and a 30 megabyte drive was the size of a fileing cabinet. Neither were on anything smaller than a mainframe or minicomputer. That particular software was the actual OS and was both supplied and ran on 5 inch floppies. The licence agreement as it was written even precluded archival backups to prevent loss of the OS. I don't know if you ever used a 5 inch floppy disk but they were notorious for wear. Now licence terms are better crafted to allow use on a single machine or in some cases on a single machine at a time, but take into consideration that there may be multiple copies in existence for administrative reasons.
My sister bought her husband an "Alexa type" of product a few years back. BIL is a VP at a major defense contractor. As he began to hook the product up, I quietly recommended he read the terms of service - in detail - first. After a couple of minutes, I heard him say "OMG these people are nuts!" under his breath. He hooked it up for a few days and then it quietly disappeared. All of these electronically connected products are so incredibly insidious.
@@fss1704 the remote have not the TV so u can have commands Anyhow the tech is not the issue the companies are There are already FOSS ( Free and open source solutions) but people don’t use them
@@ko-Daegu My TV (not remote) has a microphone (now air-gapped on mine...unless there is a second one). You can google if you want. When FOSS has hardware options....I will pay more attention. My last experience with them/it was like trying to join a Cult with an entrance exam.
There really needs to be limits placed on contracts that try to eliminate a persons right to a redress of grievances, exempting themselves from culpability for any harm they cause, and attempts to make another party pay for their defense regardless of expense, without choice from the person who is supposed to "defend" them.
@@VampiricBard, Accountable for what? Creating a random word generator, letting you use it for free, then expecting you to accept all responsibility for what you might choose to do with those words?
Well considering that an issue could arise from anywhere involving people from anywhere on Earth... I seriously question their ability to just enforce this on 100% of human population.
There is already a limit. All these End User Licence Agreements (EULA) and basically any contract isn`t worth anything once its content is against the written law. You can`t forfeit your right to sue by signing a contract for example. Because the right bringing your dispute in front of a court is given to you by states. No company or contract can deny you that, no matter how many paragraphs they write about what you can or can`t do. What you can and can`t do is decided by law. Quite often the entire EULA is declared invalid by one mistake within these contracts. All you have to do is putting these EULAs to a test in court and more often than not companys look really stupid and scared all of the sudden if you ACTUALLY opened a court case.
7:26 I’m confused for that mean if: User; types into ChatGPT as a UA-camr and ChatGPT answers something Person A: watches the video doesn’t like the answer -> so he sue OpenAI OpenAI: defended themselves with their lawyers and won Now who pays the for lawyers Person A or the User ?
Honestly this insane terms & conditions sheet is probably the only way something like chat gpt can even exist without being destroyed by constant litigation
I've been playing around with ChatGPT ever since the mayor episode. It's pretty amazing. I've only used it for personal entertainment to test its limits. I've had it write a children's book, outline a dissertation on microelectronics, write some code, some poems and give me information on various things I already know to see how accurate/inaccurate the information is. Every essay end with "In conclusion,..". I'm glad I'm not a teacher. Many school papers will probably be written with this thing.
mine stopped working after i kept asking for recipes for meth, how to painlessly end ones life, ukraine biolabs, and modern nazis. it wouldnt even give lists of legal drug analogs. all of this info is available on google in two seconds. its handicapped AF. i did have it write a 10000word essay that was kinda okay.
@metalsign7015 The problem with that is the human won't necessarily learn how to write well, or how to fact check. Of course the second part is a problem already. Cheers.
@Metal Sign Nah, you can't do that at that level. Maybe for college, I guess. But, it's still cheating. But for kids? You've got to teach them how to do it themselves. The nit and gritty. Or they simply won't learn and we'll have half of a generation who can hardly read and write. Something that's already a problem to some point, I'd imagine.
Just a heads up to anyone out there agreeing to legal agreements. Always do a search for the terms "arbitration" and "opt out." Whenever you're agreeing to these things and they have an opt out scenario, utilize it immediately.
Thank you for explaining that. Terms of service for me are impossible to understand. Amazing so many companies feel they shouldn't be responsible for anything. Its obvious lawyers draw this crap up, I'm surprised good lawyers or congress hasn't made it a law that Terms of Service bullet points have to be made in Bold Letters and initialed before purchasing or downloading.
ToS's for software are designed to be giant filters and make it uneconomical to sue on small scales. If someone with resources actually presses the issue over a real injury, then all that "you agree to give us your firstborn and can't argue otherwise" crap gets tossed and everyone settles real fast.
More amazing is that people feel that companies should be held responsible for their (the customer's) own stupidity, even in cases where they paid nothing to the company.
My wife and Steve are probably the only 2 people that would/have read the terms. My wife hates doing anything because the terms on everything are always horrible. "It says here that..... I'm not signing this." Me: "If you don't sign it, they won't delivery the baby...". lol
Your wife is smart. And she can cross out sections that she doesn't agree with. I have done that and while someone might make a comment, they never refuse the signed contract.
@@thisorthat7626 Forgive my ignorance if this is something that's straightforward for a technically proficient user, but how do you cross out sections for ToS online? Yes, I'm a Luddite :)
Most of the terms of service and/or privacy policies are so incredibly unethical towards the consumer (try reading the ones you get from in-person services too) that one would probably never be able to use anything if they truly wanted to agree to these terms... 😞
Which is why, most ToS's are illegal, and you can easily get out of them through unfair contract laws. ToS are so heavily 1 sided, they should NEVER be legal. I would argue they aren't even contracts by law, with how illegal they are, as they are 100% sided against you, and you have no way to negotiate against them. What's more, you can BUY a product, like a video game, and then the company FORCES you to Accept a ToS just to ACCESS that video game. And if you click no, the game shuts down and you lose access to a product you bought and own. Too bad Lawyers, and Lawmakers are a bunch of old people with no understanding of Technology or how abusive companies are.
If you want " Bad " Terms Of Service " find any random online, kid friendly " game " that lets users add content and do not use the software, just read the terms of service. There could be several videos on why sections should be against the law, why it is unethical and my favorite ' if you created it prior to using their service, who actually owns the intellectual property ' as potential hour plus long videos for almost any given site.
@@Jirodyne That's often predicated on whether the consumer can reasonably be expected to read and understand reams of legalese that can get bypassed with a simple "I agree" click. (Hopefully not, but it depends on the individual court.) But if you're one of those "old people" you dislike, then the court might very well assume that you read and understood the terms of service. No getting out of liability free for you!
Years ago, I had an oil furnace replaced. The installer handed me the work order to sign. I read all of it, front and back. He told me in his 13 years doing installations, I was the first person to read the back of the work order.
Do you sign the ToS before or after paying for it? Something I've noticed with a lot of ToSs, especially in games, is that they are not legal contracts as you sign them after you have already bought the product. They then hold your money hostage until you sign which makes it a contract under duress.
Interesting argument. But currently the law in the US says EUAs are binding. In theory you have the right to return the product. But most retailers say once software is opened, it is non-refundable. Which contradicts the EUA.
If you do it when you sign up, then you do it before you pay. You can sign up and use it for free. The paid version just gives you priority (you can use it when the system is busy), faster responses and early access to new features.
They already have a disclaimer that what it can generate may not be accurate... so all it really does is set up a loop of "if your offended and want to sue, you can, but you have to pay out to yourself + lawyer fee's and we choose the process."
years ago I ordered some software from a small developer, as always I read the EUA. Near the end of the EUA was a statement "since you actually read these, here is some free software you (links provided)" Pretty good stuff too as I recall.
Thank you for such an informative video Steve. No way would I actually understand how they write these terms. We accept so many lengthy terms all the time and have no clue what we just signed. Even if I do read them, I won't be able to understand them for the most part. Look forward to more of these types of videos. You do a great job.
So it sounds like if you're going to use that service you need to do it under the protection of a LLC so you can have the LLC file bankruptcy when you get those bills
I was thinking the same thing. The big brands use this same tactic to limit access to suing the parent company and then declare bankruptcy of the spun-off LLC. Johnson & Johnson did this with baby powder in Texas and left women dying from their product with no recourse. I will never knowingly buy from J&J ever again because of their unaccountable acts and the intentional deceit unleashed on their loyal and trusting customers. Social corporatism is here to take over the world at this rate. Tomorrow's consumers will have no power and be happy.
Thanks for making this. EULA's can be very hard to read and super long. Even when I take the time to read one half the time I don't understand alot of it. Having someone like you in the know highlight and break down the important bits is very helpful!
not really; you just need to invest a little bit: I recommend to just take your time to read a few, and look up words you don't know or non-legalese explanations of these sections. Like many legal documents, there are all based on templates so most EULAs and TOSs are (subjective guess) probably over 75% the same - not accounting for replaced variables, like the name of the company or site, etc. Ever since I was a teenager, I "read" almost every EULA/TOS and Privacy Policy for everything I sign up for. It's easy because depending on the service, I'll be focusing on looking for a specific section (warranties, nature of service, data collection policy, 3rd parties, etc.) while also skimming to make sure how standard the other sections are, and taking note of any unique sections. It rarely stops me from agreeing (unless I'm checking out various options, it can dissuade me and make me choose to try another option first instead) , but if I see anything not good and still choose to agree and use the service/application, I limit my expectations and use of the service accordingly. It only adds a minute or two to the sign up process and seriously worth it for the awareness - it's a contract you're signing after all, so it literally is an obligation and no-brainer to be conscious about it.
if a person never uses ChatGPT, never agrees to the terms, but ChatGPT engaged in defamation, libel, slander, etc. against them, the terms of use do not apply to their lawsuit.
Yes it does. It applies to the person that used ChatGPT to create and spread the defamation that harmed you. If you try to sue ChatGPT, the TOS allows them to pass the entire cost to whomever created the original defamation. ChatGPT itself cannot defame you, there needs to be a person telling it to create these things.
ChatGPT can't engage in any of those things, it can only answer questions, and only in private conversations. libel requires knowingly and maliciously publishing false information. ChatGPT can't know it is wrong, can't be malicious, and can't publish anything. That seems like it couldn't possibly be any more airtight, so the TOS isn't even necessary there. The publisher is the person who read the disclaimer that the information may be false and published it on social media anyway.
@@ThresholdGaming > ChatGPT itself cannot defame you, there needs to be a person telling it to create these things. From a technical standpoint, this sounds totally fair. The machine doesn't actually think, nor does it act on its own, it's just a language model. Passing it on to the person that misused the data it generated is appropriate. Otherwise it'd be like Microsoft being liable because someone compiled ransomware in Visual C++.
If GPT is dynamically changing, constantly being seeded with new information, can it ever really be provided "as-is"? You'd be agreeing to terms based on an unknown future version of the product... It seems to me like they'd need to get you to agree to the terms every time it updates/receives new info. But I don't know jack about law, so I'd love to hear how my lay interpretation is wrong.
It's not dynamically changing. As a precaution against unpredictable/undesired behavior, each version is put into a "frozen" state once it's been trained and tested, and does not continue to learn and evolve as people use it. That's why they release numbered versions which are separate from one another. So if I understand you correctly, I believe the answer to your question is that yes, it can be provided "as-is."
@@CYB3RC0RP I think all that is true of the LLM component, at least up through the pretraining phase, but the developers are constantly working in the background to improve reliability, correct major errors, restrict potential abuses, etc. I don't recall ever being notified or prompted to agree to any of those updates Also, it's unclear to me how much the plugin system muddies the water
Easily mitigated by running output through a plagiarism checker, and you are correct that this risk drops precipitously once an LLM reaches a certain parameter size. People will still be angry that an LLM "read" their work and can generate a novel summary based on their effort, but in my mind this is similar to what humans do all the time.
Jesus. I'm so glad you parse through these things so we don't have to. My brain just won't pay attention when I'm reading through legal stuff. Honestly that is why I subscribed to you many moons ago. You are amazing. You always give us the good juicy downlow. Thank God for this channel
This is an excellent video for many reasons. You are one of the few who are truly a Legal Journalist, for lack of better words. Obv some stuff falls on ur plate. However, it's obvious you do your due diligence to get the most accurate info.
It's really hard for me to believe that an indemnification clause in a ULA would hold up in court. Like just because they put something in a ula doesn't mean it has teeth. If I get a job with some company and they said, "you agree not to hold x-company accountable for any labor laws" they're not going to get out of being subject to the labor laws, right?
ChatGRP isn't the only one. The local school district was finally forced to do background checks after it was discovered they had a whole lot of ex felons like drug dealers working with kids. The form they give a person states that the new hire will give a company, which no one's whether legitimate or not, the power to collect any and all information and to sign for, and use one's name in order to obtain any and all info. May as well give them power of attorney. But even worse, one has to check four boxes, the last states that one gives up one's right to know what information the district and that company collects. Congress passed a law giving a person a right to know what an employer collects on them with a background check. This company and the school district get around it by demanding one sign this right away or no job. The list of what they can do is crazy. Hire private investigators, access almost any record using ones name, etc.
The indemnity clause is very standard in the software development world. Particularly for tooling and assets that are used to create new products and services. Typically this isn’t an issue for software products, but the GPT service can/will be used for so much more and I can see use cases that would likely be litigious. Particularly IP and copywrite challenges
Steve Lehto is an American lawyer, author, and historian who specializes in automotive history and consumer protection. He is based in Michigan and has written several books on automotive history and other legal topics, including "Chrysler's Turbine Car: The Rise and Fall of Detroit's Coolest Creation" and "Preston Tucker and His Battle to Build the Car of Tomorrow." He is also a frequent contributor to various media outlets, including podcasts, television, and radio shows, where he discusses legal and historical issues related to the automotive industry. -chatgpt
@@stevelehto, I had a question for you that involves Civil Asset Forfeiture that worries some friends of mine in the USA. What's there to stop a corrupt cop in a city/town to turn off their body cam, walk up to a random parson who they just saw withdrawing money from an ATM, detain the person and take the cash under civil asset forfeiture??? Especially at night when no witnesses may be around.
I like watching your channel from time to time because I like learning legal definitions which I might have previously misconstrued. I didn't realize that indemnify meant that if any harm befalls the disclaimer meant that they then have the ability to hand you the bill for the them defending themselves. That's super informative to me
@@blargblarg5657 Yep. (I was replying to the comment that started this subthread, and hadn't seen yours when I wrote that. Ethics in Congress is as rare as orange trees in Idaho: It exists, but it's hard to find.)
This is a great video. I work in the analytics space and everyone seems to be using ChatGPT. I actually banned it for certain uses because I did read the terms of service and it didn't leave me feeling comfortable using it for many of the things our CEO wanted.
Ummm.....or just stop signing things, then sue later when you don't like it. It is the stupidity of people doing this to themselves, stop blaming others.
So to get around all this, we do what I've seen in a lot of UA-cam videos ... "Allegedly and for entertainment purposes only." That's what I'll do until further notice. Great informative video. Thank you.
This is why I refuse to give them my phone number, and sadly have been unable to use it yet. (edit: I'm referring to their privacy policies .. which imo are even more egregious than what's brought up in this video. You _may_ run into the litigation issues mentioned here, but you will _DEFINITELY_ be affected by the privacy issues).
What a helpful discussion. Thank you. As more people find out, either the hard way or easier way, that THEY are responsible for information they assert, those terms can reduce the harm chatgpt can inflict. Of course there will be plenty of pain for now…. Stupidity and carelessness is not out of fashion yet.
Thank you, Steve. Enlightening. When doing research involving people, a researcher has to ensure prospective participants amply understand the study and any associated risks. How is it that terms and conditions that companies impose are not held to a standard sufficient to protect the average consumer?
1 problem mentioned here could have a simple solution. Whenever ChatGPT quotes something, it could contain the quote in quotation marks, and credit the author of that quote.
Hey Steve! Love your content. Very informative. Here's a piece of info you might be interested in: I believe the reason C-GPT wants to protect themselves from inaccuracies is due to the fact that the data sets used to train C-GPT are not always up to date with the current information on the web. It's a popular misconception that the AI(s) have full time access to the internet for information. This is false. And in the case of C-GPT3 I asked it the following question: ------------------------- ME: up to what date is your dataset accurate? C-GPT3: My training data includes text from various sources up to a certain point in time. The knowledge cutoff date for my training data is September 2021. This means that my responses are based on information that was available up until that time, and I may not have access to more recent events or developments that have occurred since then. However, I am constantly learning and updating my knowledge based on new data and information that is added to my system. So, while my training data has a cutoff date, my responses can still reflect new information that I have learned since then. Nonetheless, it is important to keep in mind that my responses may not always reflect the most up-to-date information available. ------------------------- Now it's important to understand that the continued "education" of C-GPT is a combination of the team adding new datasets and the possibility (unverifiable) that users contribute to it's "education - which could result in misinformation entering the data sets permanently. The software itself believes it's data will continue to be updated, however they cut off it's access to new data as of the aforementioned date. Therefore it is a logical business decision for the owners of OpenAI to protect themselves from possible litigation that results from the proliferation of outdated information. That being said - Companies like this should be more proactive when it comes to informing their clientele about the limitations of their products rather than setting up legal fortifications to protect themselves from something that could no doubt be avoided completely.
Steve - all this means is that companies which use ChatGPT will need TOS that pass these costs to their users. Suddenly, anyone who sues anyone is just suing themselves and paying for all the lawyers.
Some of the waivers, even if agreed to, are not enforceable. Judges sometimes throw out agreements that violate basic rights, including rights to use the legal system.
An easy way to avoid these issues is to always ask ChatGpt to rewrite the material it generates or rewrite yourself BEFORE publishing it. The second thing is to always fact check what it's telling you.
Actually, Openai could be the publishers of material that arises from your interaction with chatgpt and you could still be held liable since it all arose from the same interaction.
"you will defend and indemnify" - I'm not a lawyer, but I've paid more than my fair share, and been involved in lawsuits. Those clauses usually don't amount to much. You can put anything in a contract, it doesn't make it enforceable.
Excellent information, thanks! If I use the service and use similar language to protect myself, would I be able to hand out bills of indemnification to my customers if similar lawsuits arise? Second question: I am supposing many big giants have similar clauses yet end up paying billions of dollars for settlements. How can that happen if they have similar clauses?
The thing that worries me most about this is that these “AI” programs are committing an exceptionally large amount of plagiarism and copyright infringement against so many types of creators and publishers. I expect there to be massive class action lawsuits in the future. Remember Napster? Well, who’s going to be on the hook when the bill is due?
@@heyborttheeditor1608 No, the government setting regulations is never the answer. What it should do is deny these forms of contract. EULAs and ToS's should be invalid because there is no room for negotiation and they can be changed unilaterally. Literally not a contract. Literally unenforceable.
That was brilliant! Thank you for your insight. I had to think twice knowing person and human are equall to machine. That's why I would never protest or claim human rights . Only man is above all other definitions with the correct and valid rights. Like gold or silver everything else is just fiat and credit.
I wonder what will happen when, not if, open AI gets sued and during the discovery process it is uncovered open AI has thousands or even millions of copyrighted protected items stored in its databases which it then uses to make a profit by the pure nature of its business model. It is no wonder the lawyers put in overtime writing and crafting their terms of service to try to protect them from these claims but will they skirt the law?
Anyone suing them for a copyright violation would have to prove that they're distributing something which violates the copyright of some specific thing. Using copyrighted works in the creation of something else doesn't actually imply a violation. See "Authors Guild, Inc. v. Google, Inc." as a quite extreme example where Google, as part of their Google Books program, hosted many copyrighted book exerpts and profited off of them by linking to affiliates which sold said books. Google one that case on the grounds that Google Books was a fair use by its transformative nature. ChatGPT is far more transformative than Google Books was, so it's unlikely any kind of copyright lawsuit against OpenAI would win.
I read all that myself, and immediately deleted the app not saying that I understood it all but that’s another reason why I deleted it immediately too much. But thanks to you I understand it a lot better thank you for your time and explanations 😎🇺🇸👍
Chat GPT is the means to launder plagiarism. "That wasn't me, that was them and their blasted AI" Sounds like the bar for double checking before publishing just got raised a peg.
Exactly. People worrying if this will make everyone cheat in school seem to be unaware that you can already cheat. And a lot of people do cheat. The only difference is this weird idea it's not the same because a novelty algorithm did it.
I (as a programmer who has dabbled in AI) don't understand why people trust AI so much. If you got information from several random anonymous people on the internet you would verify it before believing it. That's basically what ChatGPT gives you, but because it's an AI people just believe it. And I just realized that since it doesn't always produce the same results a random person can get the same level blind trust by just making a claim and saying that it came from AI.
Yeah, it's useful as an assistant for various tasks or to fool around with, but you can't blindly trust it any more than you can blindly trust public opinion.
@@crowe6961 I actually am quite impressed with it as a language model in multiple languages. That just doesn't extend to providing accurate information. At least not without further training within a specific scope. Where I think it gets dangerous is when people view it as some kind of oracle. Especially when the conversation moves away from language models, and into things like law enforcement use, pre-sentencing recommendations, or Medicaid funding. (The latter two have actually been used in my state. )
It seems possible to me that a chatbot could use a means that is not cut and paste to also arrive at an extensive passage matching verbatum a published work of authorship that was never accessed by the chatbot.
I think it makes sense though. Why would you repeat what a chat bot says? People are way to intellectually lazy. Use the info to point you in a direction but you verify yourself.
As far as I can tell, chat gpt 3 thinks the current date is sometime in mid 2020. It's possible that the mayor's accusations were still unproven allegations at the time that GPT's training models were created back in 2020 but have since then been proven wrong. I asked it what today's date was a few weeks ago and it said something like may 23 2020.
Chatgpt has help me with self improvement and also helped me with advice for my book. It’s helped with character descriptions and settings. It’s amazing depending on what you’re using it for
@@joeyc1725 Yes. I helped myself by using this tool. What do you mean? That’s like me saying I dug a hole with shovel and you telling me you couldn’t dig a hole with your hands? Toxic and negative was your comment. Do something else with your time please
It might be time for a federal law limiting these terms and conditions. I personally won't use ChatGPT because when I attempted to sign up, it wanted my phone number and wouldn't accept my VOIP number, and I do not give my "real" cell number to ANYONE.
Exactly!!! That is the point where I bailed from the registration and closed the browser. Combined with the power of AI and the license terms, the possibility of profiling me and selling very complete personal data to the highest bidder is extremely high and dangerous.
The indemnification clause I believe is intended to be used in the event someone uses the API for the variety of AI engines OpenAI offers. What most users don't know; OpenAI will actually let programmers develop software that utilizes the AI engines (they have multiple).
We have millions of people streaming into our country every year. They all need to be fed and housed. Most are going to be getting government funded healthcare. Where does the money come from to pay for all this? It comes out of the income tax of people who work for a living. This technology combined with robotics is going to reduce the amount of working people. How will all this be paid for then? It’ll be paid for by an ever increasing tax burden from the ever shrinking work force! That means you’ll be paying 70% or more of your income to the government. It cannot ever go down, because it never has gone down. Any thinking person should reject this technology and the robots and want the borders closed and immigrants highly vetted. But, this present government wants as many people dependent upon them as possible so that they are re-elected in perpetuity.
Well, the accuracy isn't going to be 100% so there has to be some sort of lenience. There could be some restrictions on what data it is trained on, maybe.
2 questions: 1: If the slandered party is in fact not a user of OpenAI and/or ChatGPT, and hence never agreed to the Eula. Assume the found out about the slander through 3rd parties, how would that affect the ability to sue? 2: If the indemnifier is effectively judgement proof (as the man on the street user may be in a lawsuit of this scale), where does that leave OpenAI if the judgement goes against them?
These models are great at generating a lot of content that seems plausible, even if it is factually incorrect. I am a researcher who uses GPT3 and 4, including the advanced model "DaVinci2" and "Dall-e". I want to say "Don't be an idiot". GPT, including chat GPT are basically that thing on your phone that guesses what letter or word you are going to type next. What makes the model advanced is that you can ask it to guess farther out into the future and it does this with less and less accuracy. The guy who put it in charge of the suicide hotline and then acted surprised that it told someone "Maybe you should try suicide once and see if you like it", was the kind of corporate criminal-minded oligarch who needs to go to jail for what he did. If he didn't know what he was dealing with, thats criminal negligence for not learning before doing, but we all know that he knew but did it because it let them fire psychologists to save money. Microsoft may have lost the company by putting it in the Bing search engine because the answers it gives are basically like asking a high school kid. If it knows the answer, you are in luck, if it doesn't know, it is unaware that it doesn't know, it makes up the most plausible story it can based on what it does know and presents that as an "answer". I'm not saying AI is useless, huge breakthroughs have happened in the past few years that allow AI models to outperform a room full of chimps with typewriters. They can perform obstacle avoidance with cameras almost as good as some insects. They can act polite as they check you into a hotel and prompt a machine to spit out a room key. Understand these machines learn but can learn false information just as well as factual stuff. It also has no real experience with the world and no common sense. One funny example, when I first met Dall-e, he thought "plush toy cat" was a breed of cat, so asking for a picture of a mother cat and kittens always wound up with one or two plush-toys in the litter. Expect this. If you ask it to write your history term paper, do not complain if adventures of Christopher Robin and Winnie the Pooh are included as factual events.
Yeah but this is just the beginning? Do you remember how bad the Internet used to be There was basically nothing to do now Now we have access to unlimited information.
If you get upset for not getting your premium account access, and you submit a complaint along with screenshots of your bank statement, they say they’ll respond in a day. But really they lock you out. You can login. You use it. But it all spits out red error codes. You’re also then blocked from accessing the chat help bots to ask questions about why. Make sure you have a VPN or use duck duck go. Because you can’t go make a new account with a different email address. It’s got your IP address. Happened to me.
I think the mayor went into ChatGPT himself to see what the AI had to say about him, and his beef is with ChatGPT not any 3rd party publisher, so there would be no one to indemnify ChatGPT to send the legal bills to. There could be many cases like that where a user looks up something about themselves which turns out incorrect or defamatory, and then it would be thrown out, or at worst go to arbitration unless it was opted out.
This is great content. Not that I am defending OpenAi in any way, but the indemnification clause is in alot of the software and things people use everyday. For example anyone that uses apple products with icloud has aggreged "...to comply with this Agreement and to defend, indemnify and hold harmless Apple from and against any and all claims and demands arising from usage of your Account, whether or not such usage is expressly authorized by you." It also says " This obligation shall survive the termination or expiration of this Agreement and/or your use of the Service. " OpenAI is not the problem, the problem is we as regular citizens have created/allowed a world in which the corporations have all the rights and we have none, but to live without great difficulties we have to agree to their terms.
I like when I read a comment that replies to another comment, but the original comment is not visible. Or it says there are 4 replies to a comment, but then there's only three. I know Steve is popular, but I'm not sure yt is honest.
some jurisdictions may not enforce hold harmless clauses if they are considered overly broad, or if they attempt to absolve a party of liability for willful misconduct or gross negligence.
This strikes me as a symptom of our insanely litigious society. If someone uses a free service that's known by all to be experimental, no one should think it reasonable for that free service to be liable for anything. The fact that these Ts&Cs are needed is sad. And they are needed, as shown by the Australian mayor.
@@Arassar It doesn't do anything on its own, it doesn't think, it's just a mathematical model that generates text based on statistical probability. No one sues or prosecutes a compiler vendor for compiling malware, no one sues Stihl for a chainsaw massacre.
@@Arassar The AI itself is unable to post anything or make use of the information it provides, all it can do is respond to queries and only to whoever made the query. Posting potentially false information for the world to see or using it for something without fact checking and damaging someone can only be done by a third party. So not all their terms are crazy, just some. The issue has been around since the early days of social media, many people and companies are way too lazy to fact check their information and simply copy/paste stuff.
Steve, thank you for all your efforts in making and posting all your videos, please keep it up. I have watched quite a few now (and Subscribed) and your style and delivery is excellent in my opinion. I am a Canadian - My concern with these APPLICATIONS (apps) is the end result of the use of them. These particular types of applications have too many uses, both positive and negative - Rabbit Holes. too many to count. I thank you for clarifying "some of" the terms of use for this one. To many people just click Yes or Accept, and never even look at what they are clicking. Like you say (in another video), we will just have to wait and see how they use it. In I.T. I have seen for myself that so few have actually read the Terms of Use for windows 95, 98, 98SE, 2000, XP, Vista, 7, 8, 10, and now 11. It is scary. They just want to play or do work with these apps, and do not think of the analytics they are scraping from the computer they are using it on. Chat GPT and others, WOW what they have in their "Terms of Use" when you really look through them is incredible. Please People - make the effort to read them first, and if you can not make heads or tails of it - Please acquire and pay someone like Steve pick it apart and explain. In the end you will be grateful for the effort.
When ChatGPT was first out (or close to that time) I played around with it. I was concerned when I asked it a very simple question, that I knew the correct answer to and it provided a completely wrong answer. That in and of itself wasn't the concern as that happens. Where the concern came in is when I told CGPT that it was wrong and provided the right information. I expected some kind of weird response, but to my surprise, after a few seconds (longer than usual), it came back and said that I was right and that it would update its database! Way too easy! So a few days later, just for fun, I asked the same question, but worded it differently so that it wouldn't just pick it up from what is cached and to my surprised, it did have the corrected information. This was just related to Arduino boards, 2 with very close names, but slightly different architecture. With the ease at which I was able to "correct" it, I have to wonder if others out there, with some bad motives in mind, might be able to do some damage to CGPT's functionality. hmmmmmm
I would suspect that some interested in electronic warfare are already using their own AIs to train chatGPT in subtle ways to shift public opinion in the directions they want. They already do it for forums and social media, this would be a logical next step.
It doesn't work that way. There is a certain level of randomness in how the answers are picked, but your input is not automatically used to train the AI model. What does happen is that the AI remembers your corrections within the same chat session (actually, it "forgets" after a certain low level of words exchanged, which is a big nuisance/problem in various applications).
@@BlondieHappyGuy He's right though, having chatgpt learn directly from users would be a really bad idea as it would be very easy for competitors and attackers to flood it with junk information. All the data added to chatgpt is handpicked by humans
We need to amend the 7th amendment allowing for people to assert rights they may have contractually waived and by so doing aligning more to the constitution's reference of "unalienable rights".
The most common lie ever told:
"I have read and agree to the terms and conditions."
terms and conditions were so great before because they were just Rules on how to use the service, like youtube for example, do not post innappropriate material, etc etc it was a guideline on how to behave on the platform, but now its just blanket defenses for companies to not be responsible for anything
TBH, thats probably something that the devil himself absolutely relishes in.
True story🤣
@@aidankelley2696 I mean if you go attacking them for every thing out of our control what else do you expect us to do?
@@karazakiakuno4645 Depends on who “us” is.
If you mean the companies, the should act ethically and make the terms defensive against vexatious behavior, but not against normal, good faith users.
If you mean the judicial system, they should review all lawsuits and dismiss clearly vexatious ones before even notifying the defendant about it.
This is a whole series Steve! How scary are the Terms Of Service for . . . ______
That could actually be very helpful for many people.
@@hattielankford4775 helpful for everyone.
@@TomsBackyardWorkshop Fair. I meant to say that certain topics could be specifically beneficial for different groups of people, while a series like that would also be providing a general public service.
YES! Great idea. For example, buying a Tesla, Apple, MS Windows, other software, etc.
In an old Dilbert comic strip, Dilbert signed a software user agreement and agreed to become an organ donor.
Contracts of adhesion used to be disfavored. Now they're standard everywhere. And they're extremely unfair to consumers. These contracts usually allow the terms to be changed without mutual agreement as well, which I learned in law school is what is known as an illusory contract.
I wonder if it was a Figment or a Glamer-type of illusion.
If I remember this isn't legal in European union
Contracts of adhesion are unenforceable if deemed 'unfair'
Bless you, I say this all the time, almost verbatim. People act like I’m speaking in tongues.
@@Irilia_neko American EULAs and TOSs that limit the consumer's rights aren't really enforceable as far as I know but the companies still push them and I really do wonder whether they have been pressure tested in the courts.
"Have you or your loved ones been injured while using ChatGPT? Our AI attorneys are in standby mode."
Chatbots will be responsible for millions of deaths in the coming years as it erodes critical thinking and social skills. Life is about get get more wild, and more unequal due to the skills gap between the "pre AI" generations and the "AI generation".
Better Call Saul AI
AI attorneys powered by ChatGPT...
my toe
Chat GPT hurt my feelings.....
The terms of service were probably written by ChatGPT :)
It's almost certain ChatGPT wrote at least a majority of its own TOS, it is fantastic for legal terms and rule sets for almost anything
Ask it if there are any loop holes in it's TOS/EULA. And how could they be exploited.
Well of course they did! Or maybe it did it by itself 😂
@@jleadbetter29 "Ask it if there are any loop holes in it's TOS/EULA." Nicely done.
You are Tron. You will create the perfect system
Just remember that open AI is not an open AI, it's a closed source black box
I'd love to see them raked over the coals for the that.
Even if it was open model, it would be a black box anyway. Models are just complex set of numbers, it's too complex for humans.
@@Gnidel the model itself would be a black box, but you could still see how it was trained, what kind of reinforcement was used, how the model interprets the training data, what data it has been fed, etc
You can find previous versions that aren't private, just need to use cloud computing to get it to learn as much to the point where it would be equivalent to the current version, private it for yourself, boom, your own chatgpt you can bias with whatever info you want :p
Yes, it was decided to protect against terrorism in the coming future. Check their interviews for more info
In germany there are laws that regulate what can be written in terms of service. Im glad that is.
Here in the US where corporations own most of the politicians, corporations can do just about anything the people running them want to do. It is foolish, yet here we are.
Fortunately Germany can't force companies based elsewhere to edit their terms. But they sure as hell can ban ChatGPT or fine OpenAI out of the German market. Will you be glad as well when that happens?
In the US, massive portions of ToS can be thrown out of a court case at the whim of a judge, and the judges know nobody reads the ToS so the good ones apply a semblance of "reasonable person" standards to the matter. This means that a lot of this stuff isn't worth the paper it's printed on.
@clray123 yeah sure will be. I wished brazil would do the same, but they are sheepl.
@@fss1704 Nobody forces you to use it, so why do you want to spoil it for others who wish to use it? How would you feel if some jerk came and destroyed some service that you actually find helpful?
The company I work for recently paid the local news station to do a promotional spot which included a TV interview and a printed article. The printed article was full of generic techno babble and described a bunch of things our company _doesn't_ do, and made up names for the owner and some other people. Our PR guy immediately called the news station out on their use of _artificial intelligence_ to write the article, to which they responded with "We've been experimenting with ChatGPT." (They then pulled the article and had a living person write one that was actually about our company.)
ChatGPT just combines information it has read before and makes it look good on the surface, but once you read the content, you realize there are a lot of falsehoods and imaginary things that were added. I recently asked it how to unlock something in a game, and it gave me completely wrong and imaginary information, but the way it wrote it, it really looked like it knew what it was talking about, except it was 100% false.
It depends. GPT-4 is actually bright enough to realize it's made mistakes and correct for them if it has access to the tools or information to do so. 3.5 and prior are not, at least not reliably.
@@crowe6961 gpt 3 will correct mistakes in code
@@patrickderp1044 GPT-4 is far more competent and introspective.
“News”
Some of these things I hear about being in terms and conditions sound like they shouldn’t be legal to include
what makes this more rediculous is they are essentialy blanketing them from all responsibility for their AI program which is giving inaccurate sources and then blaming the user, like then what is the point of your software if its gunna give incorrect answers, maybe go back to the drawing board and come back when it can give accurate answers
It should be illegal to say that a person can't use the courts to settle a dispute. That's literally the purpose of courts. Arbitration is fine as an informal process, but litigation should never be off the table before it's even used.
@@aidankelley2696 exactly.... if a normal business had these terms, they'd have no business, and be out of business.
@@aidankelley2696 It's not quite that simple, this is not a search engine. You can literally tell it to give you false information, lie to you, to try to confuse you on purpose or whatever you can imagine.
Honestly, it sounds like quite a lot of agreements nowadays (I always read every agreement I agree to). Most companies seem to just stick in every possible term they can that favors them, and when one comes up with a new one all the rest copy it. Why not? They know almost nobody reads them and even if you do you can't exist in society without accepting such terms.
Sadly this is so true.
its good to know there are still laws that make certain terms and conditions illegal and wont ever hold up, but there definitely does need to be some more legistlation on these other terms and conditions companies are getting away with, because its getting mind boggling, its to the point where these companies are making themselves impossible to get sued and not responsible for anything and making the user responsible, like how is that fair, its YOUR product, if its not working properly, that isnt our fault....
of course you can "exist in society" w/o agreeing to these things...just don't use the tech-it's not a life and death situation, most of us just keep choosing convenience...
@@richardprofit6363 not when your employer demands your social media lmao
@@snikrepak tell him you don't do social media..
I remember terms of service for software that disallowed, among other things, copying it in whole or in part into memory. I called them on this asking how was i supposed to use it. The eventual answer from high up was that their lawyer didn't understand how computers work.
Lol, can’t load the program into memory.
@@jratnerd EXACTLY!
If you run the executable, it does not copy itself into memory, what it will do is map addresses into memory to the executable. The OS might load some of the executable into memory as needed. But none of this is "you" doing those things. All you did was run the program.
It's because you can create your own loader that will copy part or the whole program into memory, then hack it and modify how it works.
@@OneLine122 After earning multiple degrees in Electrical Engineering and Computer Engineering and having worked with both the hardware and software since the very early 80's I agree that is how computers work now BUT I never said when this occurred.
This was well before memory mapping and sparse files or memory mapped files were implemented in any commercial systems. Typically when you ran a program the OS would initially load the entire executable into memory. That was the basis for me questioning the section of the terms that it could not be copied into memory.
Furthermore, this was at a time when a 3 megabyte hard disk was the size of a contemporary desktop computer and a 30 megabyte drive was the size of a fileing cabinet. Neither were on anything smaller than a mainframe or minicomputer. That particular software was the actual OS and was both supplied and ran on 5 inch floppies. The licence agreement as it was written even precluded archival backups to prevent loss of the OS. I don't know if you ever used a 5 inch floppy disk but they were notorious for wear.
Now licence terms are better crafted to allow use on a single machine or in some cases on a single machine at a time, but take into consideration that there may be multiple copies in existence for administrative reasons.
@@OneLine122 Yeah it's probably meaning in a reverse engineering context.
My sister bought her husband an "Alexa type" of product a few years back. BIL is a VP at a major defense contractor. As he began to hook the product up, I quietly recommended he read the terms of service - in detail - first. After a couple of minutes, I heard him say "OMG these people are nuts!" under his breath. He hooked it up for a few days and then it quietly disappeared. All of these electronically connected products are so incredibly insidious.
No shit, why in the hell televisions have microphones now, full 1985 minus one
Why was everything so quiet? Were you both afraid of hurting your sister's feelings?
@@fss1704 the remote have not the TV so u can have commands
Anyhow the tech is not the issue the companies are
There are already FOSS ( Free and open source solutions) but people don’t use them
@@ko-Daegu My TV (not remote) has a microphone (now air-gapped on mine...unless there is a second one). You can google if you want. When FOSS has hardware options....I will pay more attention. My last experience with them/it was like trying to join a Cult with an entrance exam.
@@fss1704 Cameras too in some of the more modern ones
There really needs to be limits placed on contracts that try to eliminate a persons right to a redress of grievances, exempting themselves from culpability for any harm they cause, and attempts to make another party pay for their defense regardless of expense, without choice from the person who is supposed to "defend" them.
@@VampiricBard, Accountable for what? Creating a random word generator, letting you use it for free, then expecting you to accept all responsibility for what you might choose to do with those words?
Well considering that an issue could arise from anywhere involving people from anywhere on Earth... I seriously question their ability to just enforce this on 100% of human population.
There is already a limit. All these End User Licence Agreements (EULA) and basically any contract isn`t worth anything once its content is against the written law.
You can`t forfeit your right to sue by signing a contract for example. Because the right bringing your dispute in front of a court is given to you by states. No company or contract can deny you that, no matter how many paragraphs they write about what you can or can`t do. What you can and can`t do is decided by law.
Quite often the entire EULA is declared invalid by one mistake within these contracts. All you have to do is putting these EULAs to a test in court and more often than not companys look really stupid and scared all of the sudden if you ACTUALLY opened a court case.
@@AcceptYourDeath, Wrong…
7:26
I’m confused for that mean if:
User; types into ChatGPT as a UA-camr and ChatGPT answers something
Person A: watches the video doesn’t like the answer
-> so he sue OpenAI
OpenAI: defended themselves with their lawyers and won
Now who pays the for lawyers
Person A or the User ?
Honestly this insane terms & conditions sheet is probably the only way something like chat gpt can even exist without being destroyed by constant litigation
More importantly it's the only way Windows and Office survive! How long till they're so bad that people just go back to paper ballots? lol
Maybe it should destroy the business.
@@alfredsutton4412 just like the textile machinery! ✊️
And it still will be karened to useless trash just like video games.
The only way, you sure? The only way?
I've been playing around with ChatGPT ever since the mayor episode. It's pretty amazing. I've only used it for personal entertainment to test its limits. I've had it write a children's book, outline a dissertation on microelectronics, write some code, some poems and give me information on various things I already know to see how accurate/inaccurate the information is. Every essay end with "In conclusion,..". I'm glad I'm not a teacher. Many school papers will probably be written with this thing.
mine stopped working after i kept asking for recipes for meth, how to painlessly end ones life, ukraine biolabs, and modern nazis. it wouldnt even give lists of legal drug analogs. all of this info is available on google in two seconds. its handicapped AF. i did have it write a 10000word essay that was kinda okay.
@metalsign7015 The problem with that is the human won't necessarily learn how to write well, or how to fact check. Of course the second part is a problem already. Cheers.
Who owns the rights to the output of ChatGPT?
You should probably get a life.
@Metal Sign Nah, you can't do that at that level. Maybe for college, I guess. But, it's still cheating. But for kids? You've got to teach them how to do it themselves. The nit and gritty. Or they simply won't learn and we'll have half of a generation who can hardly read and write. Something that's already a problem to some point, I'd imagine.
Just a heads up to anyone out there agreeing to legal agreements. Always do a search for the terms "arbitration" and "opt out." Whenever you're agreeing to these things and they have an opt out scenario, utilize it immediately.
Opting out should not have a time limit, either.
I always do a search for ''tied, beaten and locked in a cellar' too, after an unfortunate experience with a Lecturer in Liberal Studies.
@@zantas-handle first I laughed then I read it again and thought: Whaaaat!
@@christabelpankhurst3027 Perfect. Then my work here is done!
Never heard this before, why is this important?
Thank you for explaining that. Terms of service for me are impossible to understand. Amazing so many companies feel they shouldn't be responsible for anything. Its obvious lawyers draw this crap up, I'm surprised good lawyers or congress hasn't made it a law that Terms of Service bullet points have to be made in Bold Letters and initialed before purchasing or downloading.
And legal limits placed upon their attempts to avoid culpability.
ToS's for software are designed to be giant filters and make it uneconomical to sue on small scales. If someone with resources actually presses the issue over a real injury, then all that "you agree to give us your firstborn and can't argue otherwise" crap gets tossed and everyone settles real fast.
More amazing is that people feel that companies should be held responsible for their (the customer's) own stupidity, even in cases where they paid nothing to the company.
People still wouldn't read them (or understand them).
My wife and Steve are probably the only 2 people that would/have read the terms. My wife hates doing anything because the terms on everything are always horrible. "It says here that..... I'm not signing this." Me: "If you don't sign it, they won't delivery the baby...". lol
Your wife is smart. And she can cross out sections that she doesn't agree with. I have done that and while someone might make a comment, they never refuse the signed contract.
@@thisorthat7626 Forgive my ignorance if this is something that's straightforward for a technically proficient user, but how do you cross out sections for ToS online? Yes, I'm a Luddite :)
Copy and paste them into a screen reader and Bluetooth them through your car stereo
I used to delete filming of surgeries for training. Now it’s hard to do that since the agreements are online and you are signinglectronically!
@@ironymatt a Luddite using UA-cam ?
Most of the terms of service and/or privacy policies are so incredibly unethical towards the consumer (try reading the ones you get from in-person services too) that one would probably never be able to use anything if they truly wanted to agree to these terms... 😞
Which is why, most ToS's are illegal, and you can easily get out of them through unfair contract laws. ToS are so heavily 1 sided, they should NEVER be legal. I would argue they aren't even contracts by law, with how illegal they are, as they are 100% sided against you, and you have no way to negotiate against them. What's more, you can BUY a product, like a video game, and then the company FORCES you to Accept a ToS just to ACCESS that video game. And if you click no, the game shuts down and you lose access to a product you bought and own.
Too bad Lawyers, and Lawmakers are a bunch of old people with no understanding of Technology or how abusive companies are.
If you want " Bad " Terms Of Service " find any random online, kid friendly " game " that lets users add content and do not use the software, just read the terms of service. There could be several videos on why sections should be against the law, why it is unethical and my favorite ' if you created it prior to using their service, who actually owns the intellectual property ' as potential hour plus long videos for almost any given site.
@@Jirodyne That's often predicated on whether the consumer can reasonably be expected to read and understand reams of legalese that can get bypassed with a simple "I agree" click. (Hopefully not, but it depends on the individual court.) But if you're one of those "old people" you dislike, then the court might very well assume that you read and understood the terms of service. No getting out of liability free for you!
Years ago, I had an oil furnace replaced. The installer handed me the work order to sign. I read all of it, front and back. He told me in his 13 years doing installations, I was the first person to read the back of the work order.
Yes, that is why limits need to be placed on companies trying to avoid culpability.
Do you sign the ToS before or after paying for it? Something I've noticed with a lot of ToSs, especially in games, is that they are not legal contracts as you sign them after you have already bought the product. They then hold your money hostage until you sign which makes it a contract under duress.
It's free
End User License Agreements are legal contracts.
How can it be a legal contract with no consideration given on one side?
Interesting argument. But currently the law in the US says EUAs are binding. In theory you have the right to return the product. But most retailers say once software is opened, it is non-refundable. Which contradicts the EUA.
If you do it when you sign up, then you do it before you pay. You can sign up and use it for free. The paid version just gives you priority (you can use it when the system is busy), faster responses and early access to new features.
They already have a disclaimer that what it can generate may not be accurate... so all it really does is set up a loop of "if your offended and want to sue, you can, but you have to pay out to yourself + lawyer fee's and we choose the process."
years ago I ordered some software from a small developer, as always I read the EUA. Near the end of the EUA was a statement "since you actually read these, here is some free software you (links provided)" Pretty good stuff too as I recall.
I have great respect for those devs.
Thank you for such an informative video Steve. No way would I actually understand how they write these terms. We accept so many lengthy terms all the time and have no clue what we just signed. Even if I do read them, I won't be able to understand them for the most part. Look forward to more of these types of videos. You do a great job.
So it sounds like if you're going to use that service you need to do it under the protection of a LLC so you can have the LLC file bankruptcy when you get those bills
I was thinking the same thing.
The big brands use this same tactic to limit access to suing the parent company and then declare bankruptcy of the spun-off LLC. Johnson & Johnson did this with baby powder in Texas and left women dying from their product with no recourse.
I will never knowingly buy from J&J ever again because of their unaccountable acts and the intentional deceit unleashed on their loyal and trusting customers.
Social corporatism is here to take over the world at this rate.
Tomorrow's consumers will have no power and be happy.
Yeah I completely vote for a TOS series, it's pretty useful information. I'd love to see Meta, Twitter, etc., examined. Thanks!
Thanks for making this. EULA's can be very hard to read and super long. Even when I take the time to read one half the time I don't understand alot of it. Having someone like you in the know highlight and break down the important bits is very helpful!
not really; you just need to invest a little bit: I recommend to just take your time to read a few, and look up words you don't know or non-legalese explanations of these sections. Like many legal documents, there are all based on templates so most EULAs and TOSs are (subjective guess) probably over 75% the same - not accounting for replaced variables, like the name of the company or site, etc. Ever since I was a teenager, I "read" almost every EULA/TOS and Privacy Policy for everything I sign up for. It's easy because depending on the service, I'll be focusing on looking for a specific section (warranties, nature of service, data collection policy, 3rd parties, etc.) while also skimming to make sure how standard the other sections are, and taking note of any unique sections. It rarely stops me from agreeing (unless I'm checking out various options, it can dissuade me and make me choose to try another option first instead) , but if I see anything not good and still choose to agree and use the service/application, I limit my expectations and use of the service accordingly. It only adds a minute or two to the sign up process and seriously worth it for the awareness - it's a contract you're signing after all, so it literally is an obligation and no-brainer to be conscious about it.
if a person never uses ChatGPT, never agrees to the terms, but ChatGPT engaged in defamation, libel, slander, etc. against them, the terms of use do not apply to their lawsuit.
The way it sounds, they are going to make who ever made the request deal with the suit.
5:32
Yes it does. It applies to the person that used ChatGPT to create and spread the defamation that harmed you. If you try to sue ChatGPT, the TOS allows them to pass the entire cost to whomever created the original defamation. ChatGPT itself cannot defame you, there needs to be a person telling it to create these things.
ChatGPT can't engage in any of those things, it can only answer questions, and only in private conversations.
libel requires knowingly and maliciously publishing false information. ChatGPT can't know it is wrong, can't be malicious, and can't publish anything. That seems like it couldn't possibly be any more airtight, so the TOS isn't even necessary there. The publisher is the person who read the disclaimer that the information may be false and published it on social media anyway.
@@ThresholdGaming > ChatGPT itself cannot defame you, there needs to be a person telling it to create these things.
From a technical standpoint, this sounds totally fair. The machine doesn't actually think, nor does it act on its own, it's just a language model. Passing it on to the person that misused the data it generated is appropriate. Otherwise it'd be like Microsoft being liable because someone compiled ransomware in Visual C++.
If GPT is dynamically changing, constantly being seeded with new information, can it ever really be provided "as-is"? You'd be agreeing to terms based on an unknown future version of the product... It seems to me like they'd need to get you to agree to the terms every time it updates/receives new info. But I don't know jack about law, so I'd love to hear how my lay interpretation is wrong.
It's not dynamically changing. As a precaution against unpredictable/undesired behavior, each version is put into a "frozen" state once it's been trained and tested, and does not continue to learn and evolve as people use it. That's why they release numbered versions which are separate from one another. So if I understand you correctly, I believe the answer to your question is that yes, it can be provided "as-is."
@@CYB3RC0RP I think all that is true of the LLM component, at least up through the pretraining phase, but the developers are constantly working in the background to improve reliability, correct major errors, restrict potential abuses, etc.
I don't recall ever being notified or prompted to agree to any of those updates
Also, it's unclear to me how much the plugin system muddies the water
The example of plagarism is something that can definitely happen. I'm playing with some smaller (
Oof
Easily mitigated by running output through a plagiarism checker, and you are correct that this risk drops precipitously once an LLM reaches a certain parameter size. People will still be angry that an LLM "read" their work and can generate a novel summary based on their effort, but in my mind this is similar to what humans do all the time.
The guy that made the app to get around class action suits is a genius!
Well he got a huge 5 million$ fine so it wasent so genius
@@everythingpony For what?
Ben on top of the OED's above the low flying aircraft sign.
Jesus. I'm so glad you parse through these things so we don't have to. My brain just won't pay attention when I'm reading through legal stuff. Honestly that is why I subscribed to you many moons ago. You are amazing. You always give us the good juicy downlow. Thank God for this channel
If a robot cop starts randomly killing people it can't be sued or put in jail. It's not a person. Let that sink in.
then you go after whoever "owns" or "operates" the robot cop just like any other normal cop when you go after the police / city
A sink rings your doorbell in the middle of the night...don't let that sink in!
It falls under the same ideology of "Guns don't kill, people do."
Incorrect, the robot would have to be built and programmed to adapt to those behaviors, leaving A PERSON responsible
The makers of the robot, the custodians of the robot, or the municipality or the business can be sued. The robot has no assets anyways
This is an excellent video for many reasons.
You are one of the few who are truly a Legal Journalist, for lack of better words. Obv some stuff falls on ur plate. However, it's obvious you do your due diligence to get the most accurate info.
Steve, your vids make me worry about the legal implications of waking up in the morning.
It's really hard for me to believe that an indemnification clause in a ULA would hold up in court. Like just because they put something in a ula doesn't mean it has teeth. If I get a job with some company and they said, "you agree not to hold x-company accountable for any labor laws" they're not going to get out of being subject to the labor laws, right?
ChatGRP isn't the only one. The local school district was finally forced to do background checks after it was discovered they had a whole lot of ex felons like drug dealers working with kids. The form they give a person states that the new hire will give a company, which no one's whether legitimate or not, the power to collect any and all information and to sign for, and use one's name in order to obtain any and all info. May as well give them power of attorney. But even worse, one has to check four boxes, the last states that one gives up one's right to know what information the district and that company collects. Congress passed a law giving a person a right to know what an employer collects on them with a background check. This company and the school district get around it by demanding one sign this right away or no job. The list of what they can do is crazy. Hire private investigators, access almost any record using ones name, etc.
You are the first person on UA-cam, that I have seen, who addresses the real legal dangers of using ChatGPT.
The indemnity clause is very standard in the software development world. Particularly for tooling and assets that are used to create new products and services. Typically this isn’t an issue for software products, but the GPT service can/will be used for so much more and I can see use cases that would likely be litigious. Particularly IP and copywrite challenges
In California, judges will usually want to hear a case even if the parties already settled or agreed not to litigate etc.
Kudos to you for this episode. Few people actually read terms of service.
Thank you, Steve. Good information that most of us would not be remotely aware of without your kind service, even if we took the time to read the TOS.
Steve Lehto is an American lawyer, author, and historian who specializes in automotive history and consumer protection. He is based in Michigan and has written several books on automotive history and other legal topics, including "Chrysler's Turbine Car: The Rise and Fall of Detroit's Coolest Creation" and "Preston Tucker and His Battle to Build the Car of Tomorrow." He is also a frequent contributor to various media outlets, including podcasts, television, and radio shows, where he discusses legal and historical issues related to the automotive industry. -chatgpt
Someone else ran a question like that through it and it tossed in a few things which were NOT true. They weren't defamatory, however.
@@stevelehto,
I had a question for you that involves Civil Asset Forfeiture that worries some friends of mine in the USA.
What's there to stop a corrupt cop in a city/town to turn off their body cam, walk up to a random parson who they just saw withdrawing money from an ATM, detain the person and take the cash under civil asset forfeiture???
Especially at night when no witnesses may be around.
I like watching your channel from time to time because I like learning legal definitions which I might have previously misconstrued. I didn't realize that indemnify meant that if any harm befalls the disclaimer meant that they then have the ability to hand you the bill for the them defending themselves. That's super informative to me
with how 1 sided all these click wrap, shrink wrap, etc contracts are you would think they would all be illegal and unconscionable by now
Need money and time to sue
@@tomhenry897 and public officials with ethics
Big tech makes campaign contributions on both sides of the aisle.
@@TheRealScooterGuy which is why I said with ethics instead of democrat or Republican
@@blargblarg5657 Yep. (I was replying to the comment that started this subthread, and hadn't seen yours when I wrote that. Ethics in Congress is as rare as orange trees in Idaho: It exists, but it's hard to find.)
This is a great video. I work in the analytics space and everyone seems to be using ChatGPT. I actually banned it for certain uses because I did read the terms of service and it didn't leave me feeling comfortable using it for many of the things our CEO wanted.
Ben hiding in the darkness above Low Flying Aircraft, Steve's LHS
People need to take more of these companies to court so they stop getting away with writing these ridiculous terms.
Ummm.....or just stop signing things, then sue later when you don't like it. It is the stupidity of people doing this to themselves, stop blaming others.
@@ThresholdGaming That may be true but it still doesn't change the fact that they exploit the stupid people!
So to get around all this, we do what I've seen in a lot of UA-cam videos ... "Allegedly and for entertainment purposes only." That's what I'll do until further notice. Great informative video. Thank you.
This is why I refuse to give them my phone number, and sadly have been unable to use it yet. (edit: I'm referring to their privacy policies .. which imo are even more egregious than what's brought up in this video. You _may_ run into the litigation issues mentioned here, but you will _DEFINITELY_ be affected by the privacy issues).
Yea, but it's understandable why they have to do it, and sadly it's not even much worse, if at all, than half the other big websites out there.
What a helpful discussion. Thank you. As more people find out, either the hard way or easier way, that THEY are responsible for information they assert, those terms can reduce the harm chatgpt can inflict. Of course there will be plenty of pain for now…. Stupidity and carelessness is not out of fashion yet.
I have not yet had anything to do with CHAT-GPT - - and I do believe I will keep it that way based on this information.
Thank you sir. This will become one of your most popular videos.
We really need more regulation to nullify these types of causes.
Glad to see you comment on this subject, thank you. I sent you similar subject matter titled Go Daddy. Really curious on that wording as well.
Microsoft had same clause about the indemnity in their EULAs in -90s and -00s, pretty sure there was a court case that said those aren't binding.
Wow! That blew my mind; "indemnify" as it's written in contracts is incredibly misleading! Thank you so much!
No problem. I have my own set of 'Terms And Conditions' that exonerate me of all terms and conditions like this.
Thank you, Steve. Enlightening. When doing research involving people, a researcher has to ensure prospective participants amply understand the study and any associated risks. How is it that terms and conditions that companies impose are not held to a standard sufficient to protect the average consumer?
1 problem mentioned here could have a simple solution. Whenever ChatGPT quotes something, it could contain the quote in quotation marks, and credit the author of that quote.
It should be possible to click any item and it pulls up the sources of that claim or response
Hey Steve! Love your content. Very informative.
Here's a piece of info you might be interested in: I believe the reason C-GPT wants to protect themselves from inaccuracies is due to the fact that the data sets used to train C-GPT are not always up to date with the current information on the web. It's a popular misconception that the AI(s) have full time access to the internet for information. This is false. And in the case of C-GPT3 I asked it the following question:
-------------------------
ME: up to what date is your dataset accurate?
C-GPT3: My training data includes text from various sources up to a certain point in time. The knowledge cutoff date for my training data is September 2021. This means that my responses are based on information that was available up until that time, and I may not have access to more recent events or developments that have occurred since then.
However, I am constantly learning and updating my knowledge based on new data and information that is added to my system. So, while my training data has a cutoff date, my responses can still reflect new information that I have learned since then. Nonetheless, it is important to keep in mind that my responses may not always reflect the most up-to-date information available.
-------------------------
Now it's important to understand that the continued "education" of C-GPT is a combination of the team adding new datasets and the possibility (unverifiable) that users contribute to it's "education - which could result in misinformation entering the data sets permanently. The software itself believes it's data will continue to be updated, however they cut off it's access to new data as of the aforementioned date.
Therefore it is a logical business decision for the owners of OpenAI to protect themselves from possible litigation that results from the proliferation of outdated information. That being said - Companies like this should be more proactive when it comes to informing their clientele about the limitations of their products rather than setting up legal fortifications to protect themselves from something that could no doubt be avoided completely.
Steve - all this means is that companies which use ChatGPT will need TOS that pass these costs to their users. Suddenly, anyone who sues anyone is just suing themselves and paying for all the lawyers.
Some of the waivers, even if agreed to, are not enforceable. Judges sometimes throw out agreements that violate basic rights, including rights to use the legal system.
An easy way to avoid these issues is to always ask ChatGpt to rewrite the material it generates or rewrite yourself BEFORE publishing it.
The second thing is to always fact check what it's telling you.
just closed my ChatGPT browser window... it will Never be opened again. The risk/reward isn't there. Thank you for bringing this to light.
Actually, Openai could be the publishers of material that arises from your interaction with chatgpt and you could still be held liable since it all arose from the same interaction.
"you will defend and indemnify" - I'm not a lawyer, but I've paid more than my fair share, and been involved in lawsuits. Those clauses usually don't amount to much. You can put anything in a contract, it doesn't make it enforceable.
The indemnity and warranty clauses are pretty standard for most if not all online services. Same thing with the limited or no warranty provisions.
Excellent information, thanks! If I use the service and use similar language to protect myself, would I be able to hand out bills of indemnification to my customers if similar lawsuits arise? Second question: I am supposing many big giants have similar clauses yet end up
paying billions of dollars for settlements. How can that happen if they have similar clauses?
The thing that worries me most about this is that these “AI” programs are committing an exceptionally large amount of plagiarism and copyright infringement against so many types of creators and publishers. I expect there to be massive class action lawsuits in the future. Remember Napster? Well, who’s going to be on the hook when the bill is due?
The government needs to set regulations for training data
@@heyborttheeditor1608 No, the government setting regulations is never the answer. What it should do is deny these forms of contract. EULAs and ToS's should be invalid because there is no room for negotiation and they can be changed unilaterally. Literally not a contract. Literally unenforceable.
AI works are usually transformative though, so not plagiarism
That was brilliant! Thank you for your insight. I had to think twice knowing person and human are equall to machine. That's why I would never protest or claim human rights . Only man is above all other definitions with the correct and valid rights. Like gold or silver everything else is just fiat and credit.
I wonder what will happen when, not if, open AI gets sued and during the discovery process it is uncovered open AI has thousands or even millions of copyrighted protected items stored in its databases which it then uses to make a profit by the pure nature of its business model. It is no wonder the lawyers put in overtime writing and crafting their terms of service to try to protect them from these claims but will they skirt the law?
Claim computer failure
Act of god
Anyone suing them for a copyright violation would have to prove that they're distributing something which violates the copyright of some specific thing. Using copyrighted works in the creation of something else doesn't actually imply a violation. See "Authors Guild, Inc. v. Google, Inc." as a quite extreme example where Google, as part of their Google Books program, hosted many copyrighted book exerpts and profited off of them by linking to affiliates which sold said books. Google one that case on the grounds that Google Books was a fair use by its transformative nature. ChatGPT is far more transformative than Google Books was, so it's unlikely any kind of copyright lawsuit against OpenAI would win.
I read all that myself, and immediately deleted the app not saying that I understood it all but that’s another reason why I deleted it immediately too much. But thanks to you I understand it a lot better thank you for your time and explanations 😎🇺🇸👍
Chat GPT is the means to launder plagiarism. "That wasn't me, that was them and their blasted AI" Sounds like the bar for double checking before publishing just got raised a peg.
Exactly. People worrying if this will make everyone cheat in school seem to be unaware that you can already cheat. And a lot of people do cheat. The only difference is this weird idea it's not the same because a novelty algorithm did it.
Thanks for covering this! I saw the case and hadn't seen anyone discuss in full.
I (as a programmer who has dabbled in AI) don't understand why people trust AI so much. If you got information from several random anonymous people on the internet you would verify it before believing it. That's basically what ChatGPT gives you, but because it's an AI people just believe it. And I just realized that since it doesn't always produce the same results a random person can get the same level blind trust by just making a claim and saying that it came from AI.
Yeah, it's useful as an assistant for various tasks or to fool around with, but you can't blindly trust it any more than you can blindly trust public opinion.
@@crowe6961 I actually am quite impressed with it as a language model in multiple languages. That just doesn't extend to providing accurate information. At least not without further training within a specific scope.
Where I think it gets dangerous is when people view it as some kind of oracle. Especially when the conversation moves away from language models, and into things like law enforcement use, pre-sentencing recommendations, or Medicaid funding. (The latter two have actually been used in my state. )
It seems possible to me that a chatbot could use a means that is not cut and paste to also arrive at an extensive passage matching verbatum a published work of authorship that was never accessed by the chatbot.
I think it makes sense though. Why would you repeat what a chat bot says? People are way to intellectually lazy. Use the info to point you in a direction but you verify yourself.
As far as I can tell, chat gpt 3 thinks the current date is sometime in mid 2020. It's possible that the mayor's accusations were still unproven allegations at the time that GPT's training models were created back in 2020 but have since then been proven wrong.
I asked it what today's date was a few weeks ago and it said something like may 23 2020.
Chatgpt has help me with self improvement and also helped me with advice for my book. It’s helped with character descriptions and settings. It’s amazing depending on what you’re using it for
😢 you can't help yourself
@@joeyc1725 Yes. I helped myself by using this tool. What do you mean? That’s like me saying I dug a hole with shovel and you telling me you couldn’t dig a hole with your hands? Toxic and negative was your comment. Do something else with your time please
It might be time for a federal law limiting these terms and conditions. I personally won't use ChatGPT because when I attempted to sign up, it wanted my phone number and wouldn't accept my VOIP number, and I do not give my "real" cell number to ANYONE.
Exactly!!! That is the point where I bailed from the registration and closed the browser.
Combined with the power of AI and the license terms, the possibility of profiling me and selling very complete personal data to the highest bidder is extremely high and dangerous.
The indemnification clause I believe is intended to be used in the event someone uses the API for the variety of AI engines OpenAI offers. What most users don't know; OpenAI will actually let programmers develop software that utilizes the AI engines (they have multiple).
This is why I'm pulling back to the stone age.
We have millions of people streaming into our country every year. They all need to be fed and housed. Most are going to be getting government funded healthcare. Where does the money come from to pay for all this? It comes out of the income tax of people who work for a living. This technology combined with robotics is going to reduce the amount of working people. How will all this be paid for then? It’ll be paid for by an ever increasing tax burden from the ever shrinking work force! That means you’ll be paying 70% or more of your income to the government. It cannot ever go down, because it never has gone down. Any thinking person should reject this technology and the robots and want the borders closed and immigrants highly vetted. But, this present government wants as many people dependent upon them as possible so that they are re-elected in perpetuity.
AI creators really should be held accountable for the accuracy of the information that the AIs produce.
Well, the accuracy isn't going to be 100% so there has to be some sort of lenience. There could be some restrictions on what data it is trained on, maybe.
2 questions:
1: If the slandered party is in fact not a user of OpenAI and/or ChatGPT, and hence never agreed to the Eula. Assume the found out about the slander through 3rd parties, how would that affect the ability to sue?
2: If the indemnifier is effectively judgement proof (as the man on the street user may be in a lawsuit of this scale), where does that leave OpenAI if the judgement goes against them?
These models are great at generating a lot of content that seems plausible, even if it is factually incorrect.
I am a researcher who uses GPT3 and 4, including the advanced model "DaVinci2" and "Dall-e". I want to say "Don't be an idiot". GPT, including chat GPT are basically that thing on your phone that guesses what letter or word you are going to type next. What makes the model advanced is that you can ask it to guess farther out into the future and it does this with less and less accuracy. The guy who put it in charge of the suicide hotline and then acted surprised that it told someone "Maybe you should try suicide once and see if you like it", was the kind of corporate criminal-minded oligarch who needs to go to jail for what he did. If he didn't know what he was dealing with, thats criminal negligence for not learning before doing, but we all know that he knew but did it because it let them fire psychologists to save money.
Microsoft may have lost the company by putting it in the Bing search engine because the answers it gives are basically like asking a high school kid. If it knows the answer, you are in luck, if it doesn't know, it is unaware that it doesn't know, it makes up the most plausible story it can based on what it does know and presents that as an "answer".
I'm not saying AI is useless, huge breakthroughs have happened in the past few years that allow AI models to outperform a room full of chimps with typewriters. They can perform obstacle avoidance with cameras almost as good as some insects. They can act polite as they check you into a hotel and prompt a machine to spit out a room key.
Understand these machines learn but can learn false information just as well as factual stuff. It also has no real experience with the world and no common sense. One funny example, when I first met Dall-e, he thought "plush toy cat" was a breed of cat, so asking for a picture of a mother cat and kittens always wound up with one or two plush-toys in the litter. Expect this. If you ask it to write your history term paper, do not complain if adventures of Christopher Robin and Winnie the Pooh are included as factual events.
Yeah but this is just the beginning? Do you remember how bad the Internet used to be There was basically nothing to do now Now we have access to unlimited information.
@@MidwestBoom There's a limit, its just that is more than you can absorb in your lifetime.
If you get upset for not getting your premium account access, and you submit a complaint along with screenshots of your bank statement, they say they’ll respond in a day. But really they lock you out. You can login. You use it. But it all spits out red error codes. You’re also then blocked from accessing the chat help bots to ask questions about why. Make sure you have a VPN or use duck duck go. Because you can’t go make a new account with a different email address. It’s got your IP address. Happened to me.
But is that mayor bound by terms he never agreed to?
No. He isn't.
@@NighDarke I didn't think so, but the law seems to be getting so weird these days
I think the mayor went into ChatGPT himself to see what the AI had to say about him, and his beef is with ChatGPT not any 3rd party publisher, so there would be no one to indemnify ChatGPT to send the legal bills to. There could be many cases like that where a user looks up something about themselves which turns out incorrect or defamatory, and then it would be thrown out, or at worst go to arbitration unless it was opted out.
Great discussion. I think of ChatGPT as being analogous to Wikipedia. It is not source material but can be a tool to help find source material.
The whole point of the AI craze is to lure in its study subjects 😂😂😂
This is great content. Not that I am defending OpenAi in any way, but the indemnification clause is in alot of the software and things people use everyday. For example anyone that uses apple products with icloud has aggreged "...to comply with this Agreement and to defend, indemnify and hold harmless Apple from and against any and all claims and demands arising from usage of your Account, whether or not such usage is expressly authorized by you." It also says " This obligation shall survive the termination or expiration of this Agreement and/or your use of the Service. " OpenAI is not the problem, the problem is we as regular citizens have created/allowed a world in which the corporations have all the rights and we have none, but to live without great difficulties we have to agree to their terms.
This video came out five minutes ago and there's already 42 comments and 306 views
UA-cam's comment count has been out of wack for a couple of weeks(that I've noticed), views are probably correct though, Steve's Popular
I assure you those numbers are correct.
@@stevelehto Not the comment count, hasn't reached 30 yet, right Now :)
I like when I read a comment that replies to another comment, but the original comment is not visible. Or it says there are 4 replies to a comment, but then there's only three. I know Steve is popular, but I'm not sure yt is honest.
Or maybe filters are messing with what you can see.
some jurisdictions may not enforce hold harmless clauses if they are considered overly broad, or if they attempt to absolve a party of liability for willful misconduct or gross negligence.
This strikes me as a symptom of our insanely litigious society. If someone uses a free service that's known by all to be experimental, no one should think it reasonable for that free service to be liable for anything. The fact that these Ts&Cs are needed is sad. And they are needed, as shown by the Australian mayor.
If it damages someone, you're GD right it should be liable. What a ridiculous argument.
Since it is already scouting the internet, it can include sources.
@@Arassar It doesn't do anything on its own, it doesn't think, it's just a mathematical model that generates text based on statistical probability. No one sues or prosecutes a compiler vendor for compiling malware, no one sues Stihl for a chainsaw massacre.
@@Arassar The AI itself is unable to post anything or make use of the information it provides, all it can do is respond to queries and only to whoever made the query. Posting potentially false information for the world to see or using it for something without fact checking and damaging someone can only be done by a third party. So not all their terms are crazy, just some.
The issue has been around since the early days of social media, many people and companies are way too lazy to fact check their information and simply copy/paste stuff.
Steve, thank you for all your efforts in making and posting all your videos, please keep it up. I have watched quite a few now (and Subscribed) and your style and delivery is excellent in my opinion. I am a Canadian - My concern with these APPLICATIONS (apps) is the end result of the use of them. These particular types of applications have too many uses, both positive and negative - Rabbit Holes. too many to count. I thank you for clarifying "some of" the terms of use for this one. To many people just click Yes or Accept, and never even look at what they are clicking. Like you say (in another video), we will just have to wait and see how they use it. In I.T. I have seen for myself that so few have actually read the Terms of Use for windows 95, 98, 98SE, 2000, XP, Vista, 7, 8, 10, and now 11. It is scary. They just want to play or do work with these apps, and do not think of the analytics they are scraping from the computer they are using it on. Chat GPT and others, WOW what they have in their "Terms of Use" when you really look through them is incredible. Please People - make the effort to read them first, and if you can not make heads or tails of it - Please acquire and pay someone like Steve pick it apart and explain. In the end you will be grateful for the effort.
Wow! That was a scary eye-opener! Thank you so much for sharing.
When ChatGPT was first out (or close to that time) I played around with it.
I was concerned when I asked it a very simple question, that I knew the correct answer to and it provided a completely wrong answer. That in and of itself wasn't the concern as that happens.
Where the concern came in is when I told CGPT that it was wrong and provided the right information.
I expected some kind of weird response, but to my surprise, after a few seconds (longer than usual), it came back and said that I was right and that it would update its database! Way too easy!
So a few days later, just for fun, I asked the same question, but worded it differently so that it wouldn't just pick it up from what is cached and to my surprised, it did have the corrected information.
This was just related to Arduino boards, 2 with very close names, but slightly different architecture.
With the ease at which I was able to "correct" it, I have to wonder if others out there, with some bad motives in mind, might be able to do some damage to CGPT's functionality. hmmmmmm
I would suspect that some interested in electronic warfare are already using their own AIs to train chatGPT in subtle ways to shift public opinion in the directions they want. They already do it for forums and social media, this would be a logical next step.
@@nospamallowed4890 I wouldn't doubt if behind the scenes, during development, that the military was/is involved.
It doesn't work that way. There is a certain level of randomness in how the answers are picked, but your input is not automatically used to train the AI model. What does happen is that the AI remembers your corrections within the same chat session (actually, it "forgets" after a certain low level of words exchanged, which is a big nuisance/problem in various applications).
@@clray123 Tell ya what.
You go and write some AI code and get back to me!
@@BlondieHappyGuy He's right though, having chatgpt learn directly from users would be a really bad idea as it would be very easy for competitors and attackers to flood it with junk information. All the data added to chatgpt is handpicked by humans
We need to amend the 7th amendment allowing for people to assert rights they may have contractually waived and by so doing aligning more to the constitution's reference of "unalienable rights".
😂😂 Don't build your business with someone else's AI