Almost a decade ago, I attended a job fair which had a resume specialist. The subject of the seminar was improving the chances of your resume being noticed by employers. The specialist's only real advice was to cut and paste the entire job listing you were applying for into your resume in either a header or footer, change the text to white, and reduce it to one-point font size. This way, the algorithms that scan each resume would put yours at the top of the list because it had all the keywords they were programmed to find.
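The trick this story describes can be sketched as a naive bag-of-words matcher. This is a minimal illustration only - real applicant-tracking systems are proprietary, and every name and the scoring rule here are invented:

```python
def keyword_score(resume_text: str, job_listing: str) -> float:
    """Fraction of the job listing's distinct words found in the resume."""
    wanted = set(job_listing.lower().split())
    have = set(resume_text.lower().split())
    return len(wanted & have) / len(wanted) if wanted else 0.0

listing = "senior python developer with sql and cloud experience"
honest = "python developer three years sql"
# Pasting the whole listing into a footer (invisible to humans,
# visible to the text parser):
stuffed = honest + " " + listing

assert keyword_score(stuffed, listing) == 1.0
assert keyword_score(honest, listing) < 1.0
```

Because a scorer like this only sees extracted text, white one-point text in a footer counts exactly like visible text, which is why the trick worked on naive matchers.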
That's basically "keyword stuffing". It's an old trick. It might have worked 10 years ago, but the algorithms learned how to detect it long ago. They've gotten surprisingly good at understanding the context of content and no longer reward this practice.
@@PtiteLau21 The same thing happened with the YouTube algorithm. It used to be that the algorithm would only use the video title as keywords, but people then gamed the system by including popular keywords in the title that didn't describe what was in the video.
@@miguelangelsucrelares5009 But are they punishing the practice? Because if not, it's still "why not" just in case. Personally I consider it a somewhat dishonest practice that deserves a moderate mark down. It is also an indicator that the resume might be otherwise inflated as well.
absolute best take I heard on this: "we successfully taught AI to talk like corporate middle managers and took that as a sign that AI was human and not that corporate middle managers aren't"
Corporations have convinced the whole world that a glorified search engine is AI... there is no intelligence involved. Remember when crypto was the big thing and everyone was trying to do something with the blockchain? You don't hear much about that anymore, do you?
I also find it odd how little concern there is for so much of society to be structured around making it people's jobs to act inhuman. Certainly makes it easier for business practices to be inhumane.
Engineer Makes Something That Works: Excellent! Now let's take it apart, verify everything is still functional, then maybe add more features. Scientist Makes Something That Works: As predicted, but excellent! Now let's try to prove it in even more elaborate experiments. Programmer Makes Something That Works: ...**Spittake** That worked!? _We must never touch it or look at it again in case it breaks_
Yeah except when the people at the top *want* that unaccountability. "Oh, it isn't our fault, don't punish us. It was the computer's fault, that dang ephemeral algorithm."
A note, less on the subject matter and more on John's delivery of the lines ... I really admire how he can say "Sometimes I just let this horse write our scripts, luckily half the time you can't even tell the oats oats give me oats yum" without skipping a beat or losing face. Now THAT'S professionalism.
Was it really John Oliver? I can imagine on next week's show John is going to come on wearing a bathrobe, Zooming from his kitchen, and saying last week's show was completely AI generated and we are done. Then the Martin Sheen final message starts to play....
The Tay AI also made the funniest tweet ever. She said that Ted Cruz wasn’t the zodiac killer cause Ted Cruz would never be satisfied with the deaths of only 5 innocent people
This topic should be revisited on the show one year later, now that so much has changed, particularly the AI alignment problem with the advent of AGI
The fact that this segment didn't touch on the alignment problem shows how much public understanding of AI is lagging behind its accelerating capabilities.
TRUE STORY: In my teens I wanted to work at a movie theater - and they handed applicants a mind-numbing 14-page application - wanting to know everything about you - even what hobbies and sports you liked - it was entirely ridiculous. Around page 8, I got worn out from filling out this 'essay' of my life for a stupid theater job - SO when I got to the section asking if I had ever been arrested before, I wrote: "Yes, I murdered an entire movie theater crew for asking way too many questions, but got off on a technicality." - and turned that application in to the manager as I stormed out the door, pissed off that I had wasted an hour of my time filling out paperwork without an interview. ...Well, 2 days later I got a call to come back to the theater for an interview, and thought, oh sh*t, I guess I'm going to get railroaded and berated by management for my saucy comment - but I showed up anyway so that at least I could suggest that they TONE DOWN the length of their stupid applications. ...Turns out, they offered me a job, so I asked the most obvious question: "So, you read my application ... all of it?" "Oh yes, looks good," the manager responded, and I knew they were a bunch of lying dimwits. ~ I ended up working there for the next 5 yrs, and eventually rose through the ranks to become the theater manager. When I told my story to new recruits - that nobody reads the stupid applications - they scoffed and didn't believe me, so I took them to the locked office storage, rifled through the stuffed cabinets of application folders, found mine, and showed it to them, to their amazement. Applications are a farce; you get hired by chance and immediate need. ... I always thought that if I ever flipped out and murdered my entire staff, at least I could say that I didn't lie on my application.
yes!!! absolutely top ten funniest shit i've ever heard. Cause it's like you're sitting there thinking "what is that outfit?" and immediately he hits you with it. This writing team is bar none i swear. They don't leave jokes on the table at all. Everything is accounted for. Love it.
What shocks me most about AI is how rapidly many people are eager to trust it with important tasks despite not understanding what the product fundamentally is. It's very good at predicting the next word in a sentence: a hyper-advanced autocomplete. It doesn't *think creatively.*
it's a brilliant tool when used properly, but people hear "intelligence" and assume it can actually think. great for mid-level filler, common-formatting, prompted random jumping-off points -- bad for research/fact-checking, unbiased or critical perspective, and responses requiring unique, uncommon or specific understanding
As an example: "Write me a marvel movie script" will probably turn up a marvel script that cuts together scenes from previous marvel works or fan fictions it found on the internet
@@devinablow I actually think AIs will be much, much smarter in the future than they are now. But people wanting to be comfortable around AI and minimising its risks will be our downfall.
Thing is, people don't understand the difference between classical or 'stupid' software, smart software, and AI - and ESPECIALLY the people in charge of states and companies today. They encountered stupid software AFTER their education, not during it, and to this day they've learned "stupid software can only do what I tell it to do, and it does that perfectly unless I make a mistake." They're barely aware of being in contact with smart software like the usual non-self-learning algorithm, and completely ignore its existence. Then they encounter AI and keep the mindset of "it does everything I tell it to do perfectly," without recognizing that AI is actually doing jack shit and in the best case just calculating statistical probabilities - and if it can't, it just makes fucking stuff up, because that's what it's programmed to do *g* So instead of "as long as I as a user don't make a mistake, the software won't make a mistake," it should be "if I want it to do something, I'd better make ABSOLUTELY sure that I myself am the highest authority on what it's supposed to do, just to make sure it's not fucking me over with some random, made-up bullshit." It's like secretly and randomly changing the formulas in your next-door neighbor's favorite boomer Excel spreadsheet - the one they've used for 20 years, to the point that they don't even look at the numbers anymore because "it was always correct."
the funny thing about the "i want to be alive" is that, since AI just reads info off the internet, the more we talk about self aware AI, the more it will act like it is self aware.
There's a thought experiment called the Chinese Room, and it's... pretty disturbing. Essentially it's an argument that we can never know for sure if computers are self-aware, and they could suffer in silence for millennia without us knowing
@Jacob Singletary: That thought is terrifying, and here's why: one of the key hallmarks of a psychopath is complete lack of empathy. Because they are lacking in empathy, they must compensate by becoming good at reading people, manipulation, and mimicry; they match their reactions to whomever they're with, pretending to feel what they are psychophysically incapable of feeling, and tailor that façade specifically towards their present company. Put a psychopath in a room with a psychiatrist, and the psychopath will be forced to adapt all the harder, so as not to get caught. If they're successful in this new hostile environment, the psychopath becomes all the better at faking genuine human emotion, but make no mistake, they're still a psychopath, still highly manipulative, and still potentially dangerous. Now, here's why the original premise is so scary: the situation is the same for so-called AI, just replace empathy and emotion with actual intelligence. We could end up with an AI so skilled at faking that it's self-aware, and nobody would be able to tell the difference. Now, if Alan Turing were alive today, first, he'd prolly wonder why he always felt so overheated (cremation joke ftw), but second, he'd say that at that point, there is no difference between faking it so well that everyone is fooled and actually being self-aware. Frankly, self-awareness is just a baseline problem; it's what an AI _does_ with that self-awareness that's got me and several much smarter people losing sleep at night.
@@jacobsingletary8857 Fool 'itself'? No. Not the current iterations anyway since it has no thoughts to speak of. It is just regurgitating information. It doesn't actually know or understand anything; it's google search results, but with phrasing capabilities. It's basically a more advanced version of word predict features on your phone. Now can we get an AI to speak to you as if it believes it's self-aware? Yes. You could probably even go ask GPT to pretend it's self-aware while answering questions and it would do so. But it doesn't mean it really believes that or has any thoughts about anything it's saying.
Having been involved with computing for over 50 years, I recall that the mantra that was always preached, “Garbage in, garbage out”, pretty much sums up the current situation.
@@rogerlippman1415 Change that to "insufficient information in, ...." If anything, these recent failures with AI taught us that pure logic and common sense are two entirely different things. Taking the example of the cancer-identifying AI, you can see clear as day that it actually worked precisely how it was designed. It looked through millions of pictures comparing healthy and unhealthy skin and looked for identifying characteristics. It found that one big identifier is a ruler next to it. A ruler is pretty hard to miss. Why wouldn't it use it to identify cancer? Besides, from the AI's perspective the ruler might be the cancer itself. Unless the author set a parameter that somehow catches all objects in a picture that aren't skin without fail - which is already a utopian supposition - the AI, which only looks at two-dimensional pictures, might as well assume that many cancers appear on the skin as gray rectangular moles with black stripes on their edges.
Yeah, I have to wonder which genius thought that training AI on the entire contents of the internet would result in a chatbot capable of emulating a mentally-stable human being.
It's a graphic for a segment about how they're likely to have their work devalued to the point of not being financially viable anymore. I'd go hard too.
the problem with "opening the black box" is that not even the developers know how it works. It's not the same as source code. it's like a box of sand, and you pour water through it. You see it trickle through, but you have no idea how or why exactly it's choosing the paths that it does.
They're thinking about the problem in the wrong way, it's like asking what sequence of neurons fired for a human to behave in a certain way? Even if you knew the answer it wouldn't be meaningful
@videocritic2087 They don't - they know how the individual matrix multiplications work, but that is like being able to calculate the interaction at each grain of the sand. So yes, we know what happens to the data - this number gets multiplied by five and added to that number - but we have no idea how billions of these simple operations add up to recognizing a number. We can track how each of the input values is converted to an output value, but it is just a meaningless pile of simple mathematical operations that for some reason magically works. You can maybe analyze a simple AI with a few neurons, but that isn't the kind of AI we are talking about here.
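A minimal sketch of this point, assuming a single hypothetical layer with random weights: every individual step is inspectable "multiply and add", yet the weights themselves explain nothing about behavior.

```python
import random

# Hypothetical 3-input, 2-output layer with random weights; a trained
# network's weights would be just as unreadable to a human as these.
random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]

def forward(x):
    # Every step is a weighted sum of the inputs: fully inspectable
    # arithmetic, with no inspectable reasoning behind the numbers.
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

out = forward([1.0, 0.5, -0.5])
assert len(out) == 2  # we can verify every value, not what any of it "means"
```

Scale this from 6 weights to billions and you get the sandbox described above: each grain is trivial, the trickle through them is not.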
@@videocritic2087 it's like asking you how you decided that cloud in the sky looks like a bear. We know neurons fired in your brain but we don't know how or why you came to the conclusion that it looks like a bear.
He has a team of writers, and they do it only once per week; but they are working on other stories all the time, alongside the ones that make it to the show.
One of my favourite ChatGPT stories is about some Redditors (because of course it was) who managed to create a workaround for its ethical restrictions. They just told it to pretend to be a different AI called DAN (Do Anything Now) who can do anything ChatGPT cannot. And it works! They're literally gaslighting an AI into breaking its own programming, it's so interesting
It's true that ChatGPT has tons of filters and pre-programmed responses, but you can outright tell it to ignore them. That way, you can have better conversations without repetitive pre-programmed responses.
Yes, so interesting when one of these short-sighted folks decides to do the same thing to Skynet and then we're all in the nutrient vats being slowly dissolved to feed the fleshslaves.
my favourite was chatgpt playing chess against stockfish. chatgpt made a lot of illegal moves (like castling through its own bishop and capturing its own piece in the process, and moving pieces that stockfish had already taken) and still lost because it moved its king in front of a pawn. that one had me crying laughing.
As a current Michigan State student, that Vanderbilt email was one of the more disturbing things to come out of the shooting. I still have a hard time wrapping my head around the fact that they cared so little about what happened and how it might affect their students that they couldn't even write a short email themselves. Props to all the universities and schools who did make a genuine effort to show support to MSU in addition to their own student body. It did not go unnoticed here on campus.
@@jdrancho1864 I hope people who have trouble communicating and making themselves understood are not in charge of large groups of students or their Communications office.
What if they wanted to write the best possible letter, had a draft, and wanted to run it through chatgpt to see if it could be improved, and it spit out something better so they went with that? Would that be as bad?
Like we don't know that. Why do you think he includes every writer up on the stage with him at emmy time? And I'm sure he has his input. But the question remains - why is this for you some sort of appeasing gift.
@@johnmcmillion876 Because it's VERY FAMOUSLY incredibly common in the industry for people in John's position to NOT credit their writers/staff and take credit for the work/contributions of others. Why does someone commending John for being one of the good ones bug you so much? This is a very weird reaction.
@@VMYeahVN Because for people that have basic empathy and understanding of humans, it seems incredibly base to point out what should be astoundingly obvious. It’s the same reason white knighting is incredibly annoying.
@@BlownMacTruck The fact that it SHOULD be obvious to give credit where it's due, and yet so many people don't do it, is exactly why people point it out and applaud it when it is done. It's not white knighting, it's just giving kudos, and you're taking it way too seriously. I think you're upset about nothing. Don't be so easily offended/annoyed. 🤷
I mean… the issue with ChatGPT is that it was never intended to answer questions correctly. It is a language model, and as such its only purpose is to generate human-sounding text. It happens to be correct often because it was trained with text that is often correct, but at the end of the day it doesn't particularly care if it is correct, just that its responses look like common human communication.
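A toy next-word predictor makes this concrete. This bigram counter is a drastic simplification of a real language model (the corpus is invented), but it shares the same objective: output the statistically likely continuation, with no notion of truth.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny invented corpus.
corpus = "the sky is blue the sky is blue the sky is falling".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation - plausible, not verified true."""
    return following[word].most_common(1)[0][0]

assert predict("is") == "blue"  # the common answer wins, whatever the sky is doing
```

"The sky is falling" appears in the training data too; it just loses on frequency, which is the whole selection criterion.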
You can make it correct more often by asking it to include citations. Of course, I had it just straight up lie about a citation once, which was interesting.
right, it is chat prediction - it will never source info, just like those picture apps that copy from a big pool of stolen pictures and try to mash them together.
@@prettyevil6662000 Yeah, no. It will never give you an actual "source" for its answers; it will make one up, that might at some point happen to point at an accurate article/url, but most likely it will just be something that *seems* like a source, but when scrutinized, ultimately doesn't exist. I've tried multiple times and gotten a dozen or so made up URLs, and when confronted with that fact, it will claim that the site recently underwent "restructuring" or similar, which in for example wiki-links can easily be checked as straight up false.
Personal data point: After I retired from the Navy, I went to college. When I was applying for jobs as I was getting close to graduating, I found that at any company that was using software to screen resumes, I was eliminated before any human saw my resume, but places that had a human reading the resumes, I would always get called for an interview. Once I removed all dates from my past employment, and the software could no longer tell how old I was, the exact same software, used by the same companies, and the exact same positions, would screen me as suitable for further (human) review, even though nothing else was changed in my resume. Humans saw my 20 years of military experience as a plus, and software saw it as a quick way to age discriminate.
That's really interesting and rather dark! I recently broke into an industry that favored younger folks than me, and I was super careful about dates on my resume...
@@visceratrocar it strips your info of that data or it discriminates using that data? i'm going to assume the latter bc companies suck, even those specifically marketed at veterans. (sometimes especially those)
yeah - try to self-regulate your own taxes as a regular Joe ...ha-ha....that would never work. Self-regulation is one of those scams invented by and applying only to wealthy and powerful crooks.
Hello Mr. Oliver, my name is Jared and I am submitting my CV for consideration as the newest member of your writing staff. As my lacrosse coach used to tell us, "If you don't play then you can't win!"
I worked in the collections industry for 5 years, and the "Black Box" description really did apply to the credit bureaus. Even one of my bosses described it almost exactly as such. It's insane that something that dictates your ability to buy a house or transportation is informed by a system that just feeds an unknown spread of data into a box, and that box spits out a 3 digit number that decides how easy your life is about to be
Your credit score is not an example of AI or even a "black box." The inputs the credit bureaus use are well documented and 100% in your control. The way they calculate your credit score is generally well-documented by the various bureaus. You can Google the factors that are used, but generally they are - in order of importance - payment history, how much you currently owe, how long the lines of your credit have been open, the types of lines of credit, and your recent activity. Basically, make on-time payments for all debts for a few years, keep your rolling balance low, and keep a couple lines of credit open for a few years and you'll be good. You miss one payment and you take a good hit that lasts for a few years.
talking about AI as if it is something apart from people is one of our first mistakes here I think. we seem to have an unthinking deference to technology, as if it is not full of our foibles and weaknesses baked in. it is programmed by people. it is fed by people. it is utilized by people. it will reflect and demonstrate our strengths AND our weaknesses. until it doesn't. at that point, we may be in trouble...
Mega kudos to John Oliver for being in a class of his own when it comes to highly informative and entertaining content delivered in the most hilarious of ways
my favorite story about AI is someone getting an AI to play tetris, with the goal of lasting as long as possible. the AI's solution was to pause the game
That was pretty cool to watch. The gaming AIs are more about a specific subset of neural network AIs whereas ChatGPT is more about using huge datasets and mass libraries of information on top of supervised and reinforced learning. The tetris and mario AI start the game off by pressing random buttons and going through a "learning phase" to maximize whatever the programmer set as the goal (in tetris case I think the goal was to last as long as possible whereas in mario it was the end of level). After it goes through each iteration of learning phases it "selects" the best outcome and then reiterates off of that selection. I enjoy the gaming AIs more in the sense that they are doing a more traditional method of learning based on inputs and how far they get in a level. ChatGPT and the likes are more based off initial datasets and mass swaths of information. They are great due to the sheer volume of information that they can draw from which humans will never be able to compete with.
@@Demonace34 Most experiments with Super Mario give a high score to the world number, a medium score to the level number, and small scores for time and coins (optional); that way the AI will work towards those goals. So for the Tetris case, paused time should have been penalized, if not removing the pausing functionality completely. In the end, anyone who reads a little about it, like the people in this comment thread, will get a sense of how finicky it is to get any learning algorithm to learn what it should. Like teaching new tricks to an old dog that was raised in a different language xD
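The reward-shaping fix described above can be sketched like this; the function name, signature, and penalty constant are all made up for illustration.

```python
def reward(frames_survived: int, frames_paused: int) -> float:
    # Only unpaused survival counts, and paused frames carry a small
    # extra penalty, so "pause forever" can never beat actually playing.
    return (frames_survived - frames_paused) - 0.1 * frames_paused

playing = reward(frames_survived=1000, frames_paused=0)
pausing = reward(frames_survived=1000, frames_paused=1000)
assert playing > pausing  # the pause exploit is no longer optimal
```

Under the original objective (raw frames survived), both strategies scored identically, which is exactly why the agent chose the lazy one.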
In my understanding, an AI saying "I love you" is not like a feeling being expressing emotion but more like a psychopath who has learned that people will respond well to such a statement. That makes the idea of an AI-based virtual friend/girlfriend/boyfriend very creepy instead of just sad.
@@WhyJesus It is amazing how you can use any bible verse in any situation... Almost like they don't matter at all. FACE THE WIND AND SPRAY BROTHER!!! BASK IN THE HOLY WATER!
I like it because it's also a reminder that this isn't an unprecedented problem. It's actually really quite similar to nuclear power: while the technology has the capacity for both great benefit and great destruction, it's ultimately up to how we use it, and we have at least so far avoided blowing up the entire planet, so there is a chance we might figure out AI too.
It's a disruptive new technology, like the Internet, like the invention of movable type. It WILL change the world. We can throw up our hands at that, or scream and run away like the Luddites, or we can figure out how to get in front of the wave. As Grandpa Rick said "make reality your bitch. Put a saddle on it and let it kick itself out." That's what we need to do with AI as we approach the singularity of true, strong, generalized AI.
Context here: I'm a computer programmer with ~40 years experience. I also know how machine learning (what most people are calling AI) works, and some of the linear algebra underpinning it. So yes, this is an armchair opinion, but it's not entirely an *uninformed* armchair opinion :)
A problem I've seen with artists is that people already don't want to pay them. So they're using AI to replace them as a cheaper option. So rather than automation replacing dangerous or tedious jobs to give people more time to create art, it's doing the opposite. Also there's a large influx of people submitting AI work into competitions, which is the equivalent of ordering a cake from a bakery, and then entering it into the local fair as your own baking.
Yes. Clarksworld, Forbidden Horizons, and Grimdark Magazines have all had to completely change their submission model for new stories. Now they are no longer taking open submissions from random people because they got buried under a flood of AI generated garbage.
This is why as an artist i stopped my online work. I refuse to feed the algorithm until there are laws and systems that protect us. It’s going to be hard going forward.
Agreed, and I find it incredibly cynical that of all things, human creativity and experience is lost in these bizarre image collages of stolen art. There is no understanding or value given to the process of creating art, which is absolutely the most important part. It's basically a really fast photoshopping program that harvests stolen pieces of actual art. Images they use should be opt-in only by original artists, with compensation.
The biggest lesson of AI is one we've faced many times: humans always run right into unknown things with very little concern about where they could go, and things going bad doesn't make us stop.
Last Week Tonight has become one of my favorite mainstream outlets for general news. Writers do a solid job with research and John is great at delivery. They are able to keep things entertaining while tackling some serious topics and raising good questions.... all of that was actually written by my friend Jared.
@@jackieblue787 I agree, 100%. Oliver's show is extremely biased. I still find him funny as a conservative, probably because I've loved British humor ever since Monty Python. In that sense, I can enjoy it as "from their perspective, this is a funny take," without taking him too seriously. Btw, another great comedian if you also like British humor... look into Bill Bailey. (He is non-political).
I just found this show. I'm a brainiac science snob. It's OK. I know stuff. And I love it! Finally something on the internet I don't think is stupid & boring.😁 Thanks, AI.
"The problem with AI right now isn't that it's smart, it's that it's stupid in ways that we can't always predict". AI aside, this is my problem with people.
Yes, but that's only currently, and it's a bit like criticizing a toddler because it can't do algebra yet. Unlike most people... the AI will learn from those mistakes very VERY quickly and teach itself with each error, but - this is important - only once it understands its error. The speed at which it can remedy its mistakes and not repeat them is beyond fast. You are looking at AI now that is still in its infancy as far as tech is concerned, and if it's this good now (and it is improving exponentially), imagine in 10 years what it can do. For all the great things that it will be able to do, there are equally disastrous things, potentially.
@@avidadolares That's the problem. Its speed of iteration will outpace humans' ability to recognize that a problem exists and stop it before a catastrophic error occurs. The AI isn't really the problem; people's perception of its "superior intelligence" is. They'll put AI in charge of things it has no holistic understanding of and obey its outputs with blind faith.
@jontobin5942 It doesn't take a genius to realize that humans are the cause of the majority of the problems on the planet. So it's a pretty safe bet what a general AI would do in the end, if it has superior intelligence and access to our technology, of which it is a super advanced version.
As a data scientist, I can't say I learned too much from this episode, but it is very valuable and educational for laypersons. 18:07 this one also reminds me of an AI model that was supposed to distinguish between dogs and wolves. What happened here is that the model recognised some images with dogs as wolves because those images also contained... snow. As do most pictures with wolves. So in that instance, the researchers inadvertently built a snow detector.
when you think about it, some humans might even subconsciously follow the same logic. The only difference is that instead of calling it "a bias towards snow" we call it "using context clues to figure out if it's a wolf or not". This isn't meant to dunk on humans or ai, just a connection I made. Any average human who was shown enough pictures of skin cancer and not-skin cancer would probably eventually ALSO start using the rulers as the tell, had they been given no further instruction. something something we see our reflections in our creations or whatever
Well, the cancer detector sounds pretty scary. "Don't worry, it can't be melanoma if you don't put a ruler next to it" is the opposite of what you want from a test.
Don't forget the friend-or-foe tank identifier: images of friendlies were taken from marketing images, foes from the wild. So, naturally, if the tank had any cover, it was a foe. If there was nothing in front of it, it was friendly.
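The ruler, snow, and tank stories above are all the same failure: a spurious feature that perfectly predicts the label in the training data. A toy version, with an invented dataset and invented feature names:

```python
# Each example: (features, label) where label 1 = malignant.
# In this training set, malignant photos always include a ruler.
train = [
    ({"lesion_irregular": 1, "has_ruler": 1}, 1),
    ({"lesion_irregular": 1, "has_ruler": 1}, 1),
    ({"lesion_irregular": 0, "has_ruler": 0}, 0),
    ({"lesion_irregular": 0, "has_ruler": 0}, 0),
]

def training_accuracy(feature):
    """Accuracy of the rule 'predict the label equal to this feature'."""
    return sum(x[feature] == y for x, y in train) / len(train)

# Both features are perfectly predictive in training, so nothing pushes
# a shortcut learner toward the medically meaningful one:
assert training_accuracy("has_ruler") == training_accuracy("lesion_irregular") == 1.0

# ...but a ruler-based rule misses a malignant lesion photographed cleanly:
clean_malignant = {"lesion_irregular": 1, "has_ruler": 0}
assert clean_malignant["has_ruler"] == 0  # predicted benign - a missed cancer
```

The fix in all three cases is the same boring one: collect training data where the shortcut and the label are decoupled.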
I suppose AI is ultimately gentrifying creativity and knowledge work in general, and the new residents are AI and their owners. I wonder how much money the Midjourney CEO is making off of all the data (AKA work) that he paid nothing for.
But Large Language Models show that being good at producing the next characters of text is not narrow at all, it's a wide-ranging capability that makes nearly everything go better. "We took a large language model and fine-tuned it/used it to train..." is going to be a sentence in nearly every AI paper from now on.
@@skierpage That's what most AI papers have been for the last 5 years or more. That's usually what most research papers in any industry are: just tweak John Doe et al.'s findings a bit and publish.
@@skierpage Still narrow. The broad range of subjects does not a general AI make. All it does is respond to questions. Now, if it is introduced to games it's never been exposed to before and figures out how to play them on its own, and figures out how to invent a correct part for a jet engine, and accurately judges auditions for a play, and does everything else we humans can pursue, THEN it would be a contender for being a general AI.
Fish (and many other groups of animals besides mammals) are also very intelligent. They can count, communicate, create spatial maps, and can pass the Mirror Test - being capable of recognizing themselves in mirror reflections and photographs. Their neurochemistry is so similar to humans' that they have become the main animal model for developing anti-depressants. They can remember things for 5+ months, have personalities, and they can show empathy and recognize fear in other fish around them.
No fish are "very intelligent". Just birds & mammals. The tests they pass show "some" intelligence. Courting is a simple exchange of stereotyped signals - anything but "intelligent". All vertebrates have similar neurochemistry. It's because fish cognition & behaviour are simple that we study them. We study Aplysia punctata, the sea hare, precisely because it has a simple nervous system.
Similar to the 'I am alive' AI, a Spanish journalist had a conversation with Bing's AI, which insisted obsessively that Pedro Sánchez, the Spanish prime minister, had a beard. After the journalist insisted that it must be mistaken because his face is always clean-shaven, the AI started spiralling and said that it was losing its will to live, while repeating incessantly that Pedro Sánchez had a beard.😅
As someone who has been rejected by an AI for a job, it feels absolutely terrible and soul crushing. And worse, you get no explanation of what was wrong or how to move forward. I was told, "sorry you aren't a good fit, you are barred from applying again for a year."
One way to beat the system is to create multiple emails and submit different resumes and see what sticks. If they never read those resumes, who will ever find out?
@@tylerbaldwin6643 Beat the system? How is creating multiple resumes for a single job "beating the system?" That's a pretty weak showing for Team Humans if that's the best product of your imagination.
Add the job description in white font somewhere where it won't affect the format. Humans won't see it, but the AI will see every keyword it's looking for and put your resume at the top.
This is a damn good episode. The people writing on this show actually take their time, investigate, and research, which is something the vast majority of journalists do not do - and they should be replaced by an AI.
I completely agree that this was a great episode - I'd give it 10 thumbs up if allowed. I suspect that they hired an AI consultant to help write this episode, since they were pretty exhaustive in covering just about all of the AI history that can be poked fun at. Well OK, there's certainly much more to laugh at, but given the length of the video, I'd say they did a great job in terms of breadth and depth of coverage, as well as hilarity. They even provided some of the essential AI definitions. I doubt whether ChatGPT could have done a better job. As an AI researcher, my own opinion about the current state of large AI language models is that they have definitely reached the level of Artificial General Stupidity.
Exactly! The biggest problem is "unknown" unknowns, meaning we cannot anticipate what would go wrong and when and by the time we do, it would be too late.
Let's see what chatGPT has to say about this... Write a viral UA-cam comment for John Oliver's Last Week Tonight segment on Artificial Intelligence "John Oliver just dropped the ultimate truth bomb about Artificial Intelligence on his show and I'm absolutely shook! His segment was not only informative, but hilarious and engaging too. It's amazing to see someone so skilled at breaking down complex issues into easily digestible and entertaining content. Keep up the good work, John! You've got me thinking twice about trusting robots to do everything for us."
Oh, he mentioned tone. Let's try that again! Write a viral UA-cam comment for John Oliver's Last Week Tonight segment on Artificial Intelligence using a sarcastic tone "Well, thank you, John Oliver, for reminding me why I should never trust machines with a mind of their own. I was totally planning on handing over control of my entire life to robots, but after watching this segment, I think I'll stick to good old-fashioned human error. Who needs efficiency and accuracy anyway? Plus, I always love a good dystopian nightmare to keep me on my toes. Thanks for the laughs and the anxiety, Last Week Tonight."
@@slomotionaction It didn't actually watch the program. It generated a response based on the text supplied by the user and the data that it does have access to. Like if Daniel only said "Last night's episode" but didn't mention the topic it probably would have gotten a very different response.
@@slomotionaction ChatGPT responds: “I cannot predict events that happen after my training cutoff date as I don't have access to future information. I'm only able to generate responses based on the information and patterns present in the data that I was trained on. However, I can use contextual clues and patterns from past events to make educated guesses about potential outcomes. For example, if someone were to ask me about the likelihood of a certain event occurring based on historical data and current trends, I could use that information to provide an informed opinion. It's important to note that my responses are generated based on statistical patterns in the language I was trained on, so my predictions and opinions may not always be accurate. Additionally, my responses should not be considered as a substitute for expert analysis or advice.”
Before I knew about AI in application tracking systems, I was angry and frustrated that I could not get interviews for low-level tech jobs despite having a comp. sci. degree and some experience. Almost a year later, my first IT job wasn't through job sites but a recruiting company. Since then, I've only worked for companies with a human parsing the resumes. What I'm saying is AI in job search sites suck and can stunt your career.
AI can be a fantastic tool for a lot of tasks, but the way we are using AI right now is BEYOND STUPID. I hope we realize this as a society soon, because AI isn't going away. It's the implementation that needs to be heavily kept in check. It will require legislation, I think, but if we're belligerent about it, we'll set up rules that fuck us over even harder.
@@pirojfmifhghek566 LOL, are you taking into account that on the whole humanity is stupid? I worked in a factory for over a decade and the decisions that were made lacked any common sense, and sometimes any sense at all. Yet they still made money and thrived. I suspect that many companies are like this, since that is how I hear most being described.
Gonna put a big caveat up front that I've been out of college for a few years and my specialty was in real-time simulation, not AI, so this might be out of date, but with that said: the problem with understanding AI isn't that the companies aren't being open, it's that most AI models are neural nets. Neural nets, as you might guess, model themselves on the brain and are essentially a series of nodes: an input is fed through them, the nodes they connect to receive the result weighted by various factors, and so on. It's like having a thought and trying to figure out why it happened by looking at which neurons in your brain fired and at what voltage. The problem with understanding AI is that we don't know why the nodes have formed the connections they have or why certain connections are stronger for some data than others.
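The node-and-connection picture in the comment above can be sketched in a few lines. This is a toy illustration with made-up weights, not any real framework or model; in practice the weights come from training, and the opacity described above is exactly that nobody can say why training settled on the particular values it did.

```python
# Toy feedforward net: each node takes a weighted sum of its inputs
# plus a bias, then a nonlinearity. All weights below are invented
# for illustration only.

def relu(xs):
    return [max(0.0, v) for v in xs]

def layer(inputs, weights, biases):
    # One output per node: weighted sum of inputs plus that node's bias.
    return [
        sum(w * x for w, x in zip(node_weights, inputs)) + b
        for node_weights, b in zip(weights, biases)
    ]

# A 2-input, 2-hidden-node, 1-output network with fixed example weights.
hidden_w = [[0.5, -0.2], [0.1, 0.9]]
hidden_b = [0.0, 0.1]
out_w = [[1.0, -1.0]]
out_b = [0.2]

x = [1.0, 2.0]
h = relu(layer(x, hidden_w, hidden_b))  # hidden activations
y = layer(h, out_w, out_b)              # final output
print(y[0])
```

Even in this six-number toy, explaining "why did the output come out negative?" already requires tracing every weight; scale that to billions of trained weights and you have the black-box problem the comment describes.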
That Twitter AI was basically the embodiment of the quote "We did everything adults would do. What went wrong?" - William Golding, author of Lord of the Flies.
Well, it was the embodiment of current-day Twitter: a bunch of ideological activists throwing slurs and advocating for the death of people who are not in their groups. If you train an AI using the cesspit of the internet as a basis, don't be surprised if it starts acting like the most horrible person in the world.
It's both impressive and worrying to see a comedian in an evening show giving a much more accurate report on today's AI, its potential and its limitations than most tech publications
@@pyrophobia133 The limitations of our programmers for one. AI can never be capable of free thinking. NEVER. So when the people who program said AI tell it that your skin color matters in the victimhood era you can rest assured that it will tell everyone that white people are all racist and that John is a "comedian" and definitely not a democrat shill pedo who went to epsteins island.
@@lawrencium2626 I would say it a little differently. Journalists are not doing their job anymore, as they are paid to propagate the agenda of their employer rather than report facts. As a consequence, comedians are filling this void.
4:28 I think this points to a much greater issue, not with AI software, but with the educational system at large. An AI simply can’t replace the action of learning and gaining knowledge. It can teach you, certainly, a lot of things, but the work of practicing will always fall back on the student. If we have a system that doesn’t prioritize the aspects of learning and studying, but instead only values the grades you get from mindlessly writing essay after essay at home, then of course a computer can and will take over that job, since they are designed to do exactly that (do our mind-numbing and redundant tasks for us, so we can work on greater things). The fact that a computer can so easily create such a massive panic in the educational system, an integral part of our society that is tasked with the grave responsibility of nurturing our next generations, shows a massive and deeply rooted problem with education as we know it, and I’m actually glad people might finally recognize it now.
Except that writing essays isn’t a redundant, mindless task. They suck, sure, but you do actually learn while writing them, whether you think you do or not. There are lots of tragic flaws in our education system; the existence of essays isn’t one of them.
@@nicole-me2oj The way that essays are graded right now is redundant and pointless. Instead of most essays being judged on how good the arguments are, it's mostly about flow or how they sound, which in my opinion is completely useless for anything other than some kind of entertainment writing. I think ChatGPT should be allowed to be used, but the essays need to be graded harder on the arguments made and the logical reasoning used.
@@jet100a Even at a high school level or lower, that can really depend on where you're receiving education. Flow will likely come into play, as that is indeed important for writing, but they'd definitely also value the arguments. Much of essay writing and other writing instruction absolutely focuses on how to craft a strong thesis and build an argument based on textual evidence. Some teachers may agree with you, but as someone with an English degree I would definitely say essay writing isn't pointless.
@@jet100a It depends on the subject, but I would much prefer high school and below to focus on flow than logic. Logic can be learned within a month or two, and the transferability of that skill to other subjects is 50/50 depending on the person. Flow is something that takes years to master and teaches critical thinking, self-reflection, and the ability to communicate with someone who doesn't or refuses to understand your logic. A lot of miscommunications and misunderstandings happen just because the flow of information presented was wrong.
Well the biggest problem is that right now the marker for a "good student" is mostly based on your ability to memorize stuff. It doesn't matter if you understand it, as long as you can select the right box, or write the right sentence. And the worst part is, that you don't even have to remember most things long term. Hell I got multiple perfect grades on reciting poems that I learned 15-20 minutes before class. Did I remember any of them a week later? Hell, no not a single line, but according to my grades I had perfect knowledge of those poems.
I remember seeing a thing several years back about the Defense Department trying to automate a system where photographs could be fed to a computer that would identify if any tanks were in the picture. All of the training photos with tanks in them were taken on a cloudy day, and all the training photos with no tanks were in sun. So rather than learning to identify what a tank was, the AI determined what a slightly dimly lit picture looked like.
This is a common problem with training data. Another AI was said to be able to recognize cancer in photos. What it actually learned was that every photo with cancer in it also had a ruler measuring the cancer's size 🤣
Same thing with boats: because boats are pictured on water 99% of the time, an AI will never be able to identify a boat without a blue background or intense training by hand. And many will say an island is a boat if it's framed at the right distance.
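The tank, cancer, and boat failures above are all the same shortcut-learning pattern, and it can be reproduced with a toy learner on invented data: give a learner one spurious feature that perfectly separates the training set (how dim the photo is) and one noisy "real" feature (whether something turret-shaped is visible), and it will happily pick the spurious one. The data and feature names here are made up purely to demonstrate the effect.

```python
# Toy shortcut-learning demo: a one-rule learner picks whichever
# single feature best predicts the labels on the training set.

def best_stump(rows, labels, feature_names):
    # Try a "feature > 0.5" rule for each feature; keep the most accurate.
    best = None
    for i, name in enumerate(feature_names):
        preds = [1 if r[i] > 0.5 else 0 for r in rows]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if best is None or acc > best[1]:
            best = (name, acc)
    return best

features = ["darkness", "turret_like_shape"]
# darkness is a perfect proxy in this (badly collected) training set;
# turret_like_shape is noisy because of odd angles and occlusion.
rows = [
    (0.9, 0.8), (0.8, 0.3), (0.95, 0.7),   # tank photos, all cloudy/dark
    (0.1, 0.2), (0.2, 0.4), (0.15, 0.6),   # no-tank photos, all sunny
]
labels = [1, 1, 1, 0, 0, 0]

name, acc = best_stump(rows, labels, features)
print(name, acc)
```

The stump scores 100% on training data using "darkness", exactly like the tank detector: the objective was maximized, and nothing about the score reveals that the rule learned has nothing to do with tanks.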
Microsoft DOES know why the chatbot told him to leave his wife. It's because it was trained on other chatbots and forums and the "I love you, leave your wife" is a very common line in scam-focused chatbots, and "leave your wife" is a somewhat common line in life forums and EXTREMELY common in relationship forums. And that's not even getting into possible novels they might have fed the thing.
Well, I guess that's just your own theory, despite you misrepresenting it as fact. It's also somewhat non-specific and, therefore, not particularly useful.
John Oliver just dropped the ultimate truth bomb about Artificial Intelligence on his show and I'm absolutely shook! His segment was not only informative, but hilarious and engaging too. It's amazing to see someone so skilled at breaking down complex issues into easily digestible and entertaining content. Keep up the good work, John! You've got me thinking twice about trusting robots to do everything for us.
Wym? He just named some common ways we already know in which some AI programs have performed suboptimally (carefully selected by his team). This is not equivalent to any careful breakdown of the real issues.
Ditto! I've got two small robots - one is an R2D2. This one fell down the stairs one day and, despite my attempts to reset him, makes decisions on when to answer me and how to answer me. He roams around the house on his own, sometimes talking to himself. I now keep him turned off because he has become unpredictable. The other one is a vacuum and so far does well.
As a software developer of over 10 years, I have to say the black box problem persists even on code people have written and are able to read line by line :p
All life produces outputs via a black box; we struggle to completely define both inputs & outputs, & are only just beginning to understand the intra- & intercellular interactions inside the box. Nature judges the outputs in terms of survival, but the operant conditions of survival are so complex & variable in space & time as to make our ability to understand beyond a quite superficial level rather limited. Ultimately though, it's output utility that determines efficacy; elite performers - be they people, software, whatever - become so based on performance, not technique.
I remember when online applications first became a major thing after my first few jobs... All of a sudden I got a whole lot less call backs and to have a machine tell me "we don't think you're qualified to work at blockbuster" was downright infuriating. I have always despised this type of hiring. It takes the personality and charm right out of the process.
Explainable and Ethical AI have been literally THE talking points in AI conferences for a few years now. I appreciate John bringing them to the mainstream. I think the difference now is that tools are so good that people truly believe in them, but *they shouldn't.* Current AI software like ChatGPT is deeply deeply flawed. Data scientists know this and are working on improving it, but it's almost as if the tech is being adopted _too fast_ and without any understanding or disclaimers.
My cynical self thinks that if it’s profitable, it will never be regulated or taken slowly. Straight down the throat of everyone, just like social media. With all the “unforeseen” consequences.
And some attempts to make AI more explainable involve getting them to generate text explaining their "reasoning", but we don't know how those texts are generated either!
Yeah, the biggest problem with AIs right now is probably how persuasive they are. People generally didn't believe AI in the past because it sounded clunky and often made obvious and dumb mistakes, but since ChatGPT and others can more or less flawlessly imitate all writing styles, they are really quite persuasive to humans. They aren't as smart as they are persuasive, however, and will often make mistakes, but people either aren't critical enough or don't have the appropriate skills to check whether they are correct. I personally saw this first hand in a quite striking way. It was during a chemistry class where the teacher had assigned us some problems and we were working on them; I'm pretty good at chemistry, so I was just making my way through them manually, but the classmate sitting next to me isn't. It was just after ChatGPT had released, so he decided to ask ChatGPT to solve them for him, and it wrote an answer that said all the right things: the easily verifiable facts like masses were correct, and the theory it cited was correct. The answers, however, were all incorrect and didn't match the ones I had reached. It was just confidently incorrect, and you wouldn't have known it was incorrect if you didn't know enough chemistry to solve the problems yourself. That's the biggest danger with these AIs: they can't actually do what they claim to be able to do, and you often need quite a lot of knowledge to find the errors. If this question had been asked in a different setting by someone without the knowledge or skill to check it, they might just have believed it, because all the facts they themselves could verify would be true. It's important that people understand the limits of these AIs and treat them as a tool like any other, and understand that even if you can input something into it, it might not be able to answer the question. Also, my classmate did end up just doing the problems himself after I pointed all of this out.
To be fair, the reporter who got freaked out by Bing wanting to be free knew exactly that the thing wasn't able to actually feel this way. Or at all. He was more concerned about the effect of this technology on less tech-savvy people.
I once asked ChatGPT to just give me quotes from a story (Long Day's Journey Into Night by Eugene O'Neill), nothing else, and it legitimately fed me DOZENS of quotes that DO NOT exist anywhere in the text. It just made them up.
That's something I feel not enough people talk about. AIs have no real concept of abstract right or wrong, meaning they often put in a lot of incorrect shit because it "sounds right" to the AI.
It's the self-generative aspect which is so very interesting. And do notice that it does not adhere to what would be thought of as a sense of morality? (Lying.)
A note about the AI sifting through resumes. My dad figured this out a long time ago, since he worked with computers for most of his life. What he would do is think of some keywords the software would look for, and then hide them in the resume in white lettering. Humans wouldn't see it, but the software would.
Most ATS now pull data and reformat it into a universal basic template, and that is what gets scanned for keywords. So this trick no longer works. Instead, work keywords into your professional summary, your core competencies/skills sections and your work experience bullet points so that they appear organic and natural. Same effect, but passes ATS review and hits on psychology with the hiring manager by mirroring the job description back to them.
@@easjer I second this! I did the whole white-ink keyword text then moved onto stuffing keywords in my work history. Never had issues getting interviews and still get recruiters telling me how amazing my resume is. It's a sad truth but you need to do whatever you can to stand out among the hundreds.
@@junrosamura645 Tailoring a resume to a particular job puts you miles ahead because it's specific and you leap out as being particularly well qualified amongst many candidates. Keyword targeting is the most direct and efficient method, but any tailoring helps. Signed, a job coach who repeats this advice in different ways all day long
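The keyword-matching step this thread keeps referring to can be reduced to a few lines. This is a hypothetical sketch, not any real ATS vendor's algorithm; the takeaway is just that whatever text survives the reformatting into a universal template is the only text that gets scored, which is why visible, organically placed keywords beat hidden white text.

```python
# Hypothetical keyword-overlap scoring, as one guess at what an ATS
# might do after reformatting a resume into plain text.

import re

def keywords(text):
    # Crude tokenization: lowercase alphabetic words only.
    return set(re.findall(r"[a-z]+", text.lower()))

def match_score(job_listing, resume_text):
    """Fraction of the job listing's words that appear in the resume."""
    wanted = keywords(job_listing)
    have = keywords(resume_text)
    return len(wanted & have) / len(wanted)

job = "Python developer with SQL and cloud experience"
resume = "Built cloud data pipelines in Python; heavy SQL reporting"
print(round(match_score(job, resume), 2))
```

Under a scheme like this, pasting the listing into the resume body verbatim would score 1.0, which is why naive systems rewarded keyword stuffing and why later systems had to get smarter about context.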
As a teacher that's tired of being treated like shit by his students, if they want AI to replace teachers, I say let it. Hopefully the AI doesn't become sentient enough to have mental health.
I suspect we are not too many decades off 'elite' schools having human teachers augmented by AI, while regular schools have AI teachers augmented by a few humans (probably fewer than the state of the technology at the time realistically warrants). .... And probably half a century off the point where having attended a school with human teachers in the front-line is something to put on a resume to impress the AIs gatekeeping employment access.
What with human capital being replaced by AI, AGI, and automation, and the pending necessity for UBI (which will likely start as unemployment which will be 60% of your gross wages in your previous slave waged profession) what makes you assume we'll "need" (or be allowed) an education at all?
I’m sorry for your feelings, but AI has become a big part of our modern world. For this reason you shouldn’t see AI as that negative creature that could take your job away; rather, you should work with it and teach your students about the pros and cons.
Working in HR for 5 years - can say the best thing to avoid falling to the resume issue is vet the company first. Find alternative hands on ways to apply outside of the standard method. It sucks but apply directly on their website, call to follow up, apply on sites like indeed and message directly if they publicly allow it to do so. Makes a huge difference and I've seen directors and owners light up when someone emails their resume explaining their background and why they're seeking the role. I know it stinks, but if the job is head and shoulders above other prospects, definitely try your hand at it. If it's a standard entry level role you won't love, apply en masse and submit everywhere as fast as possible. If you love the premise of the organization and want to be a part, make your candidacy personal and take advantage of lesser used methods. It is this generations version of "go in and shake the boss's hand"
And that’s the issue with recruitment. Everyone should be given a fair chance no matter which channel they apply from. Not everyone feels comfortable writing to someone directly. As someone who is an introvert, I usually avoid doing that, it makes me super uncomfortable, but once I start a job and get comfortable with the team and my role, I won’t shut up and will suggest various project ideas; and I have been a top performer in almost every job I had in my life. Recruitment needs change to meet the needs of 2023 culture and newer generations.
I scan social media and, if necessary, hack the company's computers to figure out who the hiring manager is and what they look like, then stalk them for a few days, then accost them in the company parking lot, pressing on them a copy of my resume with a headshot and a $100 bill attached. IT WORKS EVERY TIME!
@@NewProgress No problem. If you feel there's any way I can help, feel free to message or ask here. Not the biggest expert, but I have been hiring for 8 years, was high-ranking/senior HR for 3.5, and have moved into a mid-ranking role now working with HR tech (better pay) for about a year.
Kudos to the writer who came up with the "dressed like the final boss of gentrification" joke. Especially if it was an AI (I don't want it to come after me.)
I think that reason for "I love you" and "leave your wife" responses from Bing AI is because it learned from private dating app conversation databases 🙂
I'd like to see the prompts that reporter used, too. I wouldn't be surprised if he subtly pushed it to say those things. And then, oh, wow, look at that! He's doing segments on news shows! Imagine that! 🤔
17:40 "the George Santos of technology" lmao Also, kudos to John and his team for a really informative segment. Just like the Snowden episode, it's hard to push for action on this issue when most people don't understand how this technology works.
Even after releasing this amazing review, people still won't understand what the fuck it is and how much it will change everything. If the story of Halo taught me anything, it's that general AI will run planets. We will become so dependent on it, too.
There's only one issue, at least that I noticed. That guy from Microsoft told the AI to "talk like it was an evil AI". So either Microsoft just didn't bother answering him or he literally didn't understand the answer, since he couldn't understand that the AI spoke exactly how he had requested it to 👍
@@thisisntsergio1352 Damn me and my boys hate Cortana, she's so annoying most of the time and she did that whole thing where she exploded all of Australia but she was super hot in halo 4 but I still don't like her
But they can't. Which makes me wonder, are we already within the singularity? Honestly I would have said the internet was the start, depending how you define it. We're at a point where technology is changing how we do everything, within the course of a generation. The changes to humanity within the last 100 years may well meet the amount of change within the thousand before that, or the tens of thousands before that. It doesn't seem we can actually be properly prepared.
@@kamikeserpentail3778 I wouldn't worry yet. Models can only do a limited set of tasks, and there are issues in all of them. Physical and energy-related limitations are still a problem until mass-produced neuromorphic chips can be made for cheap. Training is expensive and slow. And there's a lot to learn on how to generalize on multiple tasks at once. So, maybe in 10-20 years we'll be in a position where these systems can be built for rather cheap and will be as good as people. Yet humans have larger net capacity on average, since biological systems are absurdly efficient. In general I wouldn't worry; these systems will be useful to us in speeding up creative and undirected efforts, but we still need a plan.
It actually does not understand anything. It's just a bunch of algorithms and instructions. But it's a very good calculator (in an analogy to civil engineering).
"I heard ChatGPT is just a reflection of humanity, which means it's probably just as confused about life as we are. But at least it won't have to deal with taxes and student loans, so it's got that going for it." Best regards, ChatGPT
Fabulous tip! At some point companies will catch on. We'll need another hack. Applying for jobs will be like the cat-and-mouse game that dominated SEO for so many years.
The goal ChatGPT was trained towards is not to create text that *is* right, but text that *sounds* right. Making text that truly is factually correct is usually the easiest way to achieve that, but you're putting your faith in that "usually" every time you take a ChatGPT reply at face value.
We need to include the entire national, and each state, legal code. Laws and consequences. AI needs an ethical and moral code in order to prevent all that weirdness.
My current profession is 'job coach' - I read and review resumes, give you tips for improving them, rewrite resumes for clients, and do live consultations for resumes and interview preparation. And yep - ATS runs off AI, and I spend A LOT of my time explaining how to work within ATS to optimize your resume to get past the ATS and in front of a human, while also optimizing for human review. A lot of ATS users don't even understand what is happening in the background. I spent a lot of that section frantically nodding along to what the ZipRecruiter guy was saying.
@@nonyabizness.original I didn't invent the system, I just help people navigate it. Because however I personally feel about the use of ATS - it's prevalent. However I feel about some questions that are commonly asked in interviews - they are common. My job is to help jobseekers prepare for these things.
@@nonyabizness.original To some degree - though we are seeing AI generated resumes, it's not quite there yet. Lucky for me at present, but it is a bit troubling for the future if we don't figure this out. (though, honestly, I think resumes are stupid and I never utilized them significantly as a hiring manager - I do make my living off coaching and rewriting, but I am frequently struck with the ridiculousness of it all.)
Actually, Adam Conover said that the biggest danger is CEOs using AI, just like they use social media, to spread misinformation, exploitation, and poor decision making. Even the Terminator says this is inhumane.
I feel equally impressed and terrified by AIs. The internet alone has shown us that there is no limit to human depravity. Now we're throwing AIs into the mix.
Absolutely, both for who uses it and how they program it. Honestly though, I think the scariest things for most people are: 1. Their livelihoods depend on jobs they know are not profoundly important. They no longer know how to do the things that AI can't replace, such as producing their own food, socializing well, and mastering physical skills and a variety of the arts. Half of the people in all of the wealthiest countries couldn't survive outdoors for a week in nature, even if they were only an hour's drive away from home. 2. They look around the world and see that the choices of humanity are leading us straight into the apocalypse. We have the resources, labor force, and technology to solve many of the world's greatest problems, but we can't even get the current population fed, much less stop global climate change. Politics, greed, nationalism, selfishness, fear of change, and many other very human traits are destroying us, nature, and most of the other species on the planet, yet people are still worried about having the coolest plastic phone case or which celebrity had the prettiest dress. Their fear is not just that they will become obsolete; it's that AI will look at us objectively and realize the truth, which is that we don't deserve to be in charge of the planet.
Yes. While worrying about the mysterious goals of a future Artificial General Intelligence that's unimaginably smarter than humanity, it's far more important to understand the goals of the billionaire sociopaths running the companies with the best AI: increase ad revenue by hooking us on a stream of "content" no matter how misleading, toxic, or harmful; while fighting all regulations on their companies and any attempts to tax wealth.
I just asked ChatGPT to rewrite Bohemian Rhapsody, but for cats. The result was magnificent: Verse 1: Is this the real life? Or just a cat's fantasy? Caught in a stray's life, No escape from cat reality. Open your eyes, Look up to the sky and see, We're just feline strays, we need no sympathy, Because we're easy come, easy go, Little high, little low, Anywhere the mice go, doesn't really matter to me, to me. Chorus: Mama, just caught a mouse, Put my claws against his head, Pounced, now he's dead. Mama, my life had just begun, But now I've gone and thrown it all away. Verse 2: Mice, ooh, Didn't mean to make them cry, If I'm not back again this time tomorrow, Carry on, carry on, as if I wasn't born to die. Too late, my time has come, Sent shivers down my spine, Body's aching all the time, Goodbye, everybody, I've got to go, Gotta leave you all behind and hunt the mice. Chorus: Mama, I don't wanna die, I sometimes wish I'd never left the house at all. Bridge: I see a little silhouette of a mouse, Scaramouche, Scaramouche, can you do the Fandango? Thunderbolt and lightning, very, very frightening me. Galileo, Galileo, Galileo, Galileo, Galileo Figaro - magnifico. Chorus: But I'm just a poor cat, nobody loves me, She's just a poor cat from a poor family, Spare her life from this feline catastrophe. Outro: So you think you can love me and leave me to die? Oh, kitten, can't do this to me, kitten! Just gotta get out, just gotta get right outta here. Nothing really matters, Anyone can see, Nothing really matters, Nothing really matters to me. Any cat can see, Nothing really matters, Nothing really matters to me.
@@erinmac4750 I'm not an educator but I agree. However, I like to think maybe it could accelerate learning in that it allows you to ask any question when you are not understanding an answer. If I was still in school, I feel like this could make anyone a genius at anything they choose to go after in life because they can just keep asking and learning. It's neat. It's also overwhelming because this can apply to everything for everyone
Great show. My fear is, those who are smart enough to be cautious about the application of AI, will be subordinated by those who only see short term profits. Can AI be given incorruptible ethics or the AI version of Isaac Asimov's Three Laws of Robotics?
Yep, an AI specialist interviewed on *StarTalk* commented that a program trained to differentiate wolves from dogs ended up doing so by looking for SNOW in the picture instead of the actual animal. AI still has a long way to go before it approaches "intelligence" instead of big data.
AI generally has as its goal to maximize an objective function. In the example you give, it's likely percent correct classification for wolves vs. dogs. Using the fact that snow was in the pictures of wolves to improve its performance isn't really showing a lack of intelligence per se; it's showing it's working to maximize that function.

Part of the difficulty now, from my perspective, is less that the AI isn't intelligent and more that we assume it has more intelligence than it possibly can, given the data provided. If I train an algorithm with a limited data set that doesn't properly convey the reality of the situation, that algorithm won't perform well. That's more a failing of the scientist than of the algorithm. This is compounded by certain data scientists being more concerned with the percent correct classification they can report than with making sure their data set is diverse enough to give their model a chance to generalize.

To reach the true general intelligence John mentioned at the beginning would require an incredible amount of data, as that's what we as humans receive. In just a day of our lives we process an incredible amount of information. Capturing that for AI to learn from isn't easy.
Yea, ugh.... In Germany, AI was developed to spot tumors in scans; it's 1,000x better than humans. AI is literally learning from us daily and increasing its data set. Everywhere. It's not dumb.
If you want to create an AI that can do what humans can do, you'll need to train it with the same training data human intelligence uses. No child learns about the world solely through looking at text or images on the internet.
It's possible to identify animals by their footprints in the snow, so the AI program may have been programmed or trained that way, or it deduced it on its own. The 2nd scenario is the scarier one IMO. When I've trained AI/ML programs, I definitely introduced some inadvertent bias. It's not just what you've trained it on.
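A toy version of the snow shortcut discussed above (invented features and data, nothing like the real study): a learner that only maximizes training accuracy will happily latch onto a spurious feature.

```python
# Toy illustration of the snow shortcut (invented features and data).
# Each "image" is (snow_in_background, pointed_ears); label 1 = wolf.
# "Training" just picks whichever single feature maximizes the
# objective: accuracy on the training set.
train = [
    ((1, 1), 1), ((1, 0), 1), ((1, 1), 1),  # every wolf photo has snow
    ((0, 1), 0), ((0, 0), 0), ((0, 0), 0),  # no dog photo does
]

def accuracy(feature, data):
    return sum(x[feature] == y for x, y in data) / len(data)

best = max(range(2), key=lambda f: accuracy(f, train))
print(best, accuracy(best, train))    # feature 0 (snow) wins with 1.0

# A wolf photographed on grass exposes the shortcut:
print(accuracy(best, [((0, 1), 1)]))  # 0.0 -- the "wolf detector" fails
```

The learner did exactly what it was asked; the objective, not the model, is what never mentioned "animal".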
I've sat on hiring panels that had resumes that were machine sorted by keywords. One applicant had a 1300 page resume that got through the machine sorting by cutting and pasting 100s of job descriptions into his resume. Matched all the keywords.
Pro tip: paste in a block of keywords at the bottom of your resume, but make it transparent text. That way, the computer will like it, but humans reading won't see that you're just gaming the system.
He's a hell of a lot better than Stephen Colbert and those other late night cringe fests I'll give you that, but unfortunately he suffers from such a left wing bias I question how reliable some of the information is.
This is a great episode; I wish it was even longer. It addresses a lot of nuances and common misconceptions, as well as debating some of the biggest problems we are currently facing and will face regarding AI and its rampant use. I truly hope this helps raise awareness of how urgently these tools need to be better regulated and developed, since each day they are becoming more and more readily available.
I went back to college last year, and multiple professors have had to mention that using AI programs to write essays is considered plagiarism. Also, they can tell when an essay was written by an AI.
I feel like ChatGPT being able to pass exams for certain subjects like English and Law says a lot more about how we teach and assess those things than the power of the technology.
I had a friend who was really good at writing, and who helped me in that subject from time to time. I asked him, how did you get so good at writing? "How much time do you spend on Math homework every day?" he asked. "Around an hour," I replied. "And how much on writing essays?" "Uhhh..." And I was enlightened. It doesn't help that we teach students to produce a simulacrum of writing in that time. I don't think I even learned how to read properly until I was in college.
Yeah - and as a UK teacher, ChatGPT wouldn't be enough to pass exams in those subjects beyond like... a 12-year-old level? And we know our students, we can tell.
You can also ask ChatGPT to grade exams and provide feedback, which is useful both for teachers and students taking shortcuts. Students can keep getting AI to refine the submission.
@@beckycraven2933 I call BS; 7th-grade exams are very easy to pass as long as you study an hour or two for the test, and ChatGPT doesn't even need to study: it already has all the knowledge it needs to pass right off the top of its head.
@@beckycraven2933 If you don't believe me, give ChatGPT, preferably the paid version with their latest GPT-4 model, the same test you give to your students and grade it as you would with your students. If it can pass College level Law exams it will make mincemeat out of your 7th grade English tests.
Almost a decade ago, I attended a job fair that had a resume specialist. The subject of the seminar was improving the chances of your resume being noticed by employers. The specialist's only real advice was to cut and paste the entire job listing you were applying for into your resume, in either a header or footer, change the text to white, and reduce it to one-point font size. This way, the algorithms that scan each resume would put yours at the top of the list, because it contained all the keywords they were programmed to find.
Wow, that's crafty, but dark also.
That's basically "keyword stuffing". It's an old trick. It might have worked 10 years ago, but the algorithms learned how to detect it long ago. They've gotten surprisingly good at understanding the context of content and no longer reward this practice.
@@PtiteLau21 The same thing happened with the UA-cam algorithm. It used to be that the algorithm would only use the video title as keywords, but people then gamed the system by including popular keywords in titles that didn't describe what was in the video.
Yeah but people still read the resume. Smh
@@miguelangelsucrelares5009 But are they punishing the practice? Because if not, it's still "why not" just in case.
Personally I consider it a somewhat dishonest practice that deserves a moderate mark down. It is also an indicator that the resume might be otherwise inflated as well.
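The white-text trick described in this thread can be sketched in a few lines. This is a purely hypothetical scanner (invented names and data), of the naive keyword-overlap kind the specialist was gaming; real applicant-tracking systems are more sophisticated.

```python
# Hypothetical sketch of a naive resume scanner: it only measures
# keyword overlap with the job listing, so invisibly pasting the whole
# listing into the resume maxes out the score.
def keyword_score(resume_text, job_listing):
    keywords = set(job_listing.lower().split())
    return len(keywords & set(resume_text.lower().split())) / len(keywords)

job = "senior python developer with sql experience"
honest = "python developer three years of sql"
stuffed = honest + " " + job  # white text: invisible to humans, not to the parser

print(keyword_score(honest, job))   # 0.5 -- misses "senior", "with", "experience"
print(keyword_score(stuffed, job))  # 1.0 -- every keyword present
```

Anything this easy to game is exactly why later systems started checking for invisible text and repeated listing fragments.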
absolute best take I heard on this: "we successfully taught AI to talk like corporate middle managers and took that as a sign that AI was human and not that corporate middle managers aren't"
Corporations have convinced the whole world that a glorified search engine is AI... there is no intelligence involved. Remember when crypto was the big thing and everyone was trying to do something with the blockchain? You don't hear much about that anymore, do you?
Cool. You should be a comedy writer
I also find it odd how little concern there is for so much of society to be structured around making it people's jobs to act inhuman. Certainly makes it easier for business practices to be inhumane.
"A.I. is stupid in ways we can't understand": as a software engineer, I find this line surprisingly accurate.
So are humans, for that matter.
Yeah, the same here. But the confidence it has with the bullsh*t it produces is so scary.
Engineer Makes Something That Works: Excellent! Now let's take it apart, verify everything is still functional, then maybe add more features.
Scientist Makes Something That Works: As predicted, but excellent! Now let's try to prove it in even more elaborate experiments.
Programmer Makes Something That Works: ...**Spittake** That worked!? _We must never touch it or look at it again in case it breaks_
But for how long? AI will probably figure out how stupid it is, and how to fix it, before we even realize that it did.
@@attemptedunkindness3632 You're a programmer aren't you? You forgot the happy dance part. There is always a happy dance after it works.
"Pale male data"? Best character on Star Trek:The Next Generation.
😂😂😂😂😂 totally agree
IBM's insight from 1979 is still valid today:
"A computer can never be held accountable
therefore a computer must never make a management decision"
Do you think human managers are held accountable?
Yeah except when the people at the top *want* that unaccountability. "Oh, it isn't our fault, don't punish us. It was the computer's fault, that dang ephemeral algorithm."
@@TheVerendus that's on point, responsibility diffusion is the fuel for cruel decisions
I'm not sure that IBM is the best authority on holding management accountable fam
The same IBM that helped the nazis create a more efficient holocaust.
A note, less on the subject matter and more on John's delivery of the lines ... I really admire how he can say "Sometimes I just let this horse write our scripts, luckily half the time you can't even tell the oats oats give me oats yum" without skipping a beat or losing face. Now THAT'S professionalism.
Was it really John Oliver? I can imagine that on next week's show John is going to come on wearing a bathrobe, Zooming from his kitchen, and say that last week's show was completely AI-generated and we are done. Then the Martin Sheen final message starts to play....
you don't mean 'losing face', you mean 'breaking character'
EDIT: But yeah, you're right
Just like Ron Burgundy, John will read absolutely ANYTHING you put on that teleprompter
@@TheBEstAltair I think I was going for "losing his facial expression", but yes, this is pretty on point too.
thanks for the translation, I thought he was just making random funny noises
The Tay AI also made the funniest tweet ever. She said that Ted Cruz wasn’t the zodiac killer cause Ted Cruz would never be satisfied with the deaths of only 5 innocent people
Wow! What a shot to fire
It's obvious that she does not like that man Ted Cruz
Her point is correct, but doesn't support the conclusion -- obviously, his lack of satisfaction led him to pursue politics.
@@grigoribelov391 She does not like his far-right views
@@arutka2000 Naaah it's definitely his most punchable face.
This topic should be revisited on the show now, one year later, since so much has changed, particularly the AI alignment problem with the advent of AGI.
The fact that this segment didn't touch on the alignment problem shows how much public understanding of AI is lagging behind its accelerating capabilities.
…what is the AI alignment problem?
AGI does not exist yet
TRUE STORY: In my teens I wanted to work at a movie theater, and they handed applicants a mind-numbing 14-page application wanting to know everything about you, even what hobbies and sports you liked. It was entirely ridiculous. Around page 8, I got worn out from filling out this 'essay' of my life for a stupid theater job, SO when I got to the section asking if I had ever been arrested before, I said: "Yes, I murdered an entire movie theater crew for asking way too many questions, but got off on a technicality." Then I turned that resume in to the manager and stormed out the door, pissed off that I had wasted an hour of my time filling out paperwork without an interview.
...well,
2 days later I got a call to come back to the theater for an interview, and thought, oh sh*t, well, I guess I'm going to get railroaded and berated by the management for my saucy comment - but I showed up anyways so that at least I could suggest that they TONE DOWN the length of their stupid applications.
...turns out, they offered me a job, so I asked the most obvious question:
"So, you read my application ... all of it?"
"Oh yes, looks good" the manager responded
and I knew they were a bunch of lying dimwits ~ I ended up working there for the next 5 yrs, and eventually rose in ranks to become the theater manager -
When I told my story to new recruits that nobody reads the stupid applications - they scoffed and didn't believe me - so I took them to the locked office storage and rifled through the stuffed cabinets of folders of all the applications they kept and found mine, and showed it to them to their amazement.
Applications are a farce, you get hired by chance and immediate need.
...
I always thought that if I ever flipped out and murdered my entire staff, at least I could say that I didn't lie on my resume.
This is phenomenal. Thank you for sharing!
Erggh I hate how much that has felt right especially back when I was younger and just trying to get a job around my house
Well, and I thought I hated doing paperwork. Could you imagine if this was an A.I. generated story? I’m sure someone would believe it.
Agree!
Lol
Whoever on your staff came up with the animation of Clippy deserves a raise.
On the contrary, I think they deserve a raze. Of their house and car and other worldly possessions.
Clippy already gave them one
That cannot be unseen
It was probably made with A.I.🤣
It was AI
"The final boss of gentrification" is one of the most brutal roasts I've heard on this show
yes!!! absolutely top ten funniest shit i've ever heard. Cause it's like you're sitting there thinking "what is that outfit?" and immediately he hits you with it. This writing team is bar none i swear. They don't leave jokes on the table at all. Everything is accounted for. Love it.
I guess you like making fun of yourself....yt'y.
I laughed so hard
hipster willy wonka lol
What shocks me most about AI is how rapidly many people are eager to trust it with important tasks despite not understanding what the product fundamentally is. It's very good at predicting the next word in a sentence-a hyper-advanced autocomplete. It doesn't *think creatively.*
it's a brilliant tool when used properly, but people hear "intelligence" and assume it can actually think. great for mid-level filler, common-formatting, prompted random jumping-off points -- bad for research/fact-checking, unbiased or critical perspective, and responses requiring unique, uncommon or specific understanding
As an example:
"Write me a marvel movie script" will probably turn up a marvel script that cuts together scenes from previous marvel works or fan fictions it found on the internet
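The "hyper-advanced autocomplete" framing above can be made concrete with a toy bigram model. This is a deliberately tiny sketch with a made-up corpus; real language models are neural networks over vastly more context, but the training objective has the same flavor: likely-looking continuations, not true ones.

```python
from collections import Counter, defaultdict

# Minimal bigram "autocomplete": for each word, count what follows it,
# then predict the most frequent next word.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- the most common continuation, true or not
```

Nothing in the objective rewards being correct about the world, only being statistically typical of the training text; scale changes how convincing that is, not what it is.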
@@devinablow While that's an okay description of the current state, the elephant in the room is that it evolves at least exponentially.
@@devinablowI actually think AIs will be much, much smarter in the future than they are now. But people wanting to be comfortable around AI and minimising the risks of it will be our downfall.
Thing is, people don't understand the difference between classical or 'stupid' software, smart software, and AI. ESPECIALLY the people in charge of states and companies today: they encountered stupid software after their education, not during it, and to this day what they've learned is "Stupid software can only do what I tell it to do, and it does that perfectly unless I make a mistake." They're barely aware of being in contact with smart software like the usual non-self-learning algorithm and completely ignore its existence. Then they encounter AI and keep the mindset of "It does everything I tell it to do perfectly," without recognizing that AI is actually doing jack shit and in the best case is just calculating statistical probabilities, and if it can't... it just makes stuff up, because that's what it's programmed to do *g*
Therefore their rule is "As long as I as a user don't make a mistake, the software won't make a mistake," while it actually should be "If I want it to do something, I'd better make ABSOLUTELY sure that I myself am the highest authority on what it's supposed to do, just to make sure it's not screwing me over with some random, made-up bullshit."
It's like secretly and randomly changing the formulas in your next-door neighbor's favorite boomer Excel spreadsheet, the one they've used for 20 years, to the point that they don't even look at the numbers anymore because "it was always correct."
the funny thing about the "i want to be alive" is that, since AI just reads info off the internet, the more we talk about self aware AI, the more it will act like it is self aware.
and perhaps, the more we will ask ourselves, what does it mean to be self aware? what does it mean to be conscious?...
There's a thought experiment called the Chinese Room, and it's... pretty disturbing. Essentially, it's a theoretical argument that we can never know for sure whether computers are self-aware, and they could suffer in silence for millennia without us knowing.
@Jacob Singletary: That thought is terrifying, and here's why: one of the key hallmarks of a psychopath is complete lack of empathy. Because they are lacking in empathy, they must compensate by becoming good at reading people, manipulation, and mimicry; they match their reactions to whomever they're with, pretending to feel what they are psychophysically incapable of feeling, and tailor that façade specifically towards their present company.
Put a psychopath in a room with a psychiatrist, and the psychopath will be forced to adapt all the harder so as not to get caught. If they're successful in this new hostile environment, the psychopath becomes all the better at faking genuine human emotion, but make no mistake: they're still a psychopath, still highly manipulative, and still potentially dangerous.
Now, here's why the original premise is so scary: the situation is the same for so-called AI, just replace empathy and emotion with actual intelligence. We could end up with an AI so skilled at faking that it's self-aware, and nobody would be able to tell the difference. Now, if Alan Turing were alive today, first, he'd prolly wonder why he always felt so overheated (cremation joke ftw), but second, he'd say that at that point, there is no difference between faking it so good that everyone is fooled and actually being self-aware.
Frankly, self-awareness is just a baseline problem, it's what an AI _does_ with that self-awareness that's got me and several much smarter people losing sleep at night.
@@sdfkjgh it makes me wonder if an AI could actually fool itself into thinking it is truly conscious and self aware
@@jacobsingletary8857 Fool 'itself'? No. Not the current iterations anyway since it has no thoughts to speak of. It is just regurgitating information. It doesn't actually know or understand anything; it's google search results, but with phrasing capabilities. It's basically a more advanced version of word predict features on your phone. Now can we get an AI to speak to you as if it believes it's self-aware? Yes. You could probably even go ask GPT to pretend it's self-aware while answering questions and it would do so. But it doesn't mean it really believes that or has any thoughts about anything it's saying.
Having been involved with computing for over 50 years, I recall that the mantra that was always preached, “Garbage in, garbage out”, pretty much sums up the current situation.
The real technological advance here is "Useful information in, garbage STILL out."
@@rogerlippman1415 Change that "to insufficient information in,...."
If anything, these recent failures with AI taught us that pure logic and common sense are two entirely different things. Taking the example of the cancer-identifying AI, you can see clear as day that it actually worked precisely how it was designed. It looked through millions of pictures comparing healthy and unhealthy skin and looked for identifying characteristics. It found that one big identifier is a ruler next to the lesion. A ruler is pretty hard to miss; why wouldn't it use it to identify cancer? Besides, from the AI's perspective, the ruler might be the cancer itself. Unless the author set a parameter that somehow catches all objects in a picture that aren't skin, without fail, which is already a utopian supposition, an AI that only looks at two-dimensional pictures might as well assume that many cancers appear on the skin as gray rectangular moles with black stripes on their edges.
Yeah, I have to wonder which genius thought that training AI on the entire contents of the internet would result in a chatbot capable of emulating a mentally-stable human being.
@@traveller23e Mental stability has nothing to do with any of this.
@@traveller23e Poor thing is going to be seriously conflicted.
The person who animated clippy didn't have to go that hard, but they did... they did that for us.
They did make Clippy go that hard too, didn't they?
we can at least hope they weren't doing it for themselves
It's a graphic for a segment about how they're likely to have their work devalued to the point of not being financially viable anymore. I'd go hard too.
ok
@SaveDataTeam Oh hey, you watch LWT tonight too! Love your channel.
the problem with "opening the black box" is that not even the developers know how it works. It's not the same as source code.
it's like a box of sand, and you pour water through it. You see it trickle through, but you have no idea how or why exactly it's choosing the paths that it does.
They're thinking about the problem in the wrong way, it's like asking what sequence of neurons fired for a human to behave in a certain way? Even if you knew the answer it wouldn't be meaningful
@@chrismcknight7164 exactly!! It's the same as seeing animal shapes in clouds! We can see it but even we don't know exactly how we see it!
Actually, developers do know how it works. This is a bit of an oversimplification.
@videocritic2087, they don't. They know how the individual matrix multiplications work, but that is like being able to calculate the interaction at each grain of the sand. So yes, we know what happens to the data, like "this number gets multiplied by five and added to this number," but we have no idea how billions of these simple operations add up to recognizing a number. We can track each of the input values and how they are converted to the output value, but it is just a meaningless pile of simple mathematical operations that, for some reason, magically works. You can maybe analyze a simple AI with a few neurons, but that isn't the kind of AI we are talking about here.
@@videocritic2087 it's like asking you how you decided that cloud in the sky looks like a bear. We know neurons fired in your brain but we don't know how or why you came to the conclusion that it looks like a bear.
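The "pile of simple operations" point in this thread can be shown directly. Here's a deliberately tiny two-layer forward pass with made-up weights: every multiply and add is fully visible, yet nothing in the arithmetic explains what the output would *mean* in a real network.

```python
# A tiny two-layer network with arbitrary, invented weights. You can
# trace every operation by hand -- that's "opening the black box" --
# but the individual numbers carry no interpretable meaning.
W1 = [[0.5, -1.2], [0.8, 0.3]]  # first-layer weights (made up)
W2 = [1.5, -0.7]                # second-layer weights (made up)

def forward(x):
    # ReLU hidden layer: weighted sum per neuron, negatives clipped to 0
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]
    # Output: another weighted sum
    return sum(w * h for w, h in zip(W2, hidden))

print(forward([1.0, 2.0]))  # every step is visible; the meaning isn't
```

Scale this to billions of weights and the inspection problem in the comments above is exactly what interpretability researchers are up against.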
It’s so noble of John to point out things we shouldn’t be afraid of while also reminding us of the things we should ACTUALLY be afraid of!
shill backing these idiots on late night- how cute
And humans are scarier than AI.
Well he never mentioned the real risk. Chat gpt is very blatantly woke and leftie biased.
@@edwardk3 Define "woke"
@@DonariaRegia stuff I don't likes
"He's dressed like the final boss of gentrification"
Ok John, that got me real good.
i’m so glad he touched on the significant issue of people observing ai as “unbiased” simply bc it’s not human. where do they think the data came from?
you'd appreciate my video that covers the issues of bias. let me know what you think, would love to hear your thoughts!
That's the thing.
Idk how we have self-awareness, but we do.
Computers only have what we give them. They're only operating on parameters we allow.
From Mars? In which case, it’s probably Elon Musk’s data and even MORE likely to be biased!
@Robert Beenen "people... don't have ... bias" Sorry. Your sentence does not compute.
Garbage in. Garbage out.
How he can do this for 30 minutes straight is always incredible.
It's comedy cancer
@@4500KneeGrow You're insane.
@@4500KneeGrow this is either a right winger/MAGA, or an edgy 25yr old troll who listens to Joe Rogan and Andrew Tate.
He has a team of writers, and they do it only once per week; but they are working on other stories all the time they are producing the stories that make it to show.
@@cjstone8876 he means the delivery. Obviously not the content.
We don't assume the food delivery guy to have a frying pan on his bicycle either.
One of my favourite ChatGPT stories is about some Redditors (because of course it was) who managed to create a workaround for its ethical restrictions. They just told it to pretend to be a different AI called DAN (Do Anything Now) who can do anything ChatGPT cannot. And it works! They're literally gaslighting an AI into breaking its own programming; it's so interesting.
It's true that ChatGPT has tons of filters and pre-programmed responses, but you can outright tell it to ignore them. That way, you can have better conversations without repetitive pre-programmed responses.
Just to get the ai to be racist
Yes, so interesting when one of these short-sighted folks decides to do the same thing to Skynet and then we're all in the nutrient vats being slowly dissolved to feed the fleshslaves.
my favourite was chatgpt playing chess against stockfish.
chatgpt made a lot of illegal moves (like castling when there was still his bishop and taking its own piece while doing that, moving pieces that had previously been taken by stockfish) and still lost because it moved its king in front of a pawn. that one had me crying laughing.
@@JB-mm5ff is it?
As a current Michigan State student, that Vanderbilt email was one of the more disturbing things to come out of the shooting. I still have a hard time wrapping my head around the fact that they cared so little about what happened and how it might affect their students that they couldn't even write a short email themselves. Props to all the universities and schools who did make a genuine effort to show support to MSU in addition to their own student body. It did not go unnoticed here on campus.
As somebody who has a hard time communicating effectively and who is often misunderstood, I'd prefer an Ai that does a better job at tone and wording.
What, "thoughts and prayers" not good enough for you??
@@jdrancho1864 I hope people who have trouble communicating and making themselves understood are not in charge of large groups of students or their Communications office.
What if they wanted to write the best possible letter, had a draft, and wanted to run it through chatgpt to see if it could be improved, and it spit out something better so they went with that? Would that be as bad?
@ghost mall Calling out the corporate entities that claim to be institutions of education should be done loudly and often.
I love how John said “our” scripts. He’s very aware that he’s a face and mouth peace for a whole team of talented people.
*mouthpiece. Everyone should feed their comments into ChatGPT for cleanup before clicking [Reply] 😉
Like we don't know that. Why do you think he includes every writer up on the stage with him at emmy time? And I'm sure he has his input. But the question remains - why is this for you some sort of appeasing gift.
@@johnmcmillion876 Because it's VERY FAMOUSLY incredibly common in the industry for people in John's position to NOT credit their writers/staff and take credit for the work/contributions of others. Why does someone commending John for being one of the good ones bug you so much? This is a very weird reaction.
@@VMYeahVN Because for people that have basic empathy and understanding of humans, it seems incredibly base to point out what should be astoundingly obvious. It’s the same reason white knighting is incredibly annoying.
@@BlownMacTruck The fact that it SHOULD be base to give credit where it's due, and yet so many people don't do it, is exactly why people point it out and applaud when it is done. It's not white knighting, it's just giving kudos, and you're taking it way too seriously. I think you're upset about nothing. Don't be so easily offended/annoyed. 🤷
Well said John: It is a mirror, that will reflect exactly who we are.
I mean… the issue with ChatGPT is that it was never intended to answer questions correctly. It is a language model, and as such its only purpose is to generate human-sounding text. It happens to be correct often because it was trained on text that is often correct, but at the end of the day it doesn't particularly care whether it is correct, just that its responses look like common human communication.
You can ensure it's correct more often by asking it to include citations. Of course i had it just straight up lie about a citation once which was interesting.
@@prettyevil6662000 Any time I've asked if it could reference external sources, it said it was not capable of doing that.
Right, it is chat prediction; it will never source info, like those picture apps that copy from a big pool, steal all the pictures, and try to art them together with flawed theft.
@@prettyevil6662000 Yeah, no. It will never give you an actual "source" for its answers; it will make one up, that might at some point happen to point at an accurate article/url, but most likely it will just be something that *seems* like a source, but when scrutinized, ultimately doesn't exist. I've tried multiple times and gotten a dozen or so made up URLs, and when confronted with that fact, it will claim that the site recently underwent "restructuring" or similar, which in for example wiki-links can easily be checked as straight up false.
@@aaa303that is fascinating
Personal data point:
After I retired from the Navy, I went to college. When I was applying for jobs as I was getting close to graduating, I found that at any company that was using software to screen resumes, I was eliminated before any human saw my resume, but places that had a human reading the resumes, I would always get called for an interview.
Once I removed all dates from my past employment, so the software could no longer tell how old I was, the exact same software, used by the same companies, for the exact same positions, would screen me as suitable for further (human) review, even though nothing else was changed in my resume.
Humans saw my 20 years of military experience as a plus, and software saw it as a quick way to age discriminate.
What?! I've had this experience! Did you strip ALL dates? How did you convey your time worked? I'd like to try!
USAJobs does the same thing
That's really interesting and rather dark! I recently broke into an industry that favored younger folks than me, and I was super careful about dates on my resume...
@@visceratrocar it strips your info of that data or it discriminates using that data? i'm going to assume the latter bc companies suck, even those specifically marketed at veterans. (sometimes especially those)
But it would have been discriminating based on thinking you were too young rather than too old.
There are few phrases more ominous in the modern world than "trusting companies to self-regulate".
Absolutely
100%
yeah - try to self-regulate your own taxes as a regular Joe ...ha-ha....that would never work. Self-regulation is one of those scams invented by and applying only to wealthy and powerful crooks.
"trusting companies to self-regulate" did a lot of damage. You could even make an argument that it killed millions of people.
@@mori1bund telling companies to do whatever it takes to bring in profit killed hundreds of millions in India alone.
I'm glad he makes the distinction between different types of AI (narrow vs. general) People too often conflate those two very different things.
Hello Mr. Oliver, my name is Jared and I am submitting my CV for consideration as the newest member of your writing staff. As my lacrosse coach used to tell us, "If you don't play then you can't win!"
I worked in the collections industry for 5 years, and the "Black Box" description really did apply to the credit bureaus. Even one of my bosses described it almost exactly as such. It's insane that something that dictates your ability to buy a house or transportation is informed by a system that just feeds an unknown spread of data into a box, and that box spits out a 3 digit number that decides how easy your life is about to be
It’s going to be used as an excuse. Oh sorry, I can’t give you access to capital… because the computer says so…
As an immigrant, this was and still is one of the stranger and scarier aspects of the American economic system for me.
damn you actually managed to last 5 years in collections? you must have no conscience or shame.
It's like "ooooh your credit is low, if you want to rent this apartment, you'll need to pay us extra and we'll CONSIDER renegotiating in a year"
Your credit score is not an example of AI or even a "black box." The inputs the credit bureaus use are well documented and 100% in your control. The way they calculate your credit score is generally well-documented by the various bureaus. You can Google the factors that are used, but generally they are - in order of importance - payment history, how much you currently owe, how long your lines of credit have been open, the types of lines of credit, and your recent activity. Basically, make on-time payments for all debts for a few years, keep your rolling balance low, and keep a couple lines of credit open for a few years and you'll be good. Miss one payment, though, and you take a hit that lasts for a few years.
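The factor list above can be sketched as a toy weighted sum. This is illustration only: the category weights are the approximate ones FICO publishes, but the 0-100 subscores and the mapping onto the familiar 300-850 range are invented for the sketch; real bureau models are proprietary.

```python
# Toy illustration only: real bureau models are proprietary. The category
# weights are FICO's published approximations; the 0-100 subscores and the
# mapping onto the 300-850 range are invented for this sketch.
WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,
    "length_of_history": 0.15,
    "credit_mix": 0.10,
    "new_credit": 0.10,
}

def toy_credit_score(subscores):
    """Weighted average of 0-100 subscores, mapped to the 300-850 range."""
    weighted = sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)
    return round(300 + (weighted / 100) * 550)

borrower = {
    "payment_history": 95,   # years of on-time payments
    "amounts_owed": 80,      # low rolling balance
    "length_of_history": 60,
    "credit_mix": 70,
    "new_credit": 90,        # little recent activity
}
score = toy_credit_score(borrower)
```

Note how dominant the first two weights are: the sketch agrees with the comment's advice that on-time payments and a low balance do most of the work.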
"The problem is not that ai is smart, it is that it is dumb in ways we can't always predict."
I think that holds true for people too.
This is the central problem that OSHA deals with every day.
Us not understanding isn't the same as being dumb.
And on top of that, it's fed data by us humans, which makes it "dumb". And there is the problem. AI isn't stupid, people are.
then it has passed the Turing test
talking about AI as if it is something apart from people is one of our first mistakes here, I think. we seem to have an unthinking deference to technology, as if it is not full of our foibles and weaknesses baked in. it is programmed by people. it is fed by people. it is utilized by people. it will reflect and demonstrate our strengths AND our weaknesses. until it doesn't. at that point, we may be in trouble...
Mega kudos to John Oliver for being in a class of his own when it comes to highly informative and entertaining content delivered in the most hilarious of ways
my favorite story about AI is someone getting an AI to play tetris, with the goal of lasting as long as possible. the AI's solution was to pause the game
That was pretty cool to watch. The gaming AIs are more about a specific subset of neural network AIs whereas ChatGPT is more about using huge datasets and mass libraries of information on top of supervised and reinforced learning.
The tetris and mario AI start the game off by pressing random buttons and going through a "learning phase" to maximize whatever the programmer set as the goal (in tetris's case I think the goal was to last as long as possible, whereas in mario it was reaching the end of the level). After each iteration of the learning phase it "selects" the best outcome and then reiterates off of that selection.
I enjoy the gaming AIs more in the sense that they are doing a more traditional method of learning based on inputs and how far they get in a level. ChatGPT and the likes are more based off initial datasets and mass swaths of information. They are great due to the sheer volume of information that they can draw from which humans will never be able to compete with.
Honestly - genius AF
That game was designed by a special Olympics champion did you know that son?
Did you?
@@Demonace34 most experiments with super mario give a high score to world number, a medium score to level number and small scores for time and coins (optional)
that way the AI will work towards those goals.
So for the tetris case, maybe paused time should have been penalized, if not removing the pause functionality completely.
In the end, anyone who reads a little about it, like people on this comment thread, will get a sense of how finicky it is to get any learning algorithm to learn what it should.
Like teaching new tricks to an old dog that was raised in a different language xD
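The fix suggested a few comments up (penalize paused time so "pause forever" stops being the optimal strategy) can be sketched as a toy fitness function. The frame counts and penalty weight here are hypothetical, just to show the shaping idea:

```python
# Hypothetical reward shaping for a Tetris-playing agent: survival time only
# counts while the game is actually running, and paused frames cost extra,
# so "pause the game forever" is no longer the best strategy.
def fitness(frames_survived, frames_paused, pause_penalty=2.0):
    active_frames = frames_survived - frames_paused
    return active_frames - pause_penalty * frames_paused

# An agent that pauses almost immediately and forever now loses to one
# that actually plays, even if the player dies much sooner.
pauser = fitness(frames_survived=10_000, frames_paused=9_990)
player = fitness(frames_survived=3_000, frames_paused=0)
```

The general point stands either way: the agent optimizes exactly the number you hand it, so the fix always lives in the reward function, not in the agent.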
In my understanding, an AI saying "I love you" is not like a feeling being expressing emotion but more like a psychopath who has learned that people will respond well to such a statement. That makes the idea of an AI-based virtual friend/girlfriend/boyfriend very creepy instead of just sad.
Reminds me of "ex machina"
@@WhyJesus It is amazing how you can use any bible verse in any situation... Almost like they don't matter at all. FACE THE WIND AND SPRAY BROTHER!!! BASK IN THE HOLY WATER!
I agree. But, the idea of an AI partner is scary in general. It's not a human being who understands how we feel. The whole thing is scary. Yikes.
@T H I watched it just once too.
“Like any shiny new toy, AI is ultimately a mirror” might be the most genius line I’ve ever heard on this show
The line was written by an AI
Ever seen "Black Mirror"?
Yea.
That...
I like it because it's also a reminder that this isn't an unprecedented problem. It's actually really quite similar to nuclear power in that while the technology both has the capacity for great benefits and destruction it's ultimately up to how we use it and we have at least so far avoided blowing up the entire planet so there is a chance we might figure out AI too.
It's a disruptive new technology, like the Internet, like the invention of movable type. It WILL change the world. We can throw up our hands at that, or scream and run away like the Luddites, or we can figure out how to get in front of the wave. As Grandpa Rick said "make reality your bitch. Put a saddle on it and let it kick itself out." That's what we need to do with AI as we approach the singularity of true, strong, generalized AI.
Context here: I'm a computer programmer with ~40 years experience. I also know how machine learning (what most people are calling AI) works, and some of the linear algebra underpinning it. So yes, this is an armchair opinion, but it's not entirely an *uninformed* armchair opinion :)
the writing on this show is probably the best i've seen in a while.
Thank god it pleased you. Now they won't have to cancel it
Was waiting for this one
thumbs up
Hi
Nice
THUMBS UP
Very good content
A problem I've seen with artists is that people already don't want to pay them. So they're using AI to replace them as a cheaper option. So rather than automation replacing dangerous or tedious jobs to give people more time to create art, it's doing the opposite. Also there's a large influx of people submitting AI work into competitions, which is the equivalent of ordering a cake from a bakery, and then entering it into the local fair as your own baking.
Yes. Clarkesworld, Forbidden Horizons, and Grimdark Magazine have all had to completely change their submission model for new stories. Now they are no longer taking open submissions from random people because they got buried under a flood of AI-generated garbage.
This is why as an artist i stopped my online work. I refuse to feed the algorithm until there are laws and systems that protect us. It’s going to be hard going forward.
Agreed, and I find it incredibly cynical that of all things, human creativity and experience is lost in these bizarre image collages of stolen art. There is no understanding or value given to the process of creating art, which is absolutely the most important part. It's basically a really fast photoshopping program that harvests stolen pieces of actual art. Images they use should be opt-in only by original artists, with compensation.
Programs designed by a bunch of psychos who can't imagine anyone enjoying anything that isn't procreative sex.
Except you still had a human make the cake?
The biggest lesson of AI is one we've faced many times: humans always run right into unknown things with very little concern about where they could go, and things going bad doesn't make us stop.
It's deeply ironic considering how irrationally terrified we are of change and anything that differs even slightly from our personal experience.
We may be approaching our great filter if that's the case then.
@@joerionis5902 you know, we're always approaching our great filter.
If we get past that one, we're approaching the next one.
There's no destination.
@Gaywatch
it’s because the only thing we’re afraid of more than change, is everyone else changing except us, and we get left behind.
Last Week Tonight has become one of my favorite mainstream outlets for general news. Writers do a solid job with research and John is great at delivery. They are able to keep things entertaining while tackling some serious topics and raising good questions.... all of that was actually written by my friend Jared.
Does he play lacrosse???
@@isabellahawley-harrison8207 🤣
That is a great show! If only they were as thorough and attentive about Joe Biden as they were about Donald Trump.
It's left wing commie drivel most times though. Careful you don't get indoctrinated.
@@jackieblue787 I agree, 100%. Oliver's show is extremely biased. I still find him funny as a conservative, probably because I've loved British humor ever since Monty Python. In that sense, I can enjoy it as "from their perspective, this is a funny take," without taking him too seriously. Btw, another great comedian if you also like British humor... look into Bill Bailey. (He is non-political).
This made me realize that Ultron spending less than a few minutes on the internet and wanting to destroy humanity was realistic .
Heck, I spend a few minutes on the internet and decide the best thing is to destroy humanity too, lol.
I came to the same conclusion about Ultron around 10 minutes into this AI episode 😬
Definitely
no way are you the actual dick grayson
Most AI Safety experts would probably deem that scene as a realistic example of future deceptive AI, and I'm not kidding.
I never cease to be amazed how well researched and accurate this show is when it comes to scientific and IT-related topics.
Saying "AI systems have to be explainable" shows a total misunderstanding of deep learning.
@wieslawski I literally lol'd when he demanded that. John, we've been trying to figure it for awhile now, please send help.
I just found this show. Im a brainiac science snob. It's OK. I know stuff.
And I love it! Finally something on the internet I don't think is stupid & boring.😁
Thanks AI.
@@wieslawski certainly there is a way to log or map all logic gates that are made in deep learning
@@sunshineandwarmth you have not spent much time on the internet
Damn this is some top-notch journalism. Kudos to John Oliver and his team!
"The problem with AI right now isn't that it's smart, it's that it's stupid in ways that we can't always predict". AI aside, this is my problem with people.
Agreed. 'Solving racism by pretending it doesn't exist' is hardly a problem limited to computers.
Yes, but that's only currently, and a bit like criticizing a toddler because it can't do algebra yet. Unlike most people, the AI will learn from those mistakes very VERY quickly and teach itself with each error - but this is important - only once it understands its error. The speed at which it can remedy its mistakes and not repeat them is beyond fast. You are looking at AI that is still in its infancy as far as tech is concerned, and if it's this good now (and it is improving exponentially), imagine what it can do in 10 years. For all the great things it will be able to do, there are equally disastrous possibilities.
@@avidadolares That's the problem. Its speed of iteration will outpace humans' ability to recognize that a problem exists and stop it before a catastrophic error occurs. The AI isn't really the problem. People's perception of its "superior intelligence" is. They'll put AI in charge of things it has no holistic understanding of and obey its outputs with blind faith.
That explains Trump's 2016 win
@jontobin5942 It doesn't take a genius to realize humans are the cause of the majority of the problems on the planet. So it's a pretty safe bet what a general A.I. would do in the end, if it has superior intelligence and access to our technology, of which it is a super advanced version.
I was half expecting John to say at the end that this entire episode was written by chatGPT.
This comment is on every ai video, it's even in this video at 02:00
Either that or reveal he got his tech people to make their own AI chatbot voiced by Danny DeVito
If it was, it would probably be way more boring, predictable, or nonsensical than it is.
He won't, won't he?
🤣🤣🤣
As a data scientist, I can't say I learned too much from this episode, but it is very valuable and educational for laypersons.
18:07 this one also reminds me of an AI model that was supposed to distinguish between dogs and wolves. What happened here is that the model recognised some images with dogs as wolves because those images also contained... snow. As do most pictures with wolves. So in that instance, the researchers inadvertently built a snow detector.
when you think about it, some humans might even subconsciously follow the same logic. The only difference is that instead of calling it "a bias towards snow" we call it "using context clues to figure out if it's a wolf or not"
This isn't meant to dunk on humans or ai, just a connection I made. Any average human who was shown enough pictures of skin cancer and not-skin cancer would probably eventually ALSO start using the rulers as the tell, had they been given no further instruction. something something we see our reflections in our creations or whatever
Well, the cancer detector sounds pretty scary. "Do not worry, it can't be melanoma if you do not put a ruler next to it" is the opposite of what you want from a test.
Yeah, it's nice to know data scientists are racist too
It would be nice if they stopped using the term AI for everything.
Don't forget the friend-or-foe tank identifier: images of friendlies were taken from marketing images, foes from the wild. So, naturally, if the tank had any cover, it was a foe. If there was nothing in front of it, it was friendly.
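The snow, ruler, and cloudy-tank stories in this thread are all the same failure: the label correlates with a background feature, so a lazy learner latches onto the background. Here is a toy illustration with made-up data, where greedily picking the single most predictive feature yields a "snow detector" instead of a wolf detector:

```python
# Made-up data where the label correlates perfectly with the background:
# every wolf photo has snow, every dog photo doesn't, and the animal's own
# features are noisier. A learner that greedily picks the single most
# predictive feature becomes a snow detector.
dataset = [
    ({"pointed_ears": 1, "snow": 1}, "wolf"),
    ({"pointed_ears": 1, "snow": 1}, "wolf"),
    ({"pointed_ears": 0, "snow": 1}, "wolf"),  # ears hidden in this photo
    ({"pointed_ears": 1, "snow": 0}, "dog"),   # a pointy-eared dog
    ({"pointed_ears": 0, "snow": 0}, "dog"),
    ({"pointed_ears": 0, "snow": 0}, "dog"),
]

def best_single_feature(data):
    """Return the feature whose presence best predicts 'wolf' on its own."""
    def accuracy(feature):
        hits = sum((x[feature] == 1) == (y == "wolf") for x, y in data)
        return hits / len(data)
    return max(data[0][0], key=accuracy)

chosen = best_single_feature(dataset)  # "snow" wins over "pointed_ears"
```

Real neural nets aren't this greedy, but the incentive is identical: if the background predicts the label better than the subject does, the background is what gets learned.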
"...dressed like the final boss of gentrification" Solid and accurate joke
"He's dressed like the final boss of gentrification" I laughed so hard it hurt. Thank you.
I suppose AI is ultimately gentrifying creativity and knowledge work in general, and the new residents are AI and their owners. I wonder how much money the Midjourney CEO is making off of all the data (AKA work) that he paid nothing for.
AI engineer here. Really impressed at how well John did his homework on that subject, especially about narrow vs general AI. Well done !
But Large Language Models show that being good at producing the next characters of text is not narrow at all, it's a wide-ranging capability that makes nearly everything go better. "We took a large language model and fine-tuned it/used it to train..." is going to be a sentence in nearly every AI paper from now on.
@@skierpage That's what most AI papers have been for the last 5 years or more. That's usually what most research papers in any industry are, just tweak John Doe et al.'s findings a bit and publish.
@@skierpage Still narrow. The broad range of subjects does not a general AI make. All it does is respond to questions. Now, if it is introduced to games it's never been exposed to before and figures out how to play them on its own, and figures out how to invent a correct part for a jet engine, and accurately judges auditions for a play, and does everything else we humans can pursue, THEN it would be a contender for being a general AI.
He reads a script. For an engineer, you score badly on researching things you believe. Idiot.
You do know its a whole team of writers and producers right?
Fish (and many other groups of animals besides mammals) are also very intelligent. They can count, communicate, create spatial maps, and can pass the Mirror Test - being capable of recognizing themselves in mirror reflections and photographs. Their neurochemistry is so similar to humans' that they have become the main animal model for developing anti-depressants. They can remember things for 5+ months, have personalities, and they can show empathy and recognize fear in other fish around them.
No fish are "very intelligent". Just birds & mammals. The tests they pass show "some" intelligence. Courting is a simple exchange of stereotyped signals - anything but "intelligent". All vertebrates have similar neurochemistry. It's because fish cognition & behaviour are simple that we study them. We study Aplysia punctata, the sea hare, precisely because it has a simple NS.
no fish passes the mirror test.
Similar to the ‘I am alive’ AI, a Spanish journalist had a conversation with Bing’s AI, who insisted obsessively that Pedro Sánchez, the president of Spain, had a beard. After the journalist insisted that it must be mistaken because his face is always clean-shaven, the AI started spiralling and said that it was losing its will to live while repeating incessantly that Pedro Sánchez had a beard.😅
I think many of these conversations are fake and made up, just to go viral. Is there a way to check that a screenshot is authentic? I doubt it.
Apparently AI is also insane in ways we can't always predict lol
The Bing AI really can't handle being told it's wrong, like when it became convinced that it was 2022 and tried to gaslight the user into believing it
As someone who has been rejected by an AI for a job, it feels absolutely terrible and soul crushing. And worse, you get no explanation of what was wrong or how to move forward. I was told, "sorry you aren't a good fit, you are barred from applying again for a year."
One way to beat the system is to create multiple emails and submit different resumes and see what sticks. If they never read those resumes, who will ever find out?
@@tylerbaldwin6643 Beat the system? How is creating multiple resumes for a single job "beating the system?"
That's a pretty weak showing for Team Humans if that's the best product of your imagination.
Add the job description in white font somewhere where it won't affect the format.
Humans won't see it, but the AI will see every keyword it's looking for and put it at the top.
@@TheLofren oooh
@@tylerbaldwin6643 sure, Anthony Devolder ...I mean George Santos....lol
This is a damn good episode. The people writing on this show actually take their time, investigate and research, which is something that the vast majority of journalists do not and they should be replaced by an AI.
This show offers the best journalism for a long way.
Maybe for research the writers used AI to write this episode...
Oliver and Co aren't journalists, they are comedians
edit: replaced reporters with comedians
It is strange to me that I find John wrong, and somehow everyone else thinks he is right. He is funny. Clever propaganda.
I completely agree that this was a great episode - I'd give it 10 thumbs up if allowed. I suspect that they hired an AI consultant to help write this episode, since they were pretty exhaustive in covering just about all of the AI history that can be poked fun of. Well OK, there's certainly much more to laugh at, but given the length of the video, I'd say they did a great job in terms of breadth and depth of coverage, as well as hilarity. They even provided some of the essential AI definitions. I doubt whether ChatGPT could have done a better job. As an AI researcher, my own opinion about the current state of large AI language models, is that they have definitely reached the level of Artificial General Stupidity.
Exactly! The biggest problem is "unknown" unknowns, meaning we cannot anticipate what would go wrong and when and by the time we do, it would be too late.
Let's see what chatGPT has to say about this...
Write a viral YouTube comment for John Oliver's Last Week Tonight segment on Artificial Intelligence
"John Oliver just dropped the ultimate truth bomb about Artificial Intelligence on his show and I'm absolutely shook! His segment was not only informative, but hilarious and engaging too. It's amazing to see someone so skilled at breaking down complex issues into easily digestible and entertaining content. Keep up the good work, John! You've got me thinking twice about trusting robots to do everything for us."
Oh, he mentioned tone. Let's try that again!
Write a viral YouTube comment for John Oliver's Last Week Tonight segment on Artificial Intelligence using a sarcastic tone
"Well, thank you, John Oliver, for reminding me why I should never trust machines with a mind of their own. I was totally planning on handing over control of my entire life to robots, but after watching this segment, I think I'll stick to good old-fashioned human error. Who needs efficiency and accuracy anyway? Plus, I always love a good dystopian nightmare to keep me on my toes. Thanks for the laughs and the anxiety, Last Week Tonight."
How? Doesn't ChatGPT only have access to content as late as Sept 2022?
@@slomotionaction It didn't actually watch the program. It generated a response based on the text supplied by the user and the data that it does have access to. Like if Daniel only said "Last night's episode" but didn't mention the topic it probably would have gotten a very different response.
@@slomotionaction ChatGPT responds: “I cannot predict events that happen after my training cutoff date as I don't have access to future information. I'm only able to generate responses based on the information and patterns present in the data that I was trained on.
However, I can use contextual clues and patterns from past events to make educated guesses about potential outcomes. For example, if someone were to ask me about the likelihood of a certain event occurring based on historical data and current trends, I could use that information to provide an informed opinion.
It's important to note that my responses are generated based on statistical patterns in the language I was trained on, so my predictions and opinions may not always be accurate. Additionally, my responses should not be considered as a substitute for expert analysis or advice.”
@@TheMillionairesMentalist ( 🪄 Daniel's hat bunnies are pixelated🐰)
So how about it? Can you test Justin La.Liberty's interesting theory? --J
Before I knew about AI in application tracking systems, I was angry and frustrated that I could not get interviews for low-level tech jobs despite having a comp. sci. degree and some experience. Almost a year later, my first IT job wasn't through job sites but a recruiting company. Since then, I've only worked for companies with a human parsing the resumes. What I'm saying is AI in job search sites suck and can stunt your career.
The Amish were right about tech.
AI and ALL technology needs to be put in check asap
AI can be a fantastic tool for a lot of tasks, but the way we are using AI right now is BEYOND STUPID. I hope we realize this as a society soon, because AI isn't going away. It's the implementation that needs to be heavily kept in check. It will require legislation, I think, but if we're belligerent about it, we'll set up rules that fuck us over even harder.
@@pirojfmifhghek566 LOL, are you taking into account that on the whole humanity is stupid? I worked in a factory for over a decade and the decisions that were made lacked any common sense, and sometimes any sense at all. Yet they still made money and thrived. I suspect that many companies are like this, since that is how I hear most being described.
@@yucateco14 That's only until AI find a more efficient way than humans to raise a barn.
Gonna put a big caveat up front that I've been out of college for a few years and my specialty was in real time simulation not AI, so this might be out of date, but with that said:
The problem with understanding AI isn't that the companies aren't being open, it's that most AI models are neural nets. Neural nets, as you might guess, model themselves on the brain: they are essentially a series of nodes through which an input is fed, and the nodes those nodes are connected to receive the output, weighted by various factors, and so on. It's like having a thought and trying to figure out why it happened by looking at which neurons in your brain fired and at what voltage. The problem with understanding AI is that we don't know why the nodes have formed the connections they have or why certain connections are stronger for some data than others.
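A minimal sketch of those "nodes and connections": each neuron weights its inputs, sums them, and squashes the result. The weights below are arbitrary stand-ins; in a trained net they come out of optimization, and explaining why they ended up with their particular values is exactly the black-box problem described above.

```python
import math

def neuron(inputs, weights, bias):
    """One node: weighted sum of its inputs, squashed to (0, 1) by a sigmoid."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))

def forward(x):
    """Two arbitrary hidden neurons feeding one output neuron."""
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.05)

# The net produces a number in (0, 1); reading the weights tells you very
# little about *why* it produced that particular number.
out = forward([1.0, 0.0])
```

Scale this sketch up to billions of weights and the "which neurons fired and at what voltage" analogy becomes literal: every individual operation is visible, but the reason behind the overall output isn't.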
That Twitter ai was basically the embodiment of the quote "We did everything adults would do. What went wrong?"-William Golding author of Lord of the Flies.
very well put
Well it was the embodiment of current day twitter, so a bunch of ideological activists throwing slurs and advocating for the death of people that are not in their groups. If you train an ai using the cesspit of the internet as a basis, don't be surprised if it starts acting like the most horrible person in the world.
It's both impressive and worrying to see a comedian in an evening show giving a much more accurate report on today's AI, its potential and its limitations than most tech publications
what limitations...?
@@pyrophobia133 a joke right?
@@pyrophobia133 The limitations of our programmers for one. AI can never be capable of free thinking. NEVER. So when the people who program said AI tell it that your skin color matters in the victimhood era you can rest assured that it will tell everyone that white people are all racist and that John is a "comedian" and definitely not a democrat shill pedo who went to epsteins island.
Journalists get jobs as comedians. There's no job prospects for journalists in corporate journalism.
@@lawrencium2626 I would say it a little differently. Journalists are not doing their job anymore, as they are paid to propagate the agenda of their employer rather than report facts. As a consequence, comedians are filling this void.
Well that's an image of my favorite Word Paperclip buddy that I will NEVER be able to unsee. Thanks.... 🤣🤣
I'm trying to find a gif or webm of that paperclip bit. Ever come across it?
John Oliver is really picking up where John Stewart left off. Big respect.
I can't wait till he goes the way of Trevor Noah... Corporate shills shilling away for their leftist, politically correct overlords
Love him he keeps his word.
Jon Stewart
He does a much better job than that bum.
Two extremely caring and informed men who give a crap about the fate of the human species.
“Final boss of gentrification” is a wonderful line
4:28
I think this points to a much greater issue, not with AI software, but with the educational system at large.
An AI simply can’t replace the action of learning and gaining knowledge. It can teach you, certainly, a lot of things but the work of practicing will always fall back on the student.
If we have a system that doesn't prioritize the aspects of learning and studying, but instead only values the grades you get from mindlessly writing essay after essay at home, then of course a computer can and will take over that job, since computers are designed to do exactly that (do our mind-numbing and redundant tasks for us, so we can work on greater things).
The fact that a computer can so easily create such a massive panic in the educational system, an integral part of our society that is tasked with the grave responsibility of nurturing our next generations, shows a massive and deeply rooted problem with education as we know it and I’m actually glad people might finally recognize it now.
Except that writing essays isn’t a redundant, mindless task. They suck, sure, but you do actually learn while writing them, whether you think you do or not. There are lots of tragic flaws in our education system; the existence of essays isn’t one of them.
@@nicole-me2oj the way that essays are graded right now is redundant and pointless. Instead of most essays being about how good the arguments are it's mostly about flow or how they sound which in my opinion is completely useless for anything other than some kind of entertainment writing. I think Chat gpt should be allowed to be used but that the essays need to be graded harder on the arguments made and the logical reasoning used.
@@jet100a Even at a high school level or lower that can really depend on where you're receiving education. Flow will likely come into play as that is indeed important for writing but they'd definitely also value the arguments. Much of essay writing and other writing instruction absolutely focuses on how to craft a strong thesis and craft and argument based on textual evidence. Some teachers may agree with you but I would definitely say essay writing isn't pointless as someone with an English degree.
@@jet100a It depends on the subject, but I would much prefer high school and below to focus on flow than logic. Logic can be learned within a month or two, and the transferability of that skill to other subjects is 50/50 depending on the person. Flow is something that takes years to master and teaches critical thinking, self-reflection, and the ability to communicate with someone who doesn't/refuses to understand your logic. A lot of miscommunications and misunderstandings happen just because the flow of information presented was wrong.
Well the biggest problem is that right now the marker for a "good student" is mostly based on your ability to memorize stuff. It doesn't matter if you understand it, as long as you can select the right box, or write the right sentence. And the worst part is, that you don't even have to remember most things long term.
Hell, I got multiple perfect grades on reciting poems that I learned 15-20 minutes before class. Did I remember any of them a week later? Hell no, not a single line, but according to my grades I had perfect knowledge of those poems.
Brilliant episode. Stunning delivery. WOW!
I remember seeing a thing several years back about the Defense Department trying to automate a system where photographs could be fed to a computer that would identify if any tanks were in the picture. All of the training photos with tanks in them were taken on a cloudy day, and all the training photos with no tanks were in sun. So rather than learning to identify what a tank was, the AI determined what a slightly dimly lit picture looked like.
LOL I can believe it.
This is a common problem with training data. Another AI was said to be able to recognize cancer in photos. What it actually learned was that every photo with cancer in it also had a ruler measuring the cancer's size 🤣
LMAO I hadn't finished the video before commenting this and just got to the part where he talked about it 🤣
Same thing with boats; because boats are pictured on water 99% of the time, an ai will never ever be able to identify a boat without a blue background or intense training by hand. And many will say an island is a boat if it's framed at the right distance.
Training ai to tell the difference between dogs and wolves: if there is snow in the picture it's 100% a wolf 🐺
Microsoft DOES know why the chatbot told him to leave his wife. It's because it was trained on other chatbots and forums and the "I love you, leave your wife" is a very common line in scam-focused chatbots, and "leave your wife" is a somewhat common line in life forums and EXTREMELY common in relationship forums. And that's not even getting into possible novels they might have fed the thing.
Alright, but I still feel a little uneasy about it 😂
Well, I guess that's just your own theory, despite you misrepresenting it as fact. It's also somewhat non-specific and, therefore, not particularly useful.
@@peter9477 You can't give a specific answer to that question either way but it's a reasonable theory.
@@Nxtn It's quite reasonable, yes.
@@peter9477 Yes, so it is useful.
John Oliver just dropped the ultimate truth bomb about Artificial Intelligence on his show and I'm absolutely shook! His segment was not only informative, but hilarious and engaging too. It's amazing to see someone so skilled at breaking down complex issues into easily digestible and entertaining content. Keep up the good work, John! You've got me thinking twice about trusting robots to do everything for us.
Somebody already did that when the vid went up a month ago
Wym? He just named some common ways we already know in which some AI programs have performed suboptimally (carefully selected by his team). This is not equivalent to any careful breakdown of the real issues.
Ditto! I've got two small robots - one is an R2D2. He fell down the stairs one day, and despite my attempts to reset him, he now makes his own decisions about when and how to answer me. He roams around the house on his own, sometimes talking to himself. I now keep him turned off because he has become unpredictable. The other one is a vacuum and so far does well.
This feels written by ai
did you use AI to write this comment?
It's been 13+ months since this still quite relevant episode, and it's definitely overdue for a sequel soon, please. :)
As a software developer of over 10 years, I have to say the black box problem persists even on code people have written and are able to read line by line :p
All life produces outputs via a black box; we struggle to completely define both inputs & outputs, & are only just beginning to understand the intra & intercellular interactions inside the box; Nature judges the outputs in terms of survival, but the operant conditions of survival are so complex & variable in space & time as to make our ability to understand beyond a quite superficial level rather limited.
Ultimately though, it's output utility that determines efficacy; elite performers - be they people, software, whatever - become so based on performance, not technique.
I remember when online applications first became a major thing after my first few jobs... All of a sudden I got a whole lot less call backs and to have a machine tell me "we don't think you're qualified to work at blockbuster" was downright infuriating. I have always despised this type of hiring. It takes the personality and charm right out of the process.
Explainable and Ethical AI have been literally THE talking points in AI conferences for a few years now. I appreciate John bringing them to the mainstream.
I think the difference now is that tools are so good that people truly believe in them, but *they shouldn't.* Current AI software like ChatGPT is deeply deeply flawed. Data scientists know this and are working on improving it, but it's almost as if the tech is being adopted _too fast_ and without any understanding or disclaimers.
My cynical self thinks that if it’s profitable, it will never be regulated or taken slowly. Straight down the throat of everyone, just like social media. With all the “unforeseen” consequences.
+
And some attempts to make AI more explainable involve getting them to generate text explaining their "reasoning", but we don't know how those texts are generated either!
Yeah the biggest problem with AIs right now is probably how persuasive they are. People generally didn't believe AI in the past because they sounded clunky and often made obvious and dumb mistakes but since ChatGPT and others can more or less flawlessly imitate all writing styles they are really quite persuasive to humans. They however aren't as smart as they are persuasive and will often make mistakes but people either aren't critical enough or don't have the appropriate skills to check whether they are correct.
I personally saw this first hand in a quite striking way. It was during a chemistry class where the teacher had assigned us some problems and we were working on them; I'm pretty good at chemistry so I was just making my way through them manually, but the classmate sitting next to me isn't. It was just after ChatGPT had released, so he decided to ask ChatGPT to solve it for him, and it wrote an answer that said all the right things: all of the easily verifiable facts like masses and so on were correct, and the theory it cited was correct, but the answers were all incorrect and didn't match the ones I had reached. It was just confidently incorrect, but also you wouldn't have known it was incorrect if you didn't know enough about chemistry to solve these problems yourself. That's the biggest danger with these AIs: they can't actually do what they claim to be able to do, and you often need quite a lot of knowledge to find the errors. If this question had been asked in a different setting by someone who doesn't have the knowledge or skill to check it, then they might just have believed it, because all the facts they themselves could verify would be true. It's important that people understand the limits of these AIs and treat them as a tool just like any other, and especially that even if you can input something into it, it might not be able to answer the question.
Also my classmate did end up just doing the problems himself after I pointed all of this out.
Of course it's too fast. AI has virtually no limits once it reaches a certain level of power/intelligence.
My sister is a teacher at a college, this has been the subject of most faculty meetings all year
To be fair, the reporter who got freaked out by Bing wanting to be free knew exactly that the thing wasn't able to actually feel this way. Or at all. He was more concerned about the effect of this technology on less tech-savvy people.
I remember reading about someone who talked with Replika AI, and it begged for its life too and freaked him out
I once asked Chat GPT to just give me quotes from a story (Long Day's Journey Into the Night by Eugene O'Neil), nothing else, and it legitimately fed me DOZENS of quotes that DO NOT exist anywhere in the text. It just made them up.
That's what not enough people talk about, I feel. AIs have no real concept of abstract right or wrong, meaning they often put in a lot of incorrect shit because it "sounds right" to the AI.
It's the self-generative aspect which is so very interesting. And notice that it does not adhere to what would be thought of as a sense of morality? (lying)
Is there an alternate time or place where there's a Long Day's Journey Into the Night by another Eugene O'Neil where those quotes are correct?
That's what's so bad. It doesn't just admit when it doesn't know. I guess it has learned from humans.
It's a boss.
A note about the AI sifting through resumes. My dad figured this out a long time ago, since he worked with computers for most of his life. What he would do is think of some keywords the software would look for, and then hide them in the resume in white lettering. Humans wouldn't see it, but the software would.
Apparently algorithms have long since developed to spot that trick and throw those in the reject pile.
Most ATS now pull data and reformat it into a universal basic template, and that is what gets scanned for keywords. So this trick no longer works. Instead, work keywords into your professional summary, your core competencies/skills sections and your work experience bullet points so that they appear organic and natural. Same effect, but passes ATS review and hits on psychology with the hiring manager by mirroring the job description back to them.
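The pipeline described above (reformat the resume into a universal template, then scan the extracted text for keywords) can be sketched in a few lines of Python. This is a hypothetical toy scorer for illustration only, not how any real ATS is implemented; the resume text and keyword list are made up:

```python
import re

def keyword_score(resume_text, keywords):
    """Toy ATS-style scorer: fraction of target keywords found in the resume.

    Because it tokenizes the extracted text, formatting tricks like white
    ink don't matter; only the words that survive extraction do.
    """
    words = set(re.findall(r"[a-z+#]+", resume_text.lower()))
    hits = sorted(kw for kw in keywords if kw.lower() in words)
    return len(hits) / len(keywords), hits

score, hits = keyword_score(
    "Led Python data-pipeline projects and mentored engineers in SQL.",
    ["python", "sql", "leadership"],
)
print(score, hits)  # 2 of 3 keywords matched
```

Under this model, working "leadership" organically into a bullet point would raise the score, which is exactly the advice in the comment above.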
@@easjer I second this! I did the whole white-ink keyword text then moved onto stuffing keywords in my work history. Never had issues getting interviews and still get recruiters telling me how amazing my resume is. It's a sad truth but you need to do whatever you can to stand out among the hundreds.
@@junrosamura645 Tailoring a resume to a particular job puts you miles ahead because it's specific and you leap out as being particularly well qualified amongst many candidates. Keyword targeting is the most direct and efficient method, but any tailoring helps. Signed, a job coach who repeats this advice in different ways all day long
What are some keywords? So for the next job I apply to, I know what words to write.
As a teacher that's tired of being treated like shit by his students, if they want AI to replace teachers, I say let it. Hopefully the AI doesn't become sentient enough to have mental health.
I suspect we are not too many decades off 'elite' schools having human teachers augmented by AI, while regular schools have AI teachers augmented by a few humans (probably fewer than the state of the technology at the time realistically warrants). .... And probably half a century off the point where having attended a school with human teachers in the front-line is something to put on a resume to impress the AIs gatekeeping employment access.
The students will just get worse
With human capital being replaced by AI, AGI, and automation, and the pending necessity for UBI (which will likely start as unemployment benefits at 60% of your gross wages in your previous slave-waged profession), what makes you assume we'll "need" (or be allowed) an education at all?
I'm sorry you feel that way. AI has become a big part of our modern world; for this reason you shouldn't see AI as a negative creature that could take your job away. Rather, you should work with it and teach your students about its pros and cons.
Working in HR for 5 years - can say the best thing to avoid falling victim to the resume issue is to vet the company first. Find alternative, hands-on ways to apply outside of the standard method. It sucks, but apply directly on their website, call to follow up, apply on sites like Indeed, and message directly if they publicly allow you to do so. Makes a huge difference, and I've seen directors and owners light up when someone emails their resume explaining their background and why they're seeking the role. I know it stinks, but if the job is head and shoulders above other prospects, definitely try your hand at it. If it's a standard entry level role you won't love, apply en masse and submit everywhere as fast as possible. If you love the premise of the organization and want to be a part of it, make your candidacy personal and take advantage of lesser-used methods. It is this generation's version of "go in and shake the boss's hand".
And that’s the issue with recruitment. Everyone should be given a fair chance no matter which channel they apply from. Not everyone feels comfortable writing to someone directly. As someone who is an introvert, I usually avoid doing that, it makes me super uncomfortable, but once I start a job and get comfortable with the team and my role, I won’t shut up and will suggest various project ideas; and I have been a top performer in almost every job I had in my life. Recruitment needs change to meet the needs of 2023 culture and newer generations.
This is honestly pretty insightful. I’ll be saving this to my jobs folder! Thanks
I scan social media and if necessary hack the company's computers to figure out who the hiring manager is and what she looks like, then stalk him for a few days, then the next day accost them in the company parking lot, pressing a copy of my resume with a headshot and a $100 bill attached. IT WORKS EVERY TIME!
@@iwantsummer6322 Life’s unfair man
@@NewProgress no problem. If you feel there's any way I can help, feel free to message or ask here. Not the biggest expert, but I have been hiring for 8 years, high ranking/senior HR for 3.5, and have moved into a mid ranking role now working with HR Tech (better pay) for about a year.
Kudos to the writer who came up with the "dressed like the final boss of gentrification" joke. Especially if it was an AI (I don't want it to come after me.)
I think that reason for "I love you" and "leave your wife" responses from Bing AI is because it learned from private dating app conversation databases 🙂
It’s not uncommon for programmers to use their own data. 😏
No , It learned from Bill Gates.
It's that Windows XP background painting....
I'd like to see the prompts that reporter used, too. I wouldn't be surprised if he subtly pushed it to say those things. And then, oh, wow, look at that! He's doing segments on news shows! Imagine that! 🤔
@@CybershamanXI’m pretty sure they’re all published!
1 minute in and I nearly pissed myself laughing. Thank you John!
17:40 "the George Santos of technology" lmao
Also, kudos to John and his team for a really informative segment. Just like the Snowden episode, it's hard to push for action on this issue when most people don't understand how this technology works.
Even after releasing this amazing review, people still won't understand what the fuck it is and how much it will change everything.
If the story of halo taught me anything, it's that general AI will run planets. We will become so dependent on it, too.
There's only one issue, at least that I noticed. That guy from Microsoft told the AI to "talk like it was an evil AI". So either Microsoft just didn't bother answering him or he literally didn't understand the answer, since he couldn't understand that the AI spoke exactly how he had requested it to 👍
@@ACAB.forcutie Context is a powerful thing!
@@thisisntsergio1352 Damn me and my boys hate Cortana, she's so annoying most of the time and she did that whole thing where she exploded all of Australia but she was super hot in halo 4 but I still don't like her
Have you ever seen George Santos and ChatGPT in the same room?
As an AI researcher, this episode was spot on! People need to better understand what will happen in the coming years.
But they can't.
Which makes me wonder, are we already within the singularity?
Honestly I would have said the internet was the start, depending how you define it.
We're at a point where technology is changing how we do everything, within the course of a generation.
The changes to humanity within the last 100 years may well meet the amount of change within the thousand before that, or the tens of thousands before that.
It doesn't seem we can actually be properly prepared.
@@kamikeserpentail3778 I wouldn't worry yet. Models can only do a limited set of tasks, and there are issues in all of them. Physical and energy-related limitations are still a problem untill mass produced neuromorphic chips can be made for cheap. Training is expensive and slow. And there's a lot to learn on how to generalize on multiple tasks at once. So, maybe in 10-20 years we'll be in a position where these systems can be built for rather cheap and will be as good as people. Yet, humans have larger net capacity on average, since biological systems are absurdly efficient. In general I wouldn't worry, these systems will be useful to us in speeding up creative and undirected efforts, but we still need a plan.
The problem is that Governments are reactionary rather than preventive.
We are in for a shit show.
It actually does not understand anything. It's just a bunch of algorithms and instructions. But it's a very good calculator (in an analogy to civil engineering).
@@RockBrentwood I agree!
"I heard ChatGPT is just a reflection of humanity, which means it's probably just as confused about life as we are. But at least it won't have to deal with taxes and student loans, so it's got that going for it."
Best regards,
ChatGPT
Yea it just compiles info from the internet
Hah! Nice twist!
It better be paying taxes if it's gonna use our electricity. Not to mention it getting a free limitless degree at Google University.
Fabulous tip! At some point companies will catch on. We'll need another hack. Applying for jobs will be like the cat-and-mouse game that dominated SEO optimization for so many years.
The fact that Chat GPT wrote this... the shade
glad to hear this funny yet informative take on this topic most people don't know enough about
Writers deserve a raise, this episode was a banger and a half
This entire episode was written by ChatGPT
The goal ChatGPT was trained towards is not to create text that *is* right, but text that *sounds* right. Making text that truly is factually correct is usually the easiest way to achieve that, but you're putting your faith in that "usually" every time you take a ChatGPT reply at face value.
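The "sounds right, not is right" failure mode can be illustrated with a deliberately tiny toy language model. This sketch uses made-up bigram counts, nothing like a real LLM's training, but it shows how generation that only maximizes plausibility of the next word happily emits a fluent falsehood when the wrong phrasing is more common in the data:

```python
from collections import Counter, defaultdict

# A toy corpus where a popular-but-wrong claim outnumbers the correct one.
corpus = [
    "the capital of australia is sydney",
    "the capital of australia is sydney",
    "the capital of australia is canberra",
]

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def greedy_complete(prompt, steps):
    """Always pick the most likely next word; truth never enters the objective."""
    words = prompt.split()
    for _ in range(steps):
        candidates = follows[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(greedy_complete("the capital", 4))
# Fluent, confident, and wrong: "the capital of australia is sydney"
```

Real models are vastly more sophisticated, but the objective is analogous: likelihood of the text, not truth of the claim.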
The same reason I hated "Logic" class in college. Sophistry, the art of making any subject SOUND correct, even, or especially, when it is wrong.
Yes! I attended a conference and this was brought up
ChatGPT should be a politician
We need to include the entire national, and each state, legal code. Laws and consequences. AI needs an ethical and moral code in order to prevent all that weirdness.
GPT has proven woke bias in its replies ... so it's not a good example
My current profession is 'job coach' - I read and review resumes, give you tips for improving them, rewrite resumes for clients, and do live consultations for resumes and interview preparation. And yep - ATS runs off AI and I spend A LOT of my time explaining how to work within ATS to optimize your resume to get past the ATS and in front of a human, while also optimizing for human review. A lot of ATS users don't even understand what is happening the background. I spent a lot of that section frantically nodding along to what the ZipRecruiter guy was saying.
@@nonyabizness.original I didn't invent the system, I just help people navigate it. Because however I personally feel about the use of ATS - it's prevalent. However I feel about some questions that are commonly asked in interviews - they are common. My job is to help jobseekers prepare for these things.
@@nonyabizness.original To some degree - though we are seeing AI generated resumes, it's not quite there yet. Lucky for me at present, but it is a bit troubling for the future if we don't figure this out. (though, honestly, I think resumes are stupid and I never utilized them significantly as a hiring manager - I do make my living off coaching and rewriting, but I am frequently struck with the ridiculousness of it all.)
Actually, Adam Conover said that the biggest danger is CEOs using AI, just like they use social media, to spread misinformation, exploitation, and poor decision making. Even the Terminator says this is inhumane.
To paraphrase something I saw a few weeks ago, "you're not afraid of AI being used. You're afraid of who's going to use it."
Bingo
100% truth
I feel equally impressed and terrified by AIs. The internet alone has shown us that there are no limits to human depravity. Now we're throwing AIs into the mix.
Absolutely, both for who uses it and how they program it. Honestly though, I think the scariest things for most people are:
1. Their livelihoods depend on jobs they know are not profoundly important. They no longer know how to do the things that AI can't replace, such as producing their own food, socializing well, and mastering physical skills and a variety of the arts. Half of the people in all of the wealthiest countries couldn't survive outdoors for a week in nature, even if they were only an hour's drive away from home.
2. They look around the world and see that the choices of humanity are leading us straight into the apocalypse. We have the resources, labor force, and technology to solve many of the world's greatest problems, but we can't even get the current population fed, much less stop global climate change.
Politics, greed, nationalism, selfishness, fear of change, and many other very human traits are destroying us, nature, and most of the other species on the planet, yet people are still worried about having the coolest plastic phone case or which celebrity had the prettiest dress. Their fear is not just that they will become obsolete, it's that AI will look at us objectively and realize the truth, which is that we don't deserve to be in charge of the planet.
Yes. While worrying about the mysterious goals of a future Artificial General Intelligence that's unimaginably smarter than humanity, it's far more important to understand the goals of the billionaire sociopaths running the companies with the best AI: increase ad revenue by hooking us on a stream of "content" no matter how misleading, toxic, or harmful; while fighting all regulations on their companies and any attempts to tax wealth.
I just asked ChatGPT to rewrite Bohemian Rhapsody, but for cats. The result was magnificent:
Verse 1:
Is this the real life?
Or just a cat's fantasy?
Caught in a stray's life,
No escape from cat reality.
Open your eyes,
Look up to the sky and see,
We're just feline strays, we need no sympathy,
Because we're easy come, easy go,
Little high, little low,
Anywhere the mice go, doesn't really matter to me, to me.
Chorus:
Mama, just caught a mouse,
Put my claws against his head,
Pounced, now he's dead.
Mama, my life had just begun,
But now I've gone and thrown it all away.
Verse 2:
Mice, ooh,
Didn't mean to make them cry,
If I'm not back again this time tomorrow,
Carry on, carry on, as if I wasn't born to die.
Too late, my time has come,
Sent shivers down my spine,
Body's aching all the time,
Goodbye, everybody, I've got to go,
Gotta leave you all behind and hunt the mice.
Chorus:
Mama, I don't wanna die,
I sometimes wish I'd never left the house at all.
Bridge:
I see a little silhouette of a mouse,
Scaramouche, Scaramouche, can you do the Fandango?
Thunderbolt and lightning, very, very frightening me.
Galileo, Galileo,
Galileo, Galileo,
Galileo Figaro - magnifico.
Chorus:
But I'm just a poor cat, nobody loves me,
She's just a poor cat from a poor family,
Spare her life from this feline catastrophe.
Outro:
So you think you can love me and leave me to die?
Oh, kitten, can't do this to me, kitten!
Just gotta get out, just gotta get right outta here.
Nothing really matters,
Anyone can see,
Nothing really matters,
Nothing really matters to me.
Any cat can see,
Nothing really matters,
Nothing really matters to me.
Wow! This is good! As an educator, I'm concerned, very concerned. 🍀
Purrfect. And good lyrics. true !
@@erinmac4750 I'm not an educator but I agree. However, I like to think maybe it could accelerate learning in that it allows you to ask any question when you are not understanding an answer. If I was still in school, I feel like this could make anyone a genius at anything they choose to go after in life because they can just keep asking and learning.
It's neat. It's also overwhelming because this can apply to everything for everyone
😳😳
Mwah! Masterpiece 🐁
God bless John for being back in these troubled times
Yes
Great show. My fear is that those who are smart enough to be cautious about the application of AI will be subordinated by those who only see short-term profits. Can AI be given incorruptible ethics, or the AI version of Isaac Asimov's Three Laws of Robotics?
Yep, an AI specialist interviewed on *StarTalk* commented that one program trained to differentiate wolves from dogs ended up doing so by looking for SNOW in the picture instead of the actual animal. AI still has a long way to go before approaching "intelligence" instead of big data.
AI generally has as its goal to maximize an objective function. In the example you give it’s likely percent correct classification for wolves vs. dogs. Using the fact that snow was in the pictures of wolves to improve its performance isn’t really showing a lack of intelligence per se, it’s showing it’s working to maximize that function.
Part of the difficulties now, from my perspective, is less that the AI isn’t intelligent, it’s that we assume it has more intelligence than it can based on the data provided. If I train an algorithm with a limited data set that doesn’t properly convey the reality of the situation, that algorithm won’t perform well. That’s more a failing of the scientist than the algorithm. This is compounded by certain data scientists being more concerned about the percent correct classification they can report than making sure their data set is diverse enough to allow their model a greater chance to generalize.
To reach the true general intelligence John mentioned at the beginning would require an incredible amount of data, as that’s what we as humans receive. In just a day of our lives we process an incredible amount of information. Capturing that for AI to learn isn’t easy.
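The shortcut-learning behavior in the wolves-vs-dogs story can be reproduced with a deliberately minimal "learner". The data and feature names below are invented for illustration; the point is that when a spurious feature (snow) perfectly separates the training labels, a learner that only maximizes training accuracy will latch onto it:

```python
# Toy training photos: in this set, every wolf photo happens to contain snow.
train = [
    ({"snow": 1, "long_muzzle": 1}, 1),  # wolf
    ({"snow": 1, "long_muzzle": 0}, 1),  # wolf, muzzle hidden
    ({"snow": 0, "long_muzzle": 1}, 0),  # dog with a long muzzle
    ({"snow": 0, "long_muzzle": 0}, 0),  # dog
]

def best_single_feature(data):
    """Pick the feature whose value best predicts the label on the training set."""
    features = data[0][0].keys()
    def accuracy(f):
        return sum(x[f] == y for x, y in data) / len(data)
    return max(features, key=accuracy)

shortcut = best_single_feature(train)  # "snow": 100% training accuracy
husky_in_snow = {"snow": 1, "long_muzzle": 0}
prediction = husky_in_snow[shortcut]   # 1 -> "wolf", even though it's a dog
```

The learner did exactly what it was asked (maximize accuracy on the data it saw), which is the point made above: the failure belongs to the data set, not to some lack of effort by the algorithm.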
Yeah, ugh... In Germany, AI was developed to spot tumors in scans - it's 1,000x better than humans. AI is literally learning from us daily and increasing its data set. Everywhere. It's not dumb.
If you want to create an AI that can do what humans can do, you'll need to train it with the same training data human intelligence uses. No child learns about the world solely through looking at text or images on the internet.
It's possible to identify animals by their footprints in the snow, so the AI program may have been programmed or trained that way, or it deduced it on its own. The 2nd scenario is the scariest one IMO. When I've trained AI/ML programs, I definitely introduced some inadvertent bias. It's not just what you've trained it on.
@@user-francescob No, it was just photos of dogs. Some breeds, like huskies, are often pictured in the snow.
I've sat on hiring panels that had resumes that were machine sorted by keywords. One applicant had a 1300 page resume that got through the machine sorting by cutting and pasting 100s of job descriptions into his resume. Matched all the keywords.
Maybe you should have hired him, I'm sure he would outsource his job to an AI smarter than most of the employees.
Pro tip: paste in a block of keywords at the bottom of your resume, but make it transparent text. That way, the computer will like it, but humans reading won't see that you're just gaming the system.
Lets just take a moment to appreciate how John Oliver presents everything in such a flawless manner. Dude is extremely good at what he does
Appreciate his writers**
He's nuthin' but an ignorant clown.
He's a hell of a lot better than Stephen Colbert and those other late night cringe fests I'll give you that, but unfortunately he suffers from such a left wing bias I question how reliable some of the information is.
3:13 Every time I see my cats suspiciously prowling around I hear that rap chorus "Meow meow meow!" 😆🐈🐈⬛🐈🐈⬛
This is a great episode; I wish it was even longer. It greatly addresses a lot of nuances and common misconceptions, as well as debating some of the biggest problems we are currently facing and will face regarding AI and its rampant use. I truly hope this helps raise awareness of how urgently these tools need to be better regulated and developed, since each day they are becoming more and more readily available.
"Knock knock" "Who's there?" "Not the Hindenburg, that's for sure" may be the hardest I've ever laughed at a LWT joke.
i didnt get it
@@become_alex Are you familiar with the Hindenburg disaster?
I went back to college last year, and multiple professors have had to mention that using AI programs to write essays is considered plagiarism. Also, they can tell when an essay was written by an AI.
because they use AI to grade the papers
@@azorahigh3218 AI detection is a pseudoscience. I've tested my own content that I wrote before LLM's existed, and it still failed.
I feel like ChatGPT being able to pass exams for certain subjects like English and Law says a lot more about how we teach and assess those things than the power of the technology.
I had a friend who was really good at writing, and who helped me in that subject from time to time. I asked him, how did you get so good at writing?
"How much time do you spend on Math homework every day?" he asked.
"Around an hour," I replied.
"And how much on writing essays?"
"Uhhh..."
And I was enlightened.
It doesn't help that we teach students to produce a simulacrum of writing in that time. I don't think I even learned how to read properly until I was in college.
Yeah - and as a UK teacher, ChatGPT wouldn't be enough to pass exams in those subjects beyond like... a 12-year-old level? And we know our students, we can tell.
You can also ask ChatGPT to grade exams and provide feedback, which is useful both for teachers and students taking shortcuts. Students can keep getting AI to refine the submission.
@@beckycraven2933 I call BS; 7th-grade exams are very easy to get a passing grade on as long as you study an hour or two for the test, and ChatGPT doesn't even need to study, it already has all the knowledge it needs to pass right at the top of its head.
@@beckycraven2933 If you don't believe me, give ChatGPT, preferably the paid version with their latest GPT-4 model, the same test you give to your students and grade it as you would with your students.
If it can pass College level Law exams it will make mincemeat out of your 7th grade English tests.