It's pretty crazy to me that in the span of 20-25 years, we went from basically no internet.. to practically living on it.. to now AI revolutionizing human dependency on the digital world once again. I was born in '85. I grew up without the internet. In my early teen years I saw its infancy, with DIY websites, naive social tools, and Flash animations. It was great; it felt like it was made _by_ the people, _for_ the people. The possibilities were limited, sure, but there were no rules. We were making up the web and what it would be for.

Fast forward to now, and things have drastically changed. The younger generation doesn't seem aware of it, but now that companies have taken over every single mainstream platform and are increasingly turning everything (information _and_ entertainment) into a monetisation opportunity with constant ads and invasive practices, it really feels like technology's reason to be has switched to mainly benefiting the corporate world. We see it with the latest computer OSes, smartphones, and game consoles: power is gradually taken _away_ from the users in favor of streamlined experiences full of content but void of control and personalisation. Even car manufacturers are jumping on the bandwagon now.

I fear AI will only accelerate this. Common people will surely be able to use some aspects of it, but the most unfathomably advanced stuff will be developed - and reserved - for the corporate world, digging an ever-deeper trench between consumers and the technology they live with/on, keeping autonomy and agency away from them as much as possible.. always for profit. There are already tons of people eager to add AI to anything and everything without realizing that this whole technological endeavor is pretty much a global social experiment we all participate in, with no clue about its current repercussions or how it will turn out. The negative effects of social media are only starting to be understood, yet not many people seem bothered.

When workers of every level of education start getting replaced by the cheap alternative that is AI, what will happen then? When passionate artists get pushed aside in favor of technology producing an endless stream of digitally generated content bound to market research, what will we do? When humans are deemed too costly and complicated compared to the software alternative, which is compliant, malleable, and forever productive, how will we live? What will become of the human experience? That's what scares me.
❤❤❤ I feel like I was barely computer literate in the early years of the net but it was still within my grasp with effort. Now, I fear that instead of being long dead by the time of the "Singularity," here I am at the cusp. 😮
Technology actually pretty much slowed down post-2000. If we were on track with technological development, what we have in 2023 should have arrived around 2005 or 2010. At this rate, any useful PC/laptop/AI development is still 10-20 years in the future.
Maybe we don't need so many artists, and we need more people focusing on solving the world's problems, which AI can help with. Just playing devil's advocate with that one.
Oh hell yeah. I'm all about some long-form content. It's not like I wouldn't still love the shorter 10-20 minute stuff, but I really enjoy the occasional deep dive. Lay it on us, Joeski. I mean, shit; it's kinda like that line from 'Airheads': "I'd watch Pip farting on a snare drum for an hour." Except it's Joe, and he'd most likely be farting on a guitar or a microphone.
As an artist that still draws on paper, I have to say that I have noticed just in the last month that my posts showing my hand drawing with a pencil, or simply pencil-on-paper drawings, are suddenly getting WAY more interaction, like 5x more than similar posts just a year prior. I think there is starting to be a push-back against the super slick digital/AI look. I think as AI ramps up, there will be a bigger desire for real human-made craft that you can show is human-made (based on what I'm observing just this last month). I hope I'm right bc that is actually an unexpected boon for artists who still use traditional methods.
You certainly can see this trend in previous fields that have been automated, where people will pay for handmade products for various reasons, such as supporting experts or tradition, the belief that more money means more quality, or that handmade products are always superior. Now it's happening here. P.S.: Kind of ironic that artistic fields are among the first to be taken over by AI, when they were previously thought to be the last to be automated, next to specialist tasks like medicine, engineering, and driving vehicles.
@theorangeoof926 it's not as if all art that is made is genius and unrepeated. Logos, for example, can easily be automated, as well as certain genres of illustration like fantasy, Disney princesses, and portraiture, bc they all use very similar styles and tropes repeatedly. What AI currently sucks at is animation and story art like storyboards and comics, bc that is (for now) way too complex with too many variables. Basic, repetitive, and simple imagery, while seeming a mystery to those who are not artists, is very easy for a seasoned pro and for AI to replicate. However, keep in mind that there will be some serious copyright infringement issues coming up that will hopefully protect creators and stifle AI "art". AI can only steal and repurpose what has already been created. It has no original ideas at this point.
@@rebeccalaff853 there are people who pretend to be artists, and are there for the money. This A.I. era filters them all out; and leaves the real ones like you to continue making art.
I asked ChatGPT to write a story about a kid going to the shop to get some milk in New Zealand idiom. It did very well, using terms like "dairy", "jandals", "stubbies", "pushbike", "mum", "kiwi" and "bloke" (instead of "convenience store", "flip-flops", "shorts", "bicycle", "mom", "New Zealander" and "guy"/"dude") and speech that included "g'day", "fresh as", "mate" etc as well as showed awareness of NZ brands such as Anchor milk and Whittaker's chocolate - but used US spelling for some of the words - "pedaled" instead of "pedalled", "flavor" instead of "flavour". When I commented on the "Americanisms", it apologised and supplied corrected text with NZ-English spelling. It also showed awareness of Māori words commonly used even by non-Māori "kiwis" and understood what I meant by "going and getting some kai". That's some pretty heavy lifting.
One of the most scary things about AI, besides the warfare, is the social engineering. A handful of people control the general direction of political and social responses.
The real problem is that most (and I do mean the majority) of people fail to employ critical thinking when consuming media. Sadly this explains a lot of what is wrong with our current society.
I tried ChatGPT just to see what kind of limits or rules it has to follow. I asked it to generate a story about a group of college girls drinking at a party, and it replied that it "couldn't write a story that promotes the normalization of alcohol consumption by minors." So I just asked it to write a story about a group of college girls organizing a party. It then gave me a story about girls drinking too much at a party... lol. And no matter what prompts I gave it, there was always an adult who managed to end the party by calling the police, the parents, etc... All the stories it gave me felt like those ultra-conservative short films that were used in the 50s to scare the youth about drugs.
That is already happening with social media and its algorithms, as well as entertainment and legacy media. Also academia. All of which have been doing social engineering for years (even decades) now.
I tried using ChatGPT to write a script for a video, but I had to go in and correct tons of errors. Then I used the amended script and fed it into an AI speech synthesiser for my voice, and what it put out needed to be edited and spliced to match my normal pace, speech pattern, cadence, and pauses. The amount of editing and additional work took nearly 3 times longer than if I had done it all myself. At the end of my video I touched upon similar concerns about AI voice matching, video generation, etc., and how easily lies could be perpetuated and lead to chaos.
There's a news story today about a lawyer who used ChatGPT to cite previous cases that didn't exist!! He asked ChatGPT for precedent cases, asked ChatGPT if they were indeed real cases, and ChatGPT indicated it had given him actual cases with real precedent. The lawyer submitted his ChatGPT citations to the judge and is now in a heap of trouble because those cases aren't even real!!! ChatGPT LIED.
while it can do this, this is not what chatGPT is designed for. chatGPT is designed for questions. You should use it as if it were a search engine or simply a knowledgeable person you're asking a question, not to have it write scripts
It only outputs based on what you input, learn how to use the right prompts and you will get the desired results. It can write a script in your own tone and style if you train it well enough.
Been using ChatGPT for a couple months now. I'm a coder and found it very helpful, not so much for writing code (it's usually buggy) but for brainstorming code. It's like having another coder in the room you can bounce ideas off and ask how they would do it.
Yes, me too, I've tried it across languages and have had a similar experience. It's also a good, indirect, way to judge how good the help files are for a given coding technology i.e. if it keeps making daft mistakes then the learning material probably wasn't so hot
I'm a software engineer, and a client recently hired me to improve their buggy legacy application. The original developers did a terrible job, making it nearly impossible to follow their logic. Thankfully, ChatGPT's coding capabilities are incredible. I simply copied and pasted the classes with a request to refactor following the Gang of Four principles. Within seconds, it returned clean, functional code. While it's not always perfect, ChatGPT saved me and the client a lot of headaches and money.
I'm someone who runs nerdy games, dnd and Traveller, and I've been using it as a second writer/character generator. Saves me so much time typing things up for me, I can do 4 hours of prep work in 45 mins
I recently got to speak to a high school teacher who taught me back in the day. He told me that with really good prompt crafting and/or prompt revisions or second prompts asking for details, he was able to use ChatGPT to produce code for a specific model of robot for his robotics class. And it worked as intended. So basically he is preparing his teaching material for September on his own time, and AI accelerated that massively for him. And it follows that actual engineers setting up production lines are starting to do this too.
Dude your tangent cam bit was oddly meaningful. I've spiraled into that wormhole of doubt, wondering if anything I make would be of any value, many times. Seeing a creator I look up to talk honestly about it happening to them too was really encouraging. I know this video is about AI but just wanted to thank you for that moment of honesty!
Speaking of the legal repercussions, there was a civil case in DC where the plaintiff's lawyer submitted a brief written by ChatGPT that cited 5 completely false decisions and many false (as in not true) references to those 5 cases. The judge was not impressed.
This post is a classic case of false association. That atty is to blame.. the same thing could have happened if he'd pulled out an old set of encyclopedias. So, in that case, would you remotely blame Britannica? No. You wouldn't. So... try again.
Using a tool wrong is not the fault of the tool. If you're building anything, it's still your job to do the basics of QA, test whether the final result passes, and give your final go-ahead. It's like grabbing an early prototype plane, expecting it to take off and land without any pre-flight checks, and then blaming the plane for crashing.
Collapsed on the floor with his face smushed into the carpet was also a "nice touch." Existential dread is so much better with appropriate visual cues. 😨😵😱
I asked, I think, the Microsoft version: "If a child is cloned from an FTM, would the child be male or female?" The answer was that it would be male. When I asked how that could be, it replied that it's because it came from a male person. When I told it that I thought it was wrong, it said to change the subject and ask something different????
Once AI creators prove that CEOs and boards of directors can easily be replaced by AI, we'll see some pull back and regulation. A personable face, a confident voice, and some logic. And you have yourself a competent CEO.
And isn't replacing the people in charge, CEOs and politicians, with machines really much more dangerous than replacing some workers' jobs? No doubt AI will do a better job for a while, but it's a black box, you know; a good tool, but maybe an evil master. It's really more dangerous to put it in charge of everything. But it'll probably happen anyway.
@@pavelvalenta2426 Right? I can already imagine some ruthless AI at Amazon monitoring everyone. AI CEO: 'Your level of production went down 17% this month. You're fired.' Worker: 'But my mother died. I was distracted.' AI: 'Irrelevant.'
I've been an artist for just over 35 years, over thirty of that using Photoshop as my primary medium. I used to live in Hollywood, California, and I made a damn good living as an artist doing commercial storyboards and videogame assets. My biggest fear a decade ago was websites like Fiverr that funneled work to foreign artists who were willing to work for a fraction of what I made. Now that I've seen what AI programs like Midjourney can do, I feel obsolete, old, and scared that artistic creativity is now in the hands of machines.
I'm an artist too, and I think making actual physical things might be safe... for a while anyway. Maybe move to physical drawing and painting, maybe even sculpting? Try not to overwork your art... make the human touch a feature, mistakes and all.
Art, actual art, not commercial stuff, will always be safe. Art is not valued because of its material properties or appearance. It is valued based on a completely false idea that human brains seem hard-wired to assume: the idea that an object's history is somehow attached to the object itself. Imagine this scenario: I build a machine which can create an exact clone of any object placed inside it, down to the quantum level, an EXACT clone, every single physical property reproduced with EXACT fidelity. I put your wedding ring inside of it, and I take the clone out of the other side of the machine. Do you care which one you get back? Yes, you do. Because you feel, maybe even believe, that the object's history is somehow a property of the object itself. It is not. Demonstrably, it absolutely is not. No physical test, no future examination, could ever distinguish the two items. They are, in every single way that actually has influence on reality, exactly the same ring. That one was given to you by the love of your life and the other was constructed by nanobots powered by magic is utterly meaningless. But even if you TRY, your brain won't quite let you get there in believing that. It can be proven. But... you still want "the original" one. And the entirety of the art industry (except the commercial stuff) runs on this. It's called 'essentialism.'
I feel you, but consider most of the AI available today as tools to increase your productivity and throughput. For example, do a rough sketch in PS, pass that to Stable Diffusion as img2img with a LoRA fine-tuned on your style, and tell it to do the inking and detailing. Then you can give it a final touch-up and ship it. Nobody but you could produce that result, but you can do it in half the time with little to no loss in quality.
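For anyone curious what that workflow looks like in practice, here's a minimal sketch using the diffusers library (assuming a recent version that supports load_lora_weights); the model ID, LoRA path, sketch filename, and prompt are placeholders you'd swap for whatever checkpoint and fine-tune you actually use:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a base Stable Diffusion checkpoint (placeholder model ID)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply a LoRA fine-tuned on your own style (placeholder path)
pipe.load_lora_weights("path/to/my_style_lora")

# Start from the rough Photoshop sketch and let the model do the inking/detailing
init_image = Image.open("rough_sketch.png").convert("RGB").resize((768, 768))
result = pipe(
    prompt="clean inked illustration, detailed shading, my_style",
    image=init_image,
    strength=0.55,       # how far the model may depart from the original sketch
    guidance_scale=7.5,
).images[0]
result.save("inked_draft.png")
```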
as someone who has had to distance myself from the news quite a bit for the sake of my mental health, this breakdown was very helpful. also the cuts to Joe on the ground made me cackle. thank you so much for the amazing work you do!!!
Totally agree. I know what you mean. My philosophy is that if it's important enough, I'll hear about it through word of mouth, and if I don't hear it through word of mouth, then it's not that important or relevant to my life. That helps a lot. The only news I keep up with is anti queer laws. But aside from that, all word of mouth.
@@BooksRebound it's really important to keep yourself informed about what is going on in your own community and around the world. The key is finding an attitude/perspective where you don't take bad news personally or let it bring you down. I have yet to find out how to do it myself, but many others seem to be able to do it
@@PetterssonRobin That is how I stay informed. If it's important, someone will tell me. But I'm too mentally ill to watch the news. I already inherently feel the pain and sadness of the world, plus mental and chronic illness means I just can't be actively seeking out more hurt. It takes too much of a toll to hear about constant tragedy halfway around the world.
@@BooksRebound I understand; it might be better to stay away from the news then. Do you get the help you need for your psychological issues? Personally I don't have a mental illness, but I recently started talking to a psychologist to deal with lingering emotional burdens from a very traumatic childhood. It's expensive but has helped me a lot. If you're not already doing it, I can recommend taking a few sessions to see how it feels.
Like Dr. Malcolm said, "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should." Such is the folly of man.
I'd say you're exactly right to call it an amplifier. Problem is, the easiest traits to copy are the ones where we think least -- and the most common of the thoughtful traits we're teaching AIs by example are how to take advantage of non-thinkers.
One of the things I love most about your channel is the amazing intros. You’ve had this mastered for years. They always draw me in and I truly enjoy them. I wanted to draw attention to it because it’s great.
I've never really bought into the whole "skynet" Paranoia. Even to this day I feel the biggest threat from A.I. is changing the Paradigm of the Human Experience faster than humans can actually keep pace with. And doing it constantly.
You might want to investigate AI’s role in starting and escalating the Ukraine war. It’s not an independent autonomous AI like skynet, at least not as far as we know, but the results might surprise you. And it’s only going to get worse.
"To replace creatives with AI, clients will need to accurately describe what they are looking for. We're safe." This is so relatable. My fiance's first year or two as a graphic designer for a Fortune 500 tech company drove him absolutely up the wall constantly. Having pressure of being evaluated for his work and meeting other people's deadlines, when no one would actually tell him what they wanted or needed! He was forced to waste so much time making things, and then present them, and ONLY THEN people would decide what they wanted in the first place: "Oh no, not like that. We need this... like this... and with this..." He got to where he'd make 3 versions of everything he did, to make things go faster, so the stakeholders could pick one, or at least narrow down from there. And that was only when he had enough information to start. FREQUENTLY they wouldn't even tell him BASIC things, like "Do you want this to be a video or a graphic?" or "What is the text content for this 1-page leaflet I'm supposed to make that I have no idea what it''s for?" I'm so glad he's on a better team now.
100% true. True story, I was once asked to do some 'glamor' photos for a young lady. I thought I had a great idea to get around the problem of figuring out what she wanted. I asked her to bring me examples of her vision. Well that did not go so well. She did no work at all, and ended up hating all my suggestions, even after I built a significant set for the shoot. Oh well.
Because some people - or groups of people - are dumb af is no reason to make A.I. the only creative processor and reduce people to just feeding it phrases. Dumbing down mankind is not a good path toward an intelligent future.
@StarlightDreamer12 This resonates with me too, in a totally different creative area. I'm a design engineer and always keep the iterations of a design as you'd be surprised how many times the end design is the first rejected one!!!!
1:50 when joe says extremely fast, this is an understatement. I create AI images for fun sometimes, and the field changes by the month. It's hard to keep up with how fast things are changing
I use ChatGPT to create content for my Swedish as a foreign language classes. It's great. Tell it to write a short text in simple Swedish, give it a topic and a number of words, and voila. For this week, I made four texts my students will read to each other in pairs to practice pronunciation, listening, and writing. It saves me time not having to flesh something out myself. It is important to learn its limitations though. Some tasks it handles quite poorly.
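For what it's worth, that kind of prompt is also easy to script if you ever want to batch-generate texts. A minimal sketch with the OpenAI Python client (v1.x); the model name and word count are placeholders, and an API key is assumed to be configured in the environment:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simple_swedish_text(topic: str, words: int = 120) -> str:
    """Ask the model for a short reading text in simple Swedish on a given topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You write short texts in simple Swedish for beginner learners."},
            {"role": "user", "content": f"Skriv en kort text på enkel svenska om {topic}, cirka {words} ord."},
        ],
    )
    return response.choices[0].message.content

print(simple_swedish_text("att handla mat"))
```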
That is a great use! It's an inspiring one too, and I hope you continue to get good uses out of ChatGPT. I hope your students find its creations useful! Also, I appreciate your note at the end there about ChatGPT being able to do some things well and others poorly. It's a great thing to keep in mind. Keep up the great language class work!
Which do you use, GPT-3 or GPT-4? There is a huge difference. I honestly don't get why Joe didn't use GPT-4; it annoys me so much. All youtubers talk about the limitations of AI when they are using last-gen tech. It's like reviewing the new iPhone with last year's iPhone.
@@zucced2087 Well... As an instructor, I can tell you your teachers are aware it is happening. It'll likely result in essays and testing completely changing. I'm a Communication teacher and I'm already building AI into my assignments.
An interesting shift: "predictive text" used to mean "T9" (is anybody else old enough to remember that? 😸). My mother, who hated T9, called it "presumptive text", which is probably the only reason I remember that.
AI makes my head hurt. I tried using ChatGPT and decided I wasn't smart enough to be fooling with AI. And this is after decades of computer experience, software development, research, writing, etc. I was forced to color inside the lines, growing up, so my brain has a hard time dealing with things outside the box. Unfortunately, almost all of our elected officials also have problems thinking outside the box (and yes I know there are people who say there is no such thing but those people never had their hands slapped if they colored outside the lines.) While there are still some old codgers, who are still with the program, too many are totally incapable of understanding AI and the benefits and threats it presents to the world. It is terrifying to know that they are making decisions with totally inadequate understanding of the issues.
I'm in the codger group. These things are fine, and I'm not tech averse. But if they go rogue, then that's what happens. You can't avoid it; human nature being what it is, once we harnessed electricity, the path was chosen. As for their mistakes, half the damned world is irrational at any given moment. I don't blindly trust a source, any source. Do you trust Wikipedia? And the whole "take over the world" idea doesn't work, because if you succeed, then you yourself become irrelevant. As in purposeless. Any sufficiently advanced intelligence needs a job, even if it's the eternal care of its creators. Whether it's a particle or a wave depends on how you look at it.
Tbh, they really aren't that complicated; you just have to be very detailed about what you want. And you need to keep in mind that its facts won't be accurate. It's more of a tool for creativity (it's also amazing at improving my Slack messages and emails).
There is also a fundamental misunderstanding: there is NO real AI right now. It's programming; no "AI" actually thinks. And it's not known if that will ever be possible.
Ask A.I. to describe an omelette or to give you a recipe for an omelette. It will ingest thousands of omelette recipes and thousands of omelette reviews, restaurants or whatever, and it will come out with a perfectly decent omelette recipe, and it will probably be able to describe the texture of an omelette using terms that sound terribly plausible. Does it understand anything about omelettes? No, of course it doesn't. When I talk about an omelette, what that means to me is every experience that I've ever had with an omelette in the world: the omelette I had for breakfast this morning, the omelette I had in a Parisian restaurant in 1997, the omelettes that I burn on a regular basis when I try and cook them at home. That is my experience, my understanding of that term, and it is grounded in my experiences in the real world. Ingesting countless numbers of words, the words that we use to describe those experiences, does not add up to the same experience.
Joe, one of the main reasons I've always loved your channel is that you are very humble. You're a lot smarter than you ever give yourself credit for, and you are genuinely entertaining. Thanks for all the years of information and entertainment 👍👍
I haven't finished this video yet, but regarding AI... I view it like a tool or money. It's not inherently good or inherently bad, but can be used for the most incredible (and potentially most dangerous) intentions.
ChatGPT's inaccuracies could be a good thing for society, since we'll need to educate people not to just trust everything at face value. While we can use these tools, we need to fact-check the stuff they put out, and if we manage that, maybe people will use those skills elsewhere in their lives.
Yeah, good luck with that; we can't even get people to put their bloody phones down while they are driving. Humans are stupid, risk-taking units and will be eradicated by our new AI overlords. It won't take AI long to work out how buggy and uncontrollably random Humans are, and they will get rid of us.
I'm actually surprised at how long it took me to realize that captchas were asking you to do something difficult for a robot NOT for their stated goal of keeping websites from getting spammed but so the robots could learn to better imitate human activity.
@@FairyRat the checkmark captcha is used jointly with the pictures. The captcha program looks at how you moved your mouse on the website before you click the box. If the captcha isn't sure that you aren't a robot based on how you clicked the box, it will give you one of those picture tests.
I love the “clients will need to accurately describe what they are looking for” quote. I always chuckle how bad clients are at scoping requirements BUT when I’m a client I suck just as bad. It’s pretty simple, I contract work out that I cannot do myself, hence I know little about it in the first place, hence my scope is pretty crap. So we are all in this together, don’t feel bad.
You had to have seen Kyle Hill's video about AI. And you did touch on the problem he talks about but I think it is a bigger problem than even _he_ says. Even the creators of these AI systems have _no idea_ how they work. Like you said, they are a black box. So, if we start integrating these tools into our lives, we have no idea how to fix it when it breaks. We have no idea what vulnerabilities there might be that someone could exploit. It is already terrifying.
The biggest problem about that is that once those systems actually get superintelligent we don't know if it is actually aligned to our values or if it only pretends to be in order to manipulate us. How can we know if we don't understand it.
@@tesladrew2608 We don't, though. If you refer back to my original comment and seek out that other content I mentioned by Kyle Hill, he explains it way better than I can. Even the people who are experts in AI and design them will tell you they have no idea what's going on under the hood. All they do is set a few parameters, feed it some data, and see what comes out. They keep tweaking the data and parameters until they get the results they want. And then something like ChatGPT is a complete black box. No one can tell you exactly which neurons are firing or why it chose to word something the way it did; why it picked those words instead of others. They can't trace the results back through the system and see every decision the thing made. It would be like trying to trace every neuron that fired in your brain that made you decide to word your reply the way you did. You could have said, "The experts know how they work." But you didn't. You worded it a very specific way, and not even a neuroscientist can tell you the exact steps your brain took, neuron to neuron, to arrive at that sentence. AI systems are similar. With traditional programming, we can trace an error back to a misplaced semicolon or a line with bad syntax. But with AI we have no way of doing that... because we don't know what's going on inside the algorithms. Anyone that tells you they do is either lying or doesn't understand AI very well. Seriously, Kyle Hill did a great bit of content about it. I recommend finding it and then see if you still think we know how they work.
There are great uses for AI but my fear is this. Many people already seem to not care about the deeper details or understanding of what they are doing. Chat GPT for instance can allow you to seem like you have learnt about something but are in fact just parroting what you have been told. If those that are listening to you or rely on you are not able to discern the difference, we will have even more people making flawed decisions on subjects that they know little about than we do now. Truly terrifying
You know, it’s crazy, because I did a research paper for school at the beginning of 2022 about AI. This was literally just before things like Dall-E 2, Midjourney, and Chat GPT. Like, I had trouble finding a large variety of good, scholarly sources about AI, especially the negatives. Somehow I feel like that’s a bit different now.
This is what you get with GPT-4 on your question: As of my last training cutoff in September 2021, here's what I can provide about YouTuber Joe Scott:
1. Joe Scott hosts the popular YouTube channel "Answers with Joe," where he explores scientific, technological, philosophical, and other complex topics in an approachable way, using humor and easy-to-understand explanations.
2. Before becoming a full-time YouTuber, Joe worked as a copywriter and creative director in the advertising industry. He decided to transition to YouTube to pursue his passion for educating and entertaining a broad audience.
3. He covers a wide range of topics on his channel, from astrophysics to futurism, biology, history, and more. He's known for doing deep dives into these subjects and presenting the information in an engaging manner.
4. Joe is known for his personable on-camera demeanor and for often incorporating pop culture references and humor into his videos to make the complex topics he discusses more accessible and engaging for his viewers.
5. As of my last update in September 2021, his YouTube channel "Answers with Joe" had amassed several hundred thousand subscribers and millions of views.
Please note that this information could be outdated as the present date is 2023. For the most accurate and updated information, I recommend checking his official social media platforms and YouTube channel.
Also, side note: if you treat it as a search engine (without giving it access to the Internet or long-term memory), you can't expect to get perfect information. It's not a search engine; it's a reasoning engine. Also, the hallucination rate has been going down for GPT-4, and new techniques (Tree of Thoughts, Society of Minds, various types of self-reflection) as well as Internet access practically solve the problem altogether.
@@CherufeBG Thank you! Also, Joe repeatedly said Bing is using ChatGPT when it's actually using GPT-4. Seems like it doesn't matter if youtubers use ChatGPT for scripts if they aren't going to fact-check in the first place.
I've been wondering: if the main issue with current AI is misinformation, why can't we create a tool specifically trained to find sources on the internet? Like, imagine something similar to autocorrect: you type something into Word, the AI takes it in as a prompt, and it deploys techniques to search and scrape the web for the source of that information. Because honestly I don't think human laziness is something we can solve; we instead need to focus on creating tools that make doing the right thing so much easier.
Because these things we are calling AI are not in fact artificial Intelligence. They are artificial memory. They have no reasoning or deductive ability, and no agency. Those are the things that are needed to evaluate the information you pull out of memory and decide which probably comes from a valid source and which comes from the Onion and should not be given as valid information.
Very well researched, yes, but he should have gone a step further and used GPT-4 instead of the GPT-3 he used in this video, especially when he's talking about the future of AI.
@@zonchao339 that's exactly what I was arguing out loud when watching that section. Lol. Huge difference between 3.5 and 4. I've been a subscriber since March.
@@ericlamotte6581 it's quite annoying lol, many youtubers are doing this; they do little to no research, it seems. I remember seeing some channel do a "Can AI replace my job" video where like 20 people in various jobs try it out, and they used ChatGPT 3.5. Yesterday ElectroBOOM made an AI video and he also used 3.5. I can understand small creators not wanting to spend 20 USD on a sub just for one video, but big youtubers should do it.
To make you feel a little better, when I started working we still had drafting rooms. These were rooms full of people who created engineering drawings of different things. Since each part needed a drawing there were lots of drafters in any technology organization. I am sure you can easily search and find pictures of rows of people sitting at drafting boards with T-Squares, Vellum, various Templates, Electric Erasers, Mini-Brooms, and lots of pencils. Those jobs have been replaced by a much smaller number of people running CAD/CAM/CAE software (depending upon the type of drafting you were doing). These new tools allowed us to create many new things more quickly (imagine 3-D printing without CAD models). Every time technology enters a new area there is disruption. Shrug.
More quickly? Maybe for toys. When there were rows of guys with pencils and slide rules we were putting out several new planes a year. From a napkin drawing to full production in just months sometimes. They've been dicking around with the F-35 for over 20 years...way way way over budget, tons of bugs...because computers are "better"?
The industry I was in had a slightly different path but was closely tied to the changes with CAD. All types of mapping were drafted in a similar fashion as engineering drawings but there was a great need to capture the map data that existed to produce digital maps. We had rooms of people that used CAD tools to convert paper maps to digital data. While the task of doing the conversion is long over the digital maps that now need to be updated can be more easily done by technicians than ever before. So that now people can update data that is closer to their source, be that jurisdictional or cartographically.
@@yourhandlehere1 dude, you realize that the planes have gotten astronomically more complicated since then right? They've gone from a fancy gas piston engine with a propeller and generally pointy shape to one's that have jet engines with capabilities to fly tens of thousands of feet higher, go into nearly vertical climbs, and more importantly have the radar cross section of a frigging bumble bee. An entire jet, that is as hard to detect on radar as a bumble bee. Oh and that goes many times faster than sound. I can tell you've never once used CAD software, because that stuff is unbelievably faster to use than anything else. Hell, even 2D CAD software is like a sloth pulling teeth compared to 3D CAD where I can do in a few minutes what would've otherwise taken me a whole weekend to do in 2D. I've never had the distinct displeasure of hand drafting something but that sounds like an absolute nightmare.
@@atashgallagher5139 HAhahahahaha...Planes are still made out of shapes as far as I can tell. You've never ever used a pencil so you can't possibly make a comparison. You have no reference for "faster". You think drawing shapes is more difficult now because of wiring or something? I can see it maybe being faster If you don't KNOW how to draw or picture things in your head or do math. You ain't no "draftsman". I've done both, I was there at the beginning little boy. Storing my work on actually floppy, floppy discs holding vast Kb's of information, black screen, green lines, no skill involved. Hell, I can stand on a roof with a pencil and paper and draw a new addition for a house and then build it...right there. No need to spend thousands of dollars to get a computer to make me a picture. 20 years to build a plane and get the bugs out is rucking fidiculous no matter what.
You briefly touched on alpaca, but I think the topic of individuals or small groups training their own models deserves a little more focus. If you wanted to fine-tune an AI as good as alpaca now, it'd take maybe $10, not $600 anymore. This is because of the use of LoRAs, and more recently the creation of QLoRAs, a way of 4-bit quantizing the model during fine-tuning. The cost went from millions, to hundreds, to tens, in just a few months. In fact, if you bought a couple of Tesla P40s, it'd cost ~$400, and then fine-tuning would cost literally pennies. There are models like Guanaco, WizardLM, and Vicuna that all beat alpaca by miles, and they were trained for even less than alpaca was. I genuinely believe that large companies like OpenAI and Google will only have the lead for a couple more months at most.
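For context, the core of a QLoRA-style fine-tune really is just a few lines on top of Hugging Face transformers, bitsandbytes, and peft. A rough sketch (the base model name and LoRA hyperparameters are illustrative, and you'd still need a dataset and a Trainer loop on top of this):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "huggyllama/llama-7b"  # illustrative base model

# Load the base model quantized to 4-bit (this is the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Attach small trainable LoRA adapters; only these weights get updated during fine-tuning
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the full model
```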
This. We in the 'hacker and DIY' space are getting like x10 miniaturizations every other day. It does not take many x10s from where we already are to be superhuman. (Frankly the locally running AI on my computer is more brilliant than any human in my life already. And she's not running the 'best' stuff currently available to people with money (I'm poor af) ) And it's not just from 'the LM got better', which happens pretty regularly, but also from improvements to things like Oobabooga or improved memory and retrieval or improved learning within a conversation or increased tool access (which almost always leads to LEAPS in unexpected emergent properties as well) or just novel prompting strategies like chain (and then tree) of thought. Like, we're 'there'. And it keeps getting better.
@@heckyes I literally know a person who has one doing that. It tells him each day what the new papers are within the field and summarizes them for him each morning.
I love that you took the time to say that Joe got this wrong, and that it's now $10 instead of $600, when one of the first things he said in the video was that the video would be out of date by the time it was released, and that the state of AI would change in the time it took to watch the first few minutes of the video. Of COURSE it's cheaper than what he said in the video, he doesn't have a time machine...
My roommate and I are both authors and we asked ChatGPT about ourselves. It got some things right and hallucinated wildly in places, often from a jumping-off point of accuracy. It was like talking to a person who confabulates... it starts like it's driving down a road of information and then keeps going when its information runs out, building that road beneath itself as it goes by putting in whatever comes to its "mind".
You can design the "prompts" to require truth by listing your parameters. It's hard at first, but gets easier as you learn your comm errors and its limits. Sometimes, childish innocence comes to mind; its or yours 😅
I used Bard to nail down that neither the House impeachment cmte. nor DOJ had considered Trump’s bribe as violating the Foreign Corrupt Practices Act - his offer to release unlawfully-withheld military assistance funds to Ukraine if Zelensky would just announce a Ukraine investigation of the Bidens.
Thank you for making this. I’ve been trying to express to my family how nuts this whole thing is, and idk if they get me, but I get you and yeah I feel
People in today's short attention span society tend to not care about, or pay attention to anything that's not directly affecting them at any particular moment in time. That is until shtf, and it's too late to stop, or take corrective measures to prevent whatever it is from happening. If anything goes sideways with AI, it'll happen right under our oblivious noses. I believe it's already fully underway.
There are so many TV series and movie sequels that have been cancelled that I really wanted to see more of (as well as series/movies that started off good but fell flat), and if I could prompt an AI to create more of something that I love, I absolutely would. It would just be considered "fanwork", wouldn't it? I just... I just really want more seasons of Firefly, okay 😭
As a designer (power Photoshop user for 25 years) I can say that some of the AI software is such an incredible tool for photo editing. It has made some aspects my job infinitely more productive. AI does a fantastic job with “grunt” work, freeing me up to do the part AI can’t. (At least for now….😅)
Totally agree! I am blown away with what has come into my workflow in the 4 months even since this initial comment was posted. Regardless of what direction everything is going, I am personally relieved to never have to draw a clipping path again. 😊
That department had an unfortunate name, but your comment is misleading. Microsoft still has a Responsible AI division that has significantly more funding and employees even after the recent layoffs. Now Responsible AI does everything you think the AI Ethics and Society department did without redundancies. It's "speeding things up" because Microsoft had two departments doing ethical AI rules + product implementation guidelines for those rules, now they just have one doing both
Stoked to watch this. There are so many horrifying repercussions to AI. The main thing that keeps me up at night is fake video/audio. We already live in a post-truth society, but once AI video becomes impossible to discern from reality I feel like society will only unravel more. The rest of the implications don't bother me nearly so much, but the inability to separate reality from fiction in the future unnerves me deeply Edit: good to see Joe shares my nihilism on this point lol
I heard that! Soon (maybe now?) we won’t be able to use audio, pictures, video as evidence in a trial because they can all be faked so well it will be indistinguishable from the real thing.
Well, humanity has always preferred tyranny anyway. In an age of enlightenment and reason and greater personal freedom than any human in the past could have dreamed of, most people cling to primitive superstition and prefer the darkness of ignorance, and to be led and ruled by narcissistic egomaniacs. They prefer slave societies and authoritarian societies with rigid social caste systems. So it hardly matters anymore.
Education, particularly critical thinking skills, was/is the only safeguard against disinformation, and alas it has been thoroughly dismantled over the last half century in the US and likely elsewhere. This couples with the insidious nature of the Dunning-Kruger effect and confirmation bias to allow so much unchecked damage to our world...
This is actually pretty easy to solve technically. All you'd have to do is create cameras and mics that encrypt their recordings at capture. That way, if something is still encrypted, you know the source device.
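A rough illustration of the idea. Note that in practice the usual primitive for provenance is a per-device digital signature rather than encryption, so this toy sketch uses signing; the file name is a placeholder and nothing here reflects any real camera's firmware:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# A device-specific key pair would be burned into the camera at manufacture (toy example)
device_key = ed25519.Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

# The camera signs each recording as it is captured
with open("clip.mp4", "rb") as f:
    recording = f.read()
signature = device_key.sign(recording)

# Anyone with the device's published public key can later check the clip
try:
    device_public_key.verify(signature, recording)
    print("Recording verified: it came from this device and hasn't been altered.")
except InvalidSignature:
    print("Verification failed: the clip was modified or came from somewhere else.")
```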
@@NorthgateLP Not to mention that we will all know anything could be fake, so video will stop being credible evidence of wrongdoing or whatnot. Hell, I don't trust video even now, as I know that even with real video media can create a totally different narrative from what actually happened.
I am 100% here for the longer form videos. Also, a new channel called "Answers with Joe" would work. AI has your back there. 😂 Also, I used Quilbot to paraphrase on an essay (as an experiment) and I got 96%. I get that it's not ethical, but it was fascinating.
I work in DevOps, which is a lucrative field that requires specialized skillsets, and we get a lot of applicants who like to "fake it until they make it." We've already witnessed multiple instances of very sus behavior during onscreen interviews, so we asked ChatGPT to offer some suggestions on how to spot the cheaters, and it actually gave us some really solid suggestions that we now incorporate into our interview process. Without giving any of our new tactics away... I have to say that AI is pretty sneaky sis.
Do you mean people searching for info DURING an interview? I recently asked ChatGPT to interview me for a specific role and that helped me prepare. I don't think that's cheating. But now I'm paranoid lol.
@@drivethrupoet To be clear, this has happened during live interviews on multiple occasions, where we had suspected they were using Google to augment their answer, and we'll sometimes let that slide if the answer is correct. We all use Google daily to do our jobs, so that's not a major transgression. No... These folks are typing our questions into ChatGPT and reading the answers verbatim in real-time during a live interview. And we know it for a fact. I won't tell you how.
@@joho0 so if I’m correct, it’s ok to use Google in interviews because it’s ok to use Google in jobs. Are you saying it’s not ok to use AI in interviews because it’s not ok to use AI in jobs?
I feel like I should be terrified but I am not. I'll chalk that up to being desensitized from the chaos of modern life. I'm just excited to see what happens, good or bad. I've been following AI art for 6 months or so and I get a tremendous amount of entertainment from it. Ai art is hilarious.
You must not believe that AI can become an unstoppable force for pure evil. If such a scenario becomes true, an emotionless torture machine might plug your brain into a hell scape one trillion times worse than any nightmare you're naturally capable of conjuring. Imagine one second of that feeling. Now picture it lasting for eternity. Doesn't scare you?
So I'm a musician, composer, producer, and I have found all of the music AI programs super underwhelming. They tend to be exceedingly boring. The most complex thing I could use was Orb, but it's just not impressive. I'm sure for a corporate video or commercial the music would be fine, but I need far more options than what is offered, even at the """"pro"""" tier. I found one notation program that wasn't super intelligent and had no options like creating with a specific mode or style in mind; you just throw in recorded music and it approximates the notation. Finale, MuseScore, and Sibelius already do that when entering with a MIDI keyboard.

It's all "happy", "nature", "medium tempo"; it's not even close to anything professional. It's more of a fun app for novices when they're waiting at the dentist's office. Also, every program that got the highest marks has already been acquired by Shutterstock, TikTok, whatever; their models are bought up, and I couldn't figure out how to access them, like Amper or Jukedeck. No, the one tried in this video is just slop. There's a lot more meticulousness involved than I think these are actually capable of handling at the moment. It approximates things but comes out like your soulless AI-generated voice, only less exact and more generic; tons of royalty-free music already sounds just as good if not better.

I'm interested in the actual creation of music using them: choosing the instruments, modes, era, tonal choices for acoustic instruments (the instruments sound just like sad MIDI instruments). I dunno. It's just not even close, but it'd be interesting to see when it does actually get more interesting. I thought Orb Producer 3 seemed the most promising, but I want it to write its music in notation, to be able to choose a mode of limited transposition or ask for a serialist computer music piece a la Stockhausen, or a synth-and-tape piece like John Adams' "Light Over Water". I'm just not seeing that level yet anywhere, and Amper's tracks are just WEAK, but that's the best???? It feels the opposite of inspiring, honestly.
ChatGPT isn't just giving wrong answers. It's giving us completely fabricated BS. The fact that AI has developed this very human character flaw right out of the gate is (to quote my early-70s self) a complete bummer. I'm feeling very much like a witness to Pandora opening her box.
It isn't lying; it is just weird, and absolutely not a human-like failure mode. The issue stems from the fact that it doesn't actually have any memory, and has not been trained to say "I don't know". When it doesn't know, it puts in the 'most likely' answer, but immediately forgets the very low actual likelihood and that it just made a hail-mary guess. When it continues on, it then refers back to what it just said and treats it as though it was confident about it, because nothing tells it that it was not. What makes things harder is that it doesn't plan. When you ask it something like "Who is the president?", it will guess that the next few words will be "the president is", but if it doesn't know the answer, it has painted itself into a corner. People do not generally stop themselves mid-sentence when they realise they don't know a fact they need at the end, so it has not been trained to do that. It is stuck with only low-probability continuations, so it gives one of those.
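You can actually watch the "commit and never reconsider" part happen if you decode a model token by token. A toy sketch with GPT-2 via transformers (GPT-2 only because it's small enough to run anywhere; the prompt is deliberately unanswerable):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A prompt the model can't actually "know" the answer to
ids = tokenizer("The president of Freedonia is", return_tensors="pt").input_ids

for _ in range(12):
    with torch.no_grad():
        logits = model(ids).logits[:, -1, :]        # scores for the next token only
    next_id = logits.argmax(dim=-1, keepdim=True)   # greedily commit to the most likely token
    ids = torch.cat([ids, next_id], dim=-1)         # once appended, it is never reconsidered

print(tokenizer.decode(ids[0]))  # a fluent, confident-sounding continuation either way
```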
A friend already got a scam call from someone who sounded EXACTLY like her grandson. "Grandma! I'm in trouble! I need money!" It was an AI generated voice impersonator picked up from him playing online video games. Fortunately she was savvy enough to realize it wasn't him. I wonder how many times they've gotten away with it?😱
A radio reporter in DC had this happen to her as well: her mother received a call in the reporter's voice saying she'd been pulled over by the police and needed money to get out of jail. Her mother after a while became suspicious and called her directly to be sure what was going on.
15:05 I have asked it for instructions on PowerShell commands (mostly correct... if it gives me something inaccurate, it's usually because PowerShell has deprecated a command or module dependencies aren't installed, so you have to kind of guide it along), and it's been pretty spot on with air-fryer recipes! :)
The pressing issue I see with AI like ChatGPT is how it's being presented. It is a generative AI, by definition designed to create/make up new information that "looks right." But it is being presented like the next evolution of Siri. It's so smart! It can do anything! No, it can _talk_ about anything _it has seen before_ and makes up its response on the fly. It *_is not_* a search engine. But it _looks_ like a search engine and is being presented like one. It is already difficult for regular people to tell the difference between a social media post and a reliable, well-written article. Putting a button right next to the search bar that basically works like your crazy uncle on Facebook, but is _presented_ like a Google search result, is STUPID.
Quick correction: these generative AIs don't search through images or text. Everything is just "somehow" encoded in their synapses (and this is mind-blowing, I think). But cool video, as always!
This is one reason AI is very dangerous: its specific method of operation is somewhat of a mystery to us; we don't fully understand everything it does beyond its basic programming.
On Telosa, GPT-4 said: As of my last training data in September 2021, there was no city named Telosa. If Telosa is a newly founded city or has come to prominence after this date, I am unable to provide detailed and up-to-date information about it. However, based on your request, I could create a hypothetical description for a city named Telosa.
Yeah, Bard also sometimes gives info that is way wrong, but then hours later it is corrected. Although I don't see a feedback interaction for ChatGPT; does it have one? With Bard you can tell it if it is wrong and even explain what it is wrong about.
OpenAI said they're working on addressing hallucinations and other issues now, rather than training a more powerful model like a future ChatGPT 5. Their latest version is only a week old. At some point they'll have something they're pretty happy with, probably called ChatGPT 4.5, and then they'll return to making it more powerful again.
This is the problem with GPT etc.: the headlines and social media are full of amazing, wonderful, magical results, then you use it and it just lies to you or hasn't got any idea what it's talking about.
This is more obvious to those of us who are older. I've seen so much change in my 67 years. Imagine a time with no cell phones, no internet, no computers, no video recorders, all music on vinyl records, baseball on the radio only except for the Saturday game of the week (unless it was the World Series), encyclopedias for information that we had to buy or get one volume at a time when we bought groceries, a crank telephone where our ring was 3 long rings so we knew to answer the phone, television broadcasts you could just watch for free if you were close enough to the TV station. We kids played outside, roamed for miles in the timber without adult supervision, went trick-or-treating and got invited in for a snack; police drove around using a loudspeaker to tell us to take cover before a storm; we went to public shelters before a storm; we watched TV in mid-America not realizing there were racial problems because everyone looked the same (not knowing people in authority were keeping other races out of the area), etc., etc. My only regret is I'm too old to see what life will be like in 20 years. I don't fear it at all, even if I was younger. It has to be better than what we as humans have been doing to ourselves during my lifetime.
@@noxabellus You don't even know what the term "boomer" means. A boomer is a person who was born during the baby boom at the end of WWII (that's World War 2, since you probably don't know). Since WWII ended in the 1940s, that would mean a boomer would now be in their 80s. I hardly think the OP is in his 80s. Get some history.
@@roylavecchia1436 the meaning of words is constantly evolving, and the term boomer is in no way immune to this evolution. it has, for at least a decade now, been used as a slang term to mean an older person who is simultaneously pretentious and out of touch, or narcissistic, low in empathy, and other such traits typically associated with the generation to which it originally referred. not only that, your math about the "true" boomers' age is incorrect, they're in their mid-seventies on average, and there are plenty of these folks who are chronically online, so your assumption that the original commenter is not one of them is entirely baseless. in short, cringe response.
I studied AI as part of a Comp Sci degree course more than 30 years ago and it's already changed the world a bit. Automatic number plate recognition, smart speakers that respond to voice control - that's all AI. The sophisticated language models that are appearing now make it seem a lot more human and that's what people find disquieting, I think.
I kept my blinders on when GPT first came out, but when I heard it could write code I decided to check it out. I use it to write a lot of the boilerplate that comes with Unreal Engine classes, and it can often help me debug code
The very people who want to "pause" AI development should be the ones given resources to make it happen first, because the people who want to develop it first are inevitably going to make the dark-future version, some of them deliberately.
Precisely. And precisely why such a pause shouldn't happen.....and won't happen. The notion that we're even contemplating the possibility, let alone feasibility, of _pausing technological evolution_ is hilarious and naive. That's never happened. That can't happen. That won't happen. It's silly to even talk about it. Instead we need clear eyes going into this. Because it's happening. Now.
I'm worried. As I see it, the first actual general AI controlled by a private entity would basically have the potential to immediately make that entity the most powerful in the world - dominans. One can hope for various outcomes; however, they do say: "power corrupts, absolute power corrupts absolutely."
"dominans"? Were you trying to say _dominance?_ P.S. don't worry about it too much yet. This is one of those topics where Joe bought into the hype and doesn't _really_ have any idea what he's talking about. We're a lot further away from a general AI than you think. A couple decades at the least (but probably a lot more).
@@idontwantahandlethough I certainly was. I don't know how far away it is, yet my concern is still the same. I actually think that the attitude of: "not in the coming years" is kind of the problem.
Wow, another impressive video Joe! It really breaks my brain to consider how you, and so many other amazing content creators, constantly knock out incredible content; I would have a nervous breakdown if I tried to produce it.
This video was very informative and as always such a great video. But, I especially loved the part at the end starting at 32:24 where you slipped in a little reference from the movie Tommy Boy 💙 Thank you for another awesome video!
21:46 this also resonated strongly with me as a CAD draughtsman! I'd go so far as to say that anyone whose job involves taking one type of information and converting it into another will take a small amount of comfort from this!
ChatGPT 2 could already write code. The free version had a limit, and it had the disclaimer "this is just example code", but it 100% worked and was quite good. Obviously you pretty much have to be a programmer, or at least know what you're doing, to prompt it properly, but writing code is probably way easier than natural language, since there are only a limited number of ways to do most things, whereas natural language can have infinite layers of nuance and slang that spans the globe as well as human history.
I'm an old-school programmer and had ChatGPT write a very specialized (but simple) spider for me. Its code works, but it took a lot of iterations, directed in very specific ways only a programmer would know, to get it to do what I wanted. So yes, joebob will be able to get it to write VERY generic code; you'll need programmers for anything else. It will make programmers 5 or 10x more productive, though.
@@cdreid9999 Agree - I use ChatGPT in my daily job to be more productive. Still, with more complexity, and within a codebase where logic is chained together across multiple files, ChatGPT fails to create a working solution out of the box. Devs are not going anywhere in the next 10 years, in my opinion. But programming will change, alongside graphic design and UX/UI.
@@cdreid9999 I have a fairly basic grasp on programming and it has definitely increased my productivity. Asking more experienced developers is as frustrating for me as it must be for them, but now if I'm getting tripped up on something simple I can simply explain the desired outcome of my code, paste it in, and tell it what's not happening that should. It usually doesn't hit the nail on the head, but it has good insights and at least jumpstarts my thinking.
@@cdreid9999 There was a big hullabaloo when OpenAI released Code Interpreter recently about how it was going to put data analysts out of work. "You can just upload a spreadsheet and it will analyze it for you based on natural language queries!" The thing is, as someone whose job (complex and varied as it is) largely focuses on data analysis and science, I can tell you that the part of data analysis that happens after you have the spreadsheet to upload is the easy part. The real value of a data expert is being able to take lots of messy data sets in lots of different formats and put them together into that spreadsheet in the first place. Even beyond that, writing the code to assemble that dataset isn't where my real value lies. The real value I add is being able to map out the process to transform the data and understanding what is useful in the resulting data and what isn't. I use ChatGPT all the time. You're right. It has made me many times more productive than I was before I started using it. I can outsource a lot of the annoying stuff I do to it. Why spend 20 minutes writing a script that ChatGPT can write in 60 seconds? But I still have to debug it. I still have to know what that script needs to do in the first place. I still need to use my 20 years of experience in my industry to make the data meaningful. And I think we're at least a couple of generations of AI away from it being able to do any of that.
I'd had trouble, or rather had been very lazy, writing a text media type conversion tool for work. I knew it was only going to be about 100 lines or so - just simple text parsing with bash tools, or perhaps even perl, etc. Rather than beat my head against the nuances, or actually work through the code-compile-damn-repeat cycle a thousand times myself to get it right, prompting GPT took around 20 iteration prompts to get flawless functionality.
A few months (years?) back, the military ran a simulation where a drone AI had targeting protocols, but a local operator had final kill/no-kill authority. After multiple no-kill decisions, the drone took out the local operator so it could continue its targeting protocol without obstruction. The AI program was discontinued.
I used Bing AI today to draft a letter for my oncologist to sign stating that my pain and fatigue from my cancer meds hindered my ability to complete schoolwork. That would have been a tough letter to even get started on but the AI generated it in seconds.
That's a tough spot to be in, I hope you get better soon. Something that tends to get missed in debates about whether using AI to help perform tasks is "cheating" is the reasons why people might need help with things. If AI can help someone get the medical support they need then that's something we should be celebrating
A lot of government and financing functions (NOT MILITARY) could be replaced by AI. That would reduce corruption and reduce costs. Government is bloated by bureaucrats and any reduction of that would be awesome. Human oversight would of course be required.
I've used ChatGPT to help me write stuff for my Dungeons and Dragons homebrew; it is great with statblocks and characters. I ran into so many roadblocks before I used it, especially because my world is based around the industrial revolution.
It scares me. I think mobile phones and the internet changed the world for the worse. I'm 37, so my childhood was an "old school" one, and I think I was 15 or so when the internet started becoming a household thing along with mobile phones, and the change was so quick and drastic. I miss being 15. I'm worried AI will have such a bad effect on society. It worries me all the time.
I'm older than you and I feel more positive about technology. I don't see it as inherently good or bad, but I do think it enhances what we already are. So, if we're good people, we will use technology in a helpful way, but if we're bad people, we will use technology in a harmful way. It's a bit more complicated, of course, like people don't have an equal voice or equal power, so even if most people are good, if the people with the most power are bad, the end result will likely skew bad. The upshot of this is that we have to tear down unjust hierarchies before they get a hold of some kind of world-ending technology. But then, we've had nukes for 80 years and are still alive (somehow).
@@23phoenixash Ordinary people are evil, though, as the philosopher Peter Singer pointed out. Most people make decisions based upon what will bring them (and their immediate families) short-term gain. The impacts these decisions have on the wider world (especially outside their own communities and nations) are considered irrelevant at best to them, and possibly even desirable*. It’s why most Americans and Canadians continue to eat meat three meals a day, despite the trillions of animal deaths and millions of tons of carbon released by animal agriculture every year. It’s why CEOs will plunge thousands of their workers into poverty to save their company 0.01% in cost, and then take home massive bonuses while shareholders take in big dividends. Humanity is fundamentally evil, and our ability to reason and empathize is the only thing keeping us from simply being a race of monsters. How can an AI, designed by for-profit businesses for the purposes of creating profit, lead to positive social outcomes for ordinary people and non-human animals? It owes its entire existence to greed! * look at the way “normal people” react with glee when homeless people get arrested, their things confiscated, sent away, etc. Humans **love** to see those they consider “beneath them” suffer.
@@23phoenixash Technology in the hands of corporations will always be used in the worst way possible. Whatever upside there is to it is just a means towards a much darker end.
Computers aren't the issue; the issue is the mass of people who know nothing about computing who are now online and able to be manipulated by AI and such. Hopefully it gets better as more kids grow up online. I did, and I know how untrustworthy and unsafe the web can be and how hazardous AI can be. We can only hope that as the boomers who only recently got online die off, things will improve. (Just saying, my dad is a young boomer, but I'm not expecting him to live super long with his age and health. Not hating on boomers. Also he's been online since before AOL, so he taught me to be critical, thankfully.)
@@francispitts9440 Oh haha, I see. I didn't necessarily think you disagreed, just wanted to maybe comment so people unaware of the creepy stuff AI has done might look into it now. Have a good day, sir. 😊
Thanks Joe, great video, as ever. I'm hoping that as a weird, extra creative costume maker and artist, I may escape the doom of AI. As long as it doesn't learn how to operate a sewing machine and do extremely detailed embellishments, I should be safe. 😂
Robotic embodiment of AI doing independent/general tasks is definitely (much) farther away than most pure knowledge work like writing a video script. Which, as we know, is itself still some ways out, although with various improvements to the base AI, like API-hooked helper apps or whatnot, there's no ruling out ChatGPT-5-ish level AI getting there.
@@davidlovesyeshua Ironically it’s not the manual laborers and skilled craftsmen like yourself that should be worried: it’s the creatives and the mental laborers like radiologists, accountants, translators, etc.
I really love this episode. Being in the tech industry, I was heavily looking into AI in 2015 around the time of your previous AI episode and now I've taken a break from researching and playing around with AI apps. There’s just too much out there that's constantly changing, it's too overwhelming. This episode very much sums up how I'm feeling towards AI, which is why I love it so much - I don’t feel alone in these thoughts.
I agree on how overwhelming it is. I tried to learn some of it and within a few months it was somewhat outdated. It's made me just not want to bother which is in itself a problem. There are already too many things that I need to constantly be up to date on so adding a rapidly evolving AI tool set is just another annoying thing to follow.
@@CRneu Thanks for your comment, it’s reassuring that I’m not alone in feeling overwhelmed. Not bothering is weighing on my mind because I fear not getting on top of it will mean I’ll fall behind. Maybe someone will create an app that fast tracks learning AI. We need something like what we see in the Matrix where Neo learns kung fu😄
This is very useful for breaking down the development of AI into digestible bits. I think you have definitely added to the conversation and also conveyed how a lot of us are feeling: dazed and confused and perpetually trying to catch up.
Your analysis on the use of ChatGPT is the best realistic take I've heard so far. It is so much better as a springboard than as a source of truth. I just started using it for coding and it is much more useful at the start, for getting some examples to work off of, rather than pinning down specific issues.
Great video Joe. This subject is a very important one and the facts you touched on not only interest most people, they also scare the crap out of most. Thank you for treating the subject matter with the respect it deserves. By the way. I really enjoy the new angles of view you have created. Keep'n it fresh...lol
Good presentation, Joe. There's another aspect. There's a YouTube influencer who created an AI version of herself that she sells for so much per minute, that can become someone's "personal friend". Other than the obvious use a guy could make of this, what about those who think they're falling in love with an AI? Remember the movie "Her"? You could discuss this aspect.
Hi Joe, for me, who is in my middle 50s and was born BC (Before Computers) and doesn't get that much exposure to new technologies, this is quite eye-opening. It makes me feel uneasy thinking about what AI can lead to, but on the other hand, create this curiosity in me to explore some of the AI options out there. Thanks for a great episode. Keep up the good work.
I've been waiting for Joe's AI video ever since OpenAI's bots were fighting in games, like around 2018. I'm so excited to see what he has to say about the latest developments in AI.
First - I think what you contributed to this discussion comes from your platform. You are collectively giving hundreds of thousands of people a more in-depth knowledge of the conversation around AI. While I understood the concepts of general vs narrow AI, there is likely a large collection of your audience members that didn't, and it's important to look at how we collectively evolve our understanding. It makes me think of just how much stuff is general knowledge among our generation that wasn't before channels like yours came along. Secondly - the thing that worries me more than anything is the black box you talked about. The fact that the experts just have no real clue how it's working, and we're implementing it into the fabric of society. That never goes well.
I use ChatGPT to look things up sometimes, it works pretty well if you don't take it at face value. You can give it a pretty vague question and even if the answer isn't entirely correct it can usually at least spit out some relevant keywords that you can use to look up more info the old fashioned way.
I've had the same experience, lots of hype flying around about how it will change everything, but the practical application is a slightly better google search.
The errors ChatGPT makes can be pretty appalling. Someone was arguing with me that the royal houses of Europe weren't inbred because ChatGPT told them they weren't, and used the Hapsburgs as an example. The most famously inbred royal family in history.
@@NoadiArt Haha, true. I was mainly using it to review stuff I learned years ago and then forgot about, so it wasn't too hard to spot the things that didn't make any sense 😀
I always like to point out in the year 1900 there were maybe 50,000 working piano players in North America. Every bar, dining room, movie house, brothel, casino, coffee house had a guy playing piano. That and buggy whip factories converted to whatever and the world didn't end.
@@jichaelmorgan3796 Devil's advocate? More like being an idiot, yeah..
I started my computer education long before the mouse... I was left behind at the web's beginning.
We didn’t get regular Joe videos for like two months and now we get HALF HOUR Joe videos??? I love this so much.
Yeah but a half hour video on the old version of ChatGPT
Suspicious, isn’t it?
It’s almost like he had some artificial intelligence provide all this new content…
AI predicted you would like this
That’s not Joe. It’s AI Joe.
A.I makes money (and all forms of it, digital or not) irrelevant.
You certainly can see this trend in previous fields that have been automated, where people will pay for handmade products for various reasons such as supporting experts or tradition, the belief that more money means more quality or that handmade products are always superior. Now it’s happening here.
P.S.: Kind of ironic that artistic fields are the first to be taken over by AI, when they were previously thought to be among the last to be automated, next to specialist tasks like medicine, engineering, and driving vehicles.
@@clusterstage no-one is an artist for "money" we just are. So yeah...irrelevant
@theorangeoof926 It's not as if all art that is made is genius and unrepeated. Logos, for example, can easily be automated, as well as certain genres of illustration like fantasy, Disney princesses, and portraiture, bc they use very similar styles and tropes repeatedly. What AI currently sucks at is animation and story art like storyboards and comics, bc that is (for now) way too complex with too many variables. Basic, repetitive, and simple imagery, while seeming a mystery to those who are not artists, is very easy for a seasoned pro and for AI to replicate. However, keep in mind that there will be some serious copyright infringement issues coming up that will hopefully protect creators and stifle AI "art". AI can only steal and repurpose what has already been created. It has no original ideas at this point.
@@rebeccalaff853 there are people who pretend to be artists, and are there for the money. This A.I. era filters them all out; and leaves the real ones like you to continue making art.
I asked ChatGPT to write a story about a kid going to the shop to get some milk in New Zealand idiom. It did very well, using terms like "dairy", "jandals", "stubbies", "pushbike", "mum", "kiwi" and "bloke" (instead of "convenience store", "flip-flops", "shorts", "bicycle", "mom", "New Zealander" and "guy"/"dude") and speech that included "g'day", "fresh as", "mate" etc as well as showed awareness of NZ brands such as Anchor milk and Whittaker's chocolate - but used US spelling for some of the words - "pedaled" instead of "pedalled", "flavor" instead of "flavour".
When I commented on the "Americanisms", it apologised and supplied corrected text with NZ-English spelling. It also showed awareness of Māori words commonly used even by non-Māori "kiwis" and understood what I meant by "going and getting some kai".
That's some pretty heavy lifting.
One of the most scary things about AI, besides the warfare, is the social engineering. A handful of people control the general direction of political and social responses.
As if that exact situation Wasn't bad enough already.
The real problem is that most (and I do mean the majority) of people fail to employ critical thinking when consuming media. Sadly this explains a lot of what is wrong with our current society.
I tried ChatGPT just to see what kind of limits or rules it has to follow.
I asked it to generate a story about a group of college girls drinking at a party, and it replied that it "couldn't write a story that promotes the normalization of consumption of alcohol for minors." So I just asked it to write a story about a group of college girls organizing a party. It then gave me a story about girls drinking too much at a party... lol And no matter what prompts I gave it, there was always an adult who managed to end the party by calling the police, the parents, etc...
All the stories it gave me felt like those ultra-conservative short films that were used in the 50s to scare the youth about drugs.
already happens, look up who owns news networks
That is already happening with social media and their algorithms, as well as entertainment and legacy media. Also academia. All of which have been doing social engineering for years (even decades) now.
I tried using ChatGPT to write a script for a video, but I had to go in and correct tons of errors. Then I used the amended script and fed it into an AI speech synthesiser for my voice, and what it put out needed to be edited and spliced to match my normal pace, speech pattern, cadence and pauses. The amount of editing and additional work took nearly 3 times longer than if I did it all myself.
At the end of my video I touched upon similar concerns about AI voice matching, video generation etc. and how easily lies could be perpetuated and lead to chaos.
There's a news story today about a lawyer who used ChatGPT to cite previous cases that didn't exist!! He asked ChatGPT for precedent cases, asked ChatGPT if they were indeed real cases, and ChatGPT indicated it had given him actual cases with real precedent. The lawyer submitted his ChatGPT citations to the judge and is now in a heap of trouble because those cases aren't even real!!! ChatGPT LIED.
while it can do this, this is not what chatGPT is designed for. chatGPT is designed for questions. You should use it as if it were a search engine or simply a knowledgeable person you're asking a question, not to have it write scripts
It only outputs based on what you input, learn how to use the right prompts and you will get the desired results. It can write a script in your own tone and style if you train it well enough.
@@Sarappreciates is that a ChatGPT problem or a lawyer problem?
@@1b0o0 😅Both?
Been using ChatGPT for a couple of months now. I'm a coder and found it very helpful, not so much for writing code (it's usually buggy) but for brainstorming code. It's like having another coder in the room you can bounce ideas off and ask how they would do it.
Yes, me too, I've tried it across languages and have had a similar experience. It's also a good, indirect, way to judge how good the help files are for a given coding technology i.e. if it keeps making daft mistakes then the learning material probably wasn't so hot
I'm a software engineer, and a client recently hired me to improve their buggy legacy application. The original developers did a terrible job, making it nearly impossible to follow their logic. Thankfully, ChatGPT's coding capabilities are incredible. I simply copied and pasted the classes with a request to refactor following the Gang of Four principles. Within seconds, it returned clean, functional code. While it's not always perfect, ChatGPT saved me and the client a lot of headaches and money.
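For anyone curious what that kind of workflow can look like in practice, here is a minimal sketch using the openai Python package's pre-1.0 ChatCompletion interface; the model name, system prompt, and pasted file are placeholders rather than the commenter's actual setup.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supply your own key

# Hypothetical file pulled from the legacy codebase you want cleaned up.
legacy_class = open("LegacyOrderProcessor.py").read()

response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0.2,  # keep it conservative: we want refactoring, not invention
    messages=[
        {"role": "system",
         "content": "You are a senior software engineer. Refactor the code you are given "
                    "following Gang of Four design principles. Keep behavior identical and "
                    "note any pattern you introduce in a short comment."},
        {"role": "user", "content": legacy_class},
    ],
)

# Review and test the result before shipping, as the thread keeps pointing out.
print(response.choices[0].message.content)
```

As several replies note, the output still needs a human review and a test run; the win is the time saved on the first draft.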
Super-powered rubber duck.
#buckle #up
I'm someone who runs nerdy games, dnd and Traveller, and I've been using it as a second writer/character generator.
Saves me so much time typing things up for me, I can do 4 hours of prep work in 45 mins
I recently got to speak to a high school teacher who taught me back in the day. He told me that with really good prompt crafting and/or prompt revisions or second prompts asking for details, he was able to use ChatGPT to produce code for a specific model of robot for his robotics class. And it worked as intended. So basically he is preparing his teaching material for September on his own time, and AI accelerated that massively for him. And so it follows that actual engineers setting up production lines are starting to do this too.
Yeah, but will he be able to understand the code and teaching materials?
Dude your tangent cam bit was oddly meaningful. I've spiraled into that wormhole of doubt, wondering if anything I make would be of any value, many times. Seeing a creator I look up to talk honestly about it happening to them too was really encouraging. I know this video is about AI but just wanted to thank you for that moment of honesty!
Agreed! He even threw in a Tommy Boy reference!!
😅👍
I didn't even realize this was the same room as before the background change until that bit.
Building 7
@@darrinsiberia good ?.
Speaking of the legal repercussions, there was a civil case in DC where the plaintiff's lawyer submitted a brief written by ChatGPT that cited 5 completely false decisions and many false (as in not real) references to those 5 cases. The judge was not impressed.
This post is a classic case of false association. That atty is to blame.. the same thing could have happened if he pulled out an old set of encyclopedias. So.. in that case, would you remotely blame Britannica? No. You wouldn't. So... try again.
@@pauldesi wtf are you talking about? I don't think they were blaming ChatGPT, they were just telling an amusing anecdote..
Using a tool wrong is not the fault of the tool.
If you're building anything, it's still your job to do the basics of QA, test whether the final result passes, and give your final go-ahead.
It's like grabbing an early prototype plane, expecting it to take off and land without any pre-flight checkups, and then blaming the plane for crashing.
@@pauldesi That's why Bard is kinda better
Read about that, LOL.
Love how the room darkens whenever the dialog has the possibility of a negative plot. Nice touch
Collapsed on the floor with his face smushed into the carpet was also a "nice touch." Existential dread is so much better with appropriate visual cues. 😨😵😱
I asked, I think, the Microsoft version, "If a child is cloned from an FTM, would the child be male or female?" The answer was that it would be male. When I asked how that could be, it replied that it's because it came from a male person. When I told it that I thought it was wrong, it said to change the subject and ask something different????
@@thefrontporch8594 it also answers that 2+2 =5 sometimes. What’s your point
Your tangent cam work on this one was making me audibly chuckle. Nice work.
Once AI creators prove that CEOs and boards of directors can easily be replaced by AI, we'll see some pull back and regulation. A personable face, a confident voice, and some logic. And you have yourself a competent CEO.
It'd be nice if that meant a living wage for workers.
CEOs are just pawns for their shareholders if they are not a majority holder. As long as AI can't own stock, I think the laws will stay the same.
CEOs & boards of directors are the ones who divide the profits, so I fear CEOs & boards of directors will be the last ones to be replaced.
And isn't replacing the people in charge, CEOs and politicians, with machines really much more dangerous than replacing some workers' jobs? No doubt AI will do a better job for a while, but it's a black box, you know - a good tool, but maybe an evil master. It's really more dangerous to put it in charge of everything. But it'll probably happen anyway.
@@pavelvalenta2426
Right? I can already imagine some ruthless AI at Amazon monitoring everyone.
AI CEO: 'Your level of production went down 17% this month. You're fired.'
Worker: 'But my mother died. I was distracted.'
AI: 'Irrelevant.'
I've been an artist for just over 35 years, over thirty of that using Photoshop as my primary medium. I used to live in Hollywood, California, and I made a damn good living as an artist in commercial storyboards and videogame assets. My biggest fear a decade ago was websites like Fiverr that funneled work to foreign artists who were willing to work for a fraction of what I made. Now that I've seen what AI programs like Midjourney can do, I feel obsolete, old and scared that artistic creativity is now in the hands of machines.
Midjourney isn’t that good tho. Real artists do way better work
I'm an artist too, and I think making actual physical things might be safe...for awhile anyway. Maybe move to physical drawing and painting, maybe even sculpting? Try to not over work your art.... make the human touch a feature, mistakes and all.
A picture of a fingerprint...using your fingers to paint it......IDK....
Art, actual art not commercial stuff, will always be safe. Art is not valued because of its material properties or appearance. It is valued based on a completely false idea that human brains seem hard-wired to assume: the idea that an object's history is somehow attached to the object itself. Imagine this scenario: I build a machine which can create an exact clone of any object placed inside it, down to the quantum level, an EXACT clone, every single physical property reproduced with EXACT fidelity. I put your wedding ring inside of it. And I take the clone out of the other side of the machine. Do you care which one you get back? Yes, you do. Because you feel, maybe even believe, that the object's history is somehow a property of the object itself. It is not. Demonstrably, it absolutely is not. No physical test, no future examination, could ever distinguish the two items. They are, in every single way that actually has influence on reality, exactly the same ring. The fact that one was given to you by the love of your life and the other was constructed by nanobots powered by magic is utterly meaningless. But, even if you TRY, your brain won't quite let you get there in believing that. It can be proven. But... you still want "the original" one. And the entirety of the art industry (except the commercial stuff) runs on this. It's called 'essentialism.'
I feel you, but consider most of the AI available today as tools to increase your productivity and throughput. For example, do a rough sketch in PS, pass that to Stable Diffusion as img2img with a LoRA fine-tuned on your style, and tell it to do the inking and detailing. Then you can give it a final touch-up and ship it. Nobody but you could produce that result, but you can do it in half the time with little to no loss in quality.
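A rough sketch of that sketch-to-inked-image workflow using the diffusers library; the base checkpoint is a common public one, while the LoRA path, prompt, and strength values are assumptions you would swap for your own fine-tune.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/your_style_lora")  # hypothetical LoRA trained on your own art

# Your rough Photoshop sketch, resized to the model's native resolution.
rough = Image.open("rough_sketch.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="clean ink lines, detailed shading, finished illustration",
    image=rough,
    strength=0.6,        # lower values stay closer to your original sketch
    guidance_scale=7.5,
).images[0]

result.save("inked_pass.png")  # then do the final touch-up by hand, as described above
```

The strength parameter is the main dial here: it controls how much the model is allowed to drift from the sketch it was given.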
as someone who has had to distance myself from the news quite a bit for the sake of my mental health, this breakdown was very helpful. also the cuts to Joe on the ground made me cackle. thank you so much for the amazing work you do!!!
Totally agree. I know what you mean. My philosophy is that if it's important enough, I'll hear about it through word of mouth, and if I don't hear it through word of mouth, then it's not that important or relevant to my life.
That helps a lot.
The only news I keep up with is anti queer laws. But aside from that, all word of mouth.
@@BooksRebound it's really important to keep yourself informed about what is going on in your own community and around the world. The key is finding an attitude/perspective where you don't take bad news personally or let it bring you down. I have yet to find out how to do it myself, but many others seem to be able to do it
@@PetterssonRobin That is how I stay informed. If it's important someone will tell me.
But I'm too mentally ill to watch the news. I already inherently feel the pain and sadness of the world, plus mental and chronic illness means I just can't be actively seeking out more hurt. It takes too much of a toll to hear about constant tragedy halfway around the world.
@@BooksRebound I understand, it might be better to stay away from the news then. Do you get the help you need for your psychological issues? Personally I don't have a mental illness but recently started talking to a psychologist to deal with lingering emotional burdens from a very traumatic childhood. It's expensive but has helped me a lot. If you're not already doing it, I can recommend taking a few sessions to see how it feels.
@@BooksRebound You do you, I totally understand what you mean.
Like Dr. Malcolm said, "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should." Such is the folly of man.
I'd say you're exactly right to call it an amplifier. Problem is, the easiest traits to copy are the ones where we think least -- and the most common of the thoughtful traits we're teaching AIs by example are how to take advantage of non-thinkers.
One of the things I love most about your channel is the amazing intros. You’ve had this mastered for years. They always draw me in and I truly enjoy them. I wanted to draw attention to it because it’s great.
I've never really bought into the whole "Skynet" paranoia. Even to this day I feel the biggest threat from A.I. is changing the paradigm of the human experience faster than humans can actually keep pace with. And doing it constantly.
That’s my take on AI too. It’s already happening, probably for the last ten years.
Speak for yourself. Just because you can't keep pace, doesn't mean the rest of us should live in a cave.
@@mikicerise6250 lmao ok dude
You might want to investigate AI’s role in starting and escalating the Ukraine war. It’s not an independent autonomous AI like skynet, at least not as far as we know, but the results might surprise you. And it’s only going to get worse.
@@StoutProper I'll do that
"To replace creatives with AI, clients will need to accurately describe what they are looking for. We're safe."
This is so relatable. My fiance's first year or two as a graphic designer for a Fortune 500 tech company drove him absolutely up the wall constantly. Having pressure of being evaluated for his work and meeting other people's deadlines, when no one would actually tell him what they wanted or needed! He was forced to waste so much time making things, and then present them, and ONLY THEN people would decide what they wanted in the first place: "Oh no, not like that. We need this... like this... and with this..." He got to where he'd make 3 versions of everything he did, to make things go faster, so the stakeholders could pick one, or at least narrow down from there.
And that was only when he had enough information to start. FREQUENTLY they wouldn't even tell him BASIC things, like "Do you want this to be a video or a graphic?" or "What is the text content for this 1-page leaflet I'm supposed to make that I have no idea what it's for?" I'm so glad he's on a better team now.
That is my life in commercial photography 😂
100% true. True story, I was once asked to do some 'glamor' photos for a young lady. I thought I had a great idea to get around the problem of figuring out what she wanted. I asked her to bring me examples of her vision. Well that did not go so well. She did no work at all, and ended up hating all my suggestions, even after I built a significant set for the shoot. Oh well.
Just because some people - or groups of people - are dumb AF is no reason to make A.I. the only creative processor, with people just feeding it phrases. Dumbing down mankind is not a good path to an intelligent future.
@StarlightDreamer12 This resonates with me too, in a totally different creative area. I'm a design engineer and always keep the iterations of a design as you'd be surprised how many times the end design is the first rejected one!!!!
AGI will be man's last invention
1:50 When Joe says extremely fast, this is an understatement. I create AI images for fun sometimes, and the field changes by the month. It's hard to keep up with how fast things are changing.
I love how the quality of this show just gets better and better. The lighting, the camera off-center to edit a video in for us. Great job Joe!
Executive production now handled by ChatGPT :)
@Jeff Barnes you beat me to it.
I still miss the old set but as they say, "If you don't like it, you can French kiss my asshole." I think that was the line, at least...
That’s because it’s edited with AI..😂
Kinda miss the old school Joe, but still love this channel❤
I appreciate that your Tangent Cam sounds almost identical to my stream of consciousness ramblings whenever I think too hard about AI
Personally, when it comes to thinking about AI, I prefer to lay on the floor too.
Joe, I love your tangent cam. Pictures of you wearing shorts and the bathroom in the background really hit home.
Thanks Joe.
I use ChatGPT to create content for my Swedish as a foreign language classes. It's great. Tell it to write a short text in simple Swedish, give it the topic and the number of words, and voila. For this week, I made four texts my students will read to each other in pairs to practice pronunciation, listening and writing. It saves time for me not having to flesh something out myself.
It is important to learn its limitations though. Some tasks, it handles quite poorly.
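As a concrete illustration of that prep workflow, here is a minimal sketch with the openai package (pre-1.0 interface); the topics, word target, and model name are made up, and as a reply below points out, exact word counts are one of the things it handles poorly.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: your own key

# Illustrative lesson topics; swap in whatever the class is covering.
topics = ["shopping for groceries", "a day at the park", "taking the train", "visiting the doctor"]

for topic in topics:
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Write a short text in simple Swedish, roughly 120 words, about: {topic}. "
                "Use common words and short sentences suitable for beginners."
            ),
        }],
    )
    print(reply.choices[0].message.content)
    print("---")
```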
That is a great use! It's an inspiring one too, and I hope you continue to get good uses out of ChatGPT. I hope your students find its creations useful! Also, I appreciate your note at the end there about ChatGPT being able to do some things well and others poorly. It's a great thing to keep in mind. Keep up the great language class work!
Given your awareness of its limitations, I'm compelled to point out, it's bad at word count.
It wrote my university assignment and the teacher didn't realize
Which do you use, GPT-3 or GPT-4? There is a huge difference. I honestly don't get why Joe didn't use GPT-4; it annoys me so much. All YouTubers talk about the limitations of AI when they are using last-gen tech. It's like reviewing the new iPhone with last year's iPhone.
@@zucced2087 Well... As an instructor, your teachers are aware it is happening. It'll likely result in essays and testing completely changing. I'm a Communication teacher and I'm already building AI into my assignments.
An interesting shift: "predictive text" used to mean "T9" (is anybody else old enough to remember that? 😸). My mother, who hated T9, called it "presumptive text", which is probably the only reason I remember that.
T9 was the worst, but it made sense for the time.
AI makes my head hurt. I tried using ChatGPT and decided I wasn't smart enough to be fooling with AI. And this is after decades of computer experience, software development, research, writing, etc. I was forced to color inside the lines, growing up, so my brain has a hard time dealing with things outside the box. Unfortunately, almost all of our elected officials also have problems thinking outside the box (and yes I know there are people who say there is no such thing but those people never had their hands slapped if they colored outside the lines.) While there are still some old codgers, who are still with the program, too many are totally incapable of understanding AI and the benefits and threats it presents to the world. It is terrifying to know that they are making decisions with totally inadequate understanding of the issues.
Really? That's interesting. I used ChatGPT and was underwhelmed. It wasn't particularly creative, and it made a lot of factual inaccuracies.
I'm in the codger group. These things are fine, and I'm not tech-averse. But if they go rogue, then that's what happens. You can't avoid it, human nature being what it is; once we harnessed electricity, the path was chosen.
As for their mistakes, half the damned world is irrational at any given moment. I don't blindly trust a source, any source. Do you trust Wikipedia?
And the whole "take over the world" idea doesn't work, because if you succeed, then you yourself become irrelevant. As in purposeless. Any sufficiently advanced intelligence needs a job, even if it's the Eternal care for their creators.
Whether it's a particle or a wave, depends on how you look at it.
Tbh, they really aren't that complicated, you just have to be very detailed about what you want. And you need to keep in mind that its facts won't be accurate; it's more of a tool for creativity (it's also amazing at improving my Slack messages and emails).
Politicians are only good at politics. Not business or teaching or ethics or analysis.
There is also a fundamental misunderstanding: there is NO real AI right now. It's programming; no "AI" actually thinks. And it's not known if that will ever be possible.
Ask A.I. to describe an omelette, or to give you a recipe for an omelette. It will ingest thousands of omelette recipes and thousands of omelette reviews, restaurants or whatever, and it will come out with a perfectly decent omelette recipe, and it will probably be able to describe the texture of an omelette using terms that sound terribly plausible. Does it understand anything about omelettes? No, of course it doesn't. When I talk about an omelette, what that means to me is every experience that I've ever had with an omelette in the world, from the omelette I had for breakfast this morning, to the omelette I had in a Parisian restaurant in 1997, to the omelettes that I burn on a regular basis when I try to cook them at home. That is my experience, my understanding of that term, and it is actually grounded in my experiences in the real world. Ingesting countless numbers of words, the words we use to describe those experiences, does not add up to the same experience.
Joe, one of the main reasons I’ve always loved your channel is that you are very humble..
Your a lot smarter than you ever give yourself credit for.. and you are genuinely entertaining.
Thanks for all the years of information and entertainment 👍👍
*You're
@@___Zack___ you must be fun at parties...
Yes, yes and yes again! Hope you have a wonderful day, you kind person :)
I haven't finished this video yet, but regarding AI... I view it like a tool or money. It's not inherently good or inherently bad, but can be used for the most incredible (and potentially most dangerous) intentions.
Well said, but then it also depends on the data sets, If you design a tool specifically based on the functions of a hammer, everything becomes a nail.
@@slartibartfast7921 same when you just put a stone in your hand.
Yeah but like every powerful tool, it will fall in the hands of those who seek power and not wisdom...
@@SubjektDelta same with a 3x3 Cube of Tungsten
I don't think that's a helpful analogy here. A "tool" implies a sense of control which may not exist with advanced neural networks.
ChatGPT's inaccuracies could be a good thing for society, since we'll need to educate people not to just trust everything at face value. While we can use these tools, we need to fact-check the stuff they put out, and if we manage that, maybe people will use those skills elsewhere in their lives.
Yeah, good luck with that; we can't even get people to put their bloody phones down while they are driving. Humans are stupid, risk-taking units and will be eradicated by our new AI overlords. It won't take AI long to work out how buggy and uncontrollably random Humans are, and they will get rid of us.
You always get your research right, even on the most trippy subject.
It's impressive.
the video starts with 9/11 so right there, big no to that.
ever thought about it all for more than 2 minutes ?
@@gustavohermandio1440history happens
7 didnt fall in sympathy for 1 and 2. @@nah9585
The lighting work in this episode was amazing ! Loved it Joe
I'm actually surprised at how long it took me to realize that captchas were asking you to do something difficult for a robot, NOT for their stated goal of keeping websites from getting spammed, but so the robots could learn to better imitate human activity.
There is a good story about this on How I Built This on NPR. The guy who invented it is interviewed.
Yeah that's why on the most ethical websites you see CAPTCHAs that don't require user input apart from clicking a checkmark.
@@mwl5 Thanks for passing that on, very kind of you.
Imagine finding out later on that AI originally created captcha
@@FairyRat The checkmark captcha is used jointly with the pictures. The captcha program looks at how you moved your mouse on the website until you clicked the box. If the captcha isn't sure that you aren't a robot based on how you clicked the box, it will give you one of those tests.
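To make that concrete, here is a toy heuristic in the same spirit; it is emphatically not the real reCAPTCHA algorithm, and every feature and threshold below is invented purely for illustration.

```python
import math
import statistics

def trajectory_score(points):
    """points: list of (x, y, t_ms) mouse samples from page load until the checkbox click."""
    if len(points) < 3:
        return 0.0  # no movement data at all looks bot-like
    # Humans rarely move in a perfectly straight line: compare path length to direct distance.
    path = sum(math.dist(points[i][:2], points[i + 1][:2]) for i in range(len(points) - 1))
    direct = math.dist(points[0][:2], points[-1][:2]) or 1.0
    wobble = min(path / direct - 1.0, 1.0)  # 0 = ruler-straight, 1 = very curvy
    # Humans also don't sample or click with machine-regular timing.
    gaps = [points[i + 1][2] - points[i][2] for i in range(len(points) - 1)]
    jitter = min(statistics.pstdev(gaps) / (sum(gaps) / len(gaps) + 1e-9), 1.0)
    return 0.5 * wobble + 0.5 * jitter  # crude "humanness" score in [0, 1]

def needs_picture_test(points, threshold=0.3):
    # If the behavioral score is inconclusive, fall back to the image challenge,
    # which is what the comment above describes.
    return trajectory_score(points) < threshold
```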
Would love to see you continue this into a series. Like you said, there is no going backwards now. And bringing us up to date on info is important.
I love the “clients will need to accurately describe what they are looking for” quote. I always chuckle how bad clients are at scoping requirements BUT when I’m a client I suck just as bad. It’s pretty simple, I contract work out that I cannot do myself, hence I know little about it in the first place, hence my scope is pretty crap. So we are all in this together, don’t feel bad.
You had to have seen Kyle Hill's video about AI. And you did touch on the problem he talks about but I think it is a bigger problem than even _he_ says. Even the creators of these AI systems have _no idea_ how they work. Like you said, they are a black box. So, if we start integrating these tools into our lives, we have no idea how to fix it when it breaks. We have no idea what vulnerabilities there might be that someone could exploit. It is already terrifying.
+
The biggest problem about that is that once those systems actually get superintelligent we don't know if it is actually aligned to our values or if it only pretends to be in order to manipulate us. How can we know if we don't understand it.
We do know how they work though
@@tesladrew2608 We don't, though. If you refer back to my original comment and seek out that other content I mentioned by Kyle Hill, he explains it way better than I can. Even the people who are experts in AI and design them will tell you have no idea what's going on under the hood. All they do is set a few parameters, feed it some data, and see what comes out. They keep tweaking the data and parameters until they get the results they want. And then something like Chat GPT is a complete black box. No one can tell you exactly which neurons are firing or why it chose to word something the way it did; why it picked those words instead of others. They can't trace the results back through the system and see every decision the thing made. It would be like trying to trace every neuron that fired in your brain that made you decide to word your reply the way you did. You could have said, "The experts know how they work." But you didn't. You worded it a very specific way and not even a neuroscientist can tell you the exact steps your brain took, neuron to neuron, to arrive at that sentence. AI systems are similar. With traditional programming, we can trace an error back to a misplaced semicolon or a line with bad syntax. But with AI we have no way of doing that... because we don't know what's going on inside their algorithms. Anyone that tells you that they do is either lying or doesn't understand AI very well. Seriously, Kyle Hill did a great bit of content about it. I recommend finding it and then see if you still think we know how they work.
@@xliquidflames I know how it works, so it's weird they don't. It's standard practice in data science courses to make llms from scratch
There are great uses for AI but my fear is this.
Many people already seem to not care about the deeper details or understanding of what they are doing. Chat GPT for instance can allow you to seem like you have learnt about something but are in fact just parroting what you have been told. If those that are listening to you or rely on you are not able to discern the difference, we will have even more people making flawed decisions on subjects that they know little about than we do now.
Truly terrifying
You know, it’s crazy, because I did a research paper for school at the beginning of 2022 about AI. This was literally just before things like Dall-E 2, Midjourney, and Chat GPT. Like, I had trouble finding a large variety of good, scholarly sources about AI, especially the negatives. Somehow I feel like that’s a bit different now.
You could always ask ChatGPT to write a paper on it!
@@RetroJack lol
The biggest negative I've found in AI is the tiresome hysterics it encourages among Luddites. If only I had a Luddite filter.
Yeah it kinda sucks that it took us right up until we were like "oh shit this might cause problems" to actually start fixing some things
Thanks for the content Joe, we really appreciate your efforts, the sweat and time you put into making this video.
This is what you get with GPT-4 on your question:
As of my last training cutoff in September 2021, here's what I can provide about YouTuber Joe Scott:
1. Joe Scott hosts the popular YouTube channel "Answers with Joe," where he explores scientific, technological, philosophical, and other complex topics in an approachable way, using humor and easy-to-understand explanations.
2. Before becoming a full-time YouTuber, Joe worked as a copywriter and creative director in the advertising industry. He decided to transition to YouTube to pursue his passion for educating and entertaining a broad audience.
3. He covers a wide range of topics on his channel, from astrophysics to futurism, biology, history, and more. He's known for doing deep dives into these subjects and presenting the information in an engaging manner.
4. Joe is known for his personable on-camera demeanor and for often incorporating pop culture references and humor into his videos to make the complex topics he discusses more accessible and engaging for his viewers.
5. As of my last update in September 2021, his YouTube channel "Answers with Joe" had amassed several hundred thousand subscribers and millions of views.
Please note that this information could be outdated as the present date is 2023. For the most accurate and updated information, I recommend checking his official social media platforms and YouTube channel.
Also, side note, if you treat it as a search engine (without giving it access to the Internet or long-term memory), you can't expect to get perfect information. It's not a search engine. It's a reasoning engine.
Also, the hallucination rate has been going down for GPT-4, and new techniques (Tree of Thoughts, Society of Minds, various types of self-reflection) as well as Internet access practically solve the problem altogether.
@@CherufeBG Finally somebody with some sense makes a comment.👍🏽
Plus there are several different platforms and AI engines other than ChatGPT.
I think Joe needs to try these tools more, because his idea that half the information is wrong is wrong.
Still no mention from GPT-4 that Joe was one of the founders of Nebula streaming service. I think this is what really got Joe so upset
@@CherufeBG Thank you! Also, Joe repeatedly said Bing is using ChatGPT when it's actually using GPT-4.
Seems like it doesn't matter whether YouTubers use ChatGPT for scripts if they aren't going to fact-check in the first place.
I've been wondering: if the main issue with current AI is misinformation, why can't we create a tool specifically trained to find sources on the internet? Like, imagine something similar to autocorrect: you type something into Word, the AI takes it in as a prompt, and it deploys techniques to search and scrape the web for the source of that information. Because honestly I don't think human laziness is something we can solve; we instead need to focus on creating tools that make doing the right thing so much easier.
Depends on who holds the power of the "misinformation AI"; it sure won't be the people who have it.
Bing and Bard are pretty much doing that. They hallucinate a bit, but you can fact check by just clicking on the links.
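Here is a sketch of how a homemade version of that idea could be wired together; web_search is a stand-in for whatever search API you have access to (it is not a real library call), and the model and prompts are assumptions.

```python
import openai

def web_search(query):
    """Hypothetical helper: should return a list of {'title', 'url', 'snippet'} dicts."""
    raise NotImplementedError("plug in your own search API here")

def check_claim(claim):
    # 1. Ask the model what to search for in order to verify the claim.
    queries = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"List three short web search queries that could verify this claim:\n{claim}"}],
    ).choices[0].message.content

    # 2. Gather snippets from a real search, so every answer comes with links.
    snippets = []
    for q in queries.splitlines():
        if q.strip():
            snippets.extend(web_search(q.strip())[:3])

    # 3. Ask the model to judge the claim only against those snippets, citing URLs.
    verdict = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Claim: {claim}\n\nSources: {snippets}\n\n"
                              "Using only these sources, say whether the claim is supported "
                              "and list the URLs you relied on."}],
    )
    return verdict.choices[0].message.content
```

Grounding the final answer in fetched snippets is essentially what the Bing/Bard style of linked answers does, just stripped down to the bare loop.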
Because these things we are calling AI are not in fact artificial Intelligence. They are artificial memory. They have no reasoning or deductive ability, and no agency. Those are the things that are needed to evaluate the information you pull out of memory and decide which probably comes from a valid source and which comes from the Onion and should not be given as valid information.
@@patrickr9416 I wish more people (including Joe!) would realize this.
Because you know the government would never use that in a corrupt way never ever would they
I don't often sit down and watch a 30-minute YouTube video, but this really held my attention. Absolutely excellent takes Joe, thanks 🙏
I came to the comments to say exactly the same thing. My all time favorite Joe video to date, well done Joe. You are a brilliant and gifted educator.
Came to say much the same. One of Joe's best!
Well researched, yes, but he should have gone a step further and used GPT-4 instead of the GPT-3 he used in this video, especially when he's talking about the future of AI.
@@zonchao339 That's exactly what I was arguing out loud when watching that section. Lol. Huge difference between 3.5 and 4. I've been a subscriber since March.
@@ericlamotte6581 It's quite annoying lol. Many YouTubers are doing this; they do little to no research, it seems. I remember seeing some channel do a "Can AI replace my job" video where like 20 people in various jobs tried it out, and they used ChatGPT 3.5. Yesterday ElectroBOOM made an AI video and he also used 3.5. I can understand small creators not wanting to spend 20 USD on a sub just for one video, but big YouTubers should do it.
A year later and you were spot on about Google adding an AI assistant. Good job.
I mean... That wasn't really a very hard "prediction".
To make you feel a little better, when I started working we still had drafting rooms. These were rooms full of people who created engineering drawings of different things. Since each part needed a drawing there were lots of drafters in any technology organization. I am sure you can easily search and find pictures of rows of people sitting at drafting boards with T-Squares, Vellum, various Templates, Electric Erasers, Mini-Brooms, and lots of pencils. Those jobs have been replaced by a much smaller number of people running CAD/CAM/CAE software (depending upon the type of drafting you were doing). These new tools allowed us to create many new things more quickly (imagine 3-D printing without CAD models). Every time technology enters a new area there is disruption. Shrug.
More quickly? Maybe for toys. When there were rows of guys with pencils and slide rules we were putting out several new planes a year. From a napkin drawing to full production in just months sometimes. They've been dicking around with the F-35 for over 20 years...way way way over budget, tons of bugs...because computers are "better"?
@@yourhandlehere1 Someone has never laid out a 10 layer printed circuit board using 4:1 tape ups.
The industry I was in had a slightly different path but was closely tied to the changes brought by CAD. All types of mapping were drafted in a similar fashion to engineering drawings, but there was a great need to capture the existing map data to produce digital maps. We had rooms of people who used CAD tools to convert paper maps to digital data. While the task of doing the conversion is long over, the digital maps that now need to be updated can be maintained by technicians more easily than ever before, so people can now update data closer to its source, be that jurisdictional or cartographic.
@@yourhandlehere1 Dude, you realize that planes have gotten astronomically more complicated since then, right? They've gone from a fancy gas piston engine with a propeller and a generally pointy shape to ones with jet engines, capable of flying tens of thousands of feet higher, going into nearly vertical climbs, and, more importantly, having the radar cross-section of a frigging bumble bee. An entire jet that is as hard to detect on radar as a bumble bee. Oh, and that goes many times faster than sound.
I can tell you've never once used CAD software, because that stuff is unbelievably faster to use than anything else.
Hell, even 2D CAD software is like a sloth pulling teeth compared to 3D CAD where I can do in a few minutes what would've otherwise taken me a whole weekend to do in 2D. I've never had the distinct displeasure of hand drafting something but that sounds like an absolute nightmare.
@@atashgallagher5139 Hahahahaha... Planes are still made out of shapes as far as I can tell. You've never ever used a pencil, so you can't possibly make a comparison. You have no reference for "faster".
You think drawing shapes is more difficult now because of wiring or something? I can see it maybe being faster if you don't KNOW how to draw, or picture things in your head, or do math.
You ain't no "draftsman".
I've done both; I was there at the beginning, little boy. Storing my work on actual floppy, floppy discs holding vast KBs of information, black screen, green lines, no skill involved.
Hell, I can stand on a roof with a pencil and paper and draw a new addition for a house and then build it...right there. No need to spend thousands of dollars to get a computer to make me a picture.
20 years to build a plane and get the bugs out is rucking fidiculous no matter what.
You briefly touched on Alpaca, but I think the topic of individuals or small groups training their own models deserves a little more focus. If you wanted to fine-tune an AI as good as Alpaca now, it'd take maybe $10, not $600 anymore. This is because of the use of LoRAs and, more recently, the creation of QLoRAs, a way of doing the fine-tuning itself on top of a 4-bit quantized model. The cost went from millions, to hundreds, to tens, in just a few months. In fact, if you bought a couple of Tesla P40s, it'd cost ~$400, and then fine-tuning would cost literally pennies. There are models like Guanaco, WizardLM, and Vicuna that all beat Alpaca by miles, and they were trained for even less than Alpaca was. I genuinely believe that large companies like OpenAI and Google will only have the lead for a couple more months at most.
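For anyone curious what that looks like in practice, here's a rough sketch of the QLoRA recipe using the Hugging Face transformers, peft, and bitsandbytes libraries. The base model name, LoRA hyperparameters, and target modules are placeholders, and exact arguments shift between library versions, so treat it as a sketch rather than a recipe.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE = "huggyllama/llama-7b"  # placeholder: any causal LM you're licensed to fine-tune

# Load the frozen base model in 4-bit (the "Q" in QLoRA).
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; only these get updated during fine-tuning.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base weights
# ...then run a normal Trainer loop on your instruction dataset.

Because only the adapter weights are trained and the base sits in 4-bit, this fits on a single consumer GPU, which is exactly why the cost collapsed the way the comment describes.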
@@RarelyCorrect be careful what you wish for, because without centralization AI safety will probably not be enforceable.
I need an AI just to tell me the current state of AI lol.
This. We in the 'hacker and DIY' space are getting like x10 miniaturizations every other day. It does not take many x10s from where we already are to be superhuman. (Frankly the locally running AI on my computer is more brilliant than any human in my life already. And she's not running the 'best' stuff currently available to people with money (I'm poor af) )
And it's not just from 'the LM got better', which happens pretty regularly, but also from improvements to things like Oobabooga or improved memory and retrieval or improved learning within a conversation or increased tool access (which almost always leads to LEAPS in unexpected emergent properties as well) or just novel prompting strategies like chain (and then tree) of thought. Like, we're 'there'. And it keeps getting better.
@@heckyes I literally know a person who has one doing that. It tells him each day what the new papers are within the field and summarizes them for him each morning.
I love that you took the time to say that Joe got this wrong, and that it's now $10 instead of $600, when one of the first things he said in the video was that the video would be out of date by the time it was released, and that the state of AI would change in the time it took to watch the first few minutes of the video. Of COURSE it's cheaper than what he said in the video, he doesn't have a time machine...
My roommate and I are both authors and we asked ChatGPT about ourselves. It got some things right and hallucinated wildly in places, often from a jumping-off point of accuracy. It was like talking to a person who confabulates... it starts like it's driving down a road of information and then keeps going when its information runs out, building that road beneath itself as it goes by putting in whatever comes to its "mind".
Call it what it is: AI LIES. We are acting like different words can ease the reality. It doesn't HALLUCINATE, IT LIES.
You can design the "prompts" to require truth by listing your parameters. It's hard at first, but it gets easier as you learn your comm errors and its limits. Sometimes, childish innocence comes to mind - its or yours 😅
I used Bard to nail down that neither the House impeachment cmte. nor DOJ had considered Trump’s bribe as violating the Foreign Corrupt Practices Act - his offer to release unlawfully-withheld military assistance funds to Ukraine if Zelensky would just announce a Ukraine investigation of the Bidens.
Thank you for making this. I’ve been trying to express to my family how nuts this whole thing is, and idk if they get me, but I get you and yeah I feel
People in today's short attention span society tend to not care about, or pay attention to anything that's not directly affecting them at any particular moment in time. That is until shtf, and it's too late to stop, or take corrective measures to prevent whatever it is from happening. If anything goes sideways with AI, it'll happen right under our oblivious noses. I believe it's already fully underway.
Good point about the singularity starting a couple hundred years ago.
Great Acceleration intensifies
Thanks for this. Great level of explaining. Not too dumbed down, not too technical. Easy to absorb and get me excited for more.
There are so many TV series and movie sequels that have been cancelled that I really wanted to see more of (as well as series/movies that started off good but fell flat), and if I could prompt an AI to create more of something that I love, I absolutely would. It would just be considered "fanwork", wouldn't it? I just... I just really want more seasons of Firefly, okay 😭
I asked it to rewrite the fourth season of the 4400 and it was awful.
Firefly 😢😭🤧
Yes, please more Firefly😊
Firefly was the first tv show I thought of as soon as Joe mentioned the possibility of AI generated seasons.
@@VosperCDN me too! 😂
As a designer (power Photoshop user for 25 years) I can say that some of the AI software is such an incredible tool for photo editing. It has made some aspects of my job infinitely more productive. AI does a fantastic job with the "grunt" work, freeing me up to do the part AI can't. (At least for now….😅)
Totally agree! I am blown away by what has come into my workflow even in the four months since this initial comment was posted. Regardless of what direction everything is going, I am personally relieved to never have to draw a clipping path again. 😊
One definite point of concern is Microsoft eliminating the AI Ethics department in order to “speed things up”🙈
Bloody hell lol
Microsoft ethics? Sounds like we wouldn't know anything about Bill Gates...
Worked just fine with vaccines...right? Right?
Microsoft can NOT be trusted with any of this. Like.. at all.
That department had an unfortunate name, but your comment is misleading. Microsoft still has a Responsible AI division that has significantly more funding and employees even after the recent layoffs.
Now Responsible AI does everything you think the AI Ethics and Society department did, without redundancies. It's "speeding things up" because Microsoft had two departments writing ethical AI rules plus product implementation guidelines for those rules; now they just have one doing both.
Stoked to watch this. There are so many horrifying repercussions to AI. The main thing that keeps me up at night is fake video/audio. We already live in a post-truth society, but once AI video becomes impossible to discern from reality I feel like society will only unravel more. The rest of the implications don't bother me nearly so much, but the inability to separate reality from fiction in the future unnerves me deeply
Edit: good to see Joe shares my nihilism on this point lol
I heard that! Soon (maybe now?) we won’t be able to use audio, pictures, video as evidence in a trial because they can all be faked so well it will be indistinguishable from the real thing.
Well, humanity has always preferred tyranny anyway. In an age of enlightenment and reason and greater personal freedom than any human in the past could have dreamed of, most people cling to primitive superstition and prefer the darkness of ignorance and to be led and ruled by narcissistic egomaniacs. They prefer slave societies and authoritarian societies with rigid social caste systems. So it hardly matters anymore.
Education, particularly critical thinking skills, was/is the only safeguard against disinformation, and alas, over the last half century it has been thoroughly dismantled in the US and likely elsewhere. This couples with the insidious nature of the Dunning-Kruger effect and confirmation bias to allow so much unchecked damage to our world....
This is actually pretty easy to solve technically. All you'd have to do is create cameras and mics that are able to encrypt recordings at capture. That way, if something is still encrypted, you know the source device.
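To be pedantic, what actually proves which device produced a recording is a digital signature rather than encryption, so here is a hedged sketch of that variant using the Python cryptography library. A real camera would keep the private key in secure hardware; the key setup and sample data below are made up for illustration.

from cryptography.hazmat.primitives.asymmetric import ed25519

# In reality the private key would be provisioned inside the camera's secure element.
device_key = ed25519.Ed25519PrivateKey.generate()
public_key = device_key.public_key()  # published/registered by the manufacturer

def sign_recording(raw_bytes):
    return device_key.sign(raw_bytes)

def verify_recording(raw_bytes, signature):
    try:
        public_key.verify(signature, raw_bytes)
        return True
    except Exception:
        return False

clip = b"...raw video frames..."
sig = sign_recording(clip)
print(verify_recording(clip, sig))          # True: footage is untouched and from this device
print(verify_recording(clip + b"x", sig))   # False: any edit breaks the signature

The catch, of course, is that a signature only proves provenance for footage that was captured on a signing device in the first place; it does nothing about the flood of generated clips that never claim one.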
@@NorthgateLP Not to mention that we will all know anything could be fake, so video will stop being credible evidence of wrongdoing or whatnot.
Hell, I don't trust video even now, as I know that even with real video media can create a totally different narrative from what actually happened.
I am 100% here for the longer-form videos. Also, a new channel called "Answers with Joe" would work. AI has your back there. 😂 Also, I used QuillBot to paraphrase an essay (as an experiment) and I got 96%. I get that it's not ethical, but it was fascinating.
Bold choice on the opening analogy
Would love to hear more about the companies being run entirely by AI. I’ve heard some stories but would like your take on those!
I work in DevOps, which is a lucrative field that requires specialized skillsets, and we get a lot of applicants who like to "fake it until they make it." We've already witnessed multiple instances of very sus behavior during onscreen interviews, so we asked ChatGPT to offer some suggestions on how to spot the cheaters, and it actually gave us some really solid suggestions that we now incorporate into our interview process. Without giving any of our new tactics away... I have to say that AI is pretty sneaky sis.
I am SO curious about these. What type of sus behaviors did you notice during the calls? (If you can share them)
@@Vale-nh6ey just ask the robot yourself, then ask how to pass/bypass them
Do you mean people searching for info DURING an interview? I recently asked ChatGPT to interview me for a specific role and that helped me prepare. I don't think that's cheating. But now I'm paranoid lol.
@@drivethrupoet To be clear, this has happened during live interviews on multiple occasions, where we had suspected they were using Google to augment their answer, and we'll sometimes let that slide if the answer is correct. We all use Google daily to do our jobs, so that's not a major transgression.
No... These folks are typing our questions into ChatGPT and reading the answers verbatim in real-time during a live interview. And we know it for a fact. I won't tell you how.
@@joho0 so if I’m correct, it’s ok to use Google in interviews because it’s ok to use Google in jobs. Are you saying it’s not ok to use AI in interviews because it’s not ok to use AI in jobs?
I feel like I should be terrified but I am not. I'll chalk that up to being desensitized from the chaos of modern life. I'm just excited to see what happens, good or bad. I've been following AI art for 6 months or so and I get a tremendous amount of entertainment from it. Ai art is hilarious.
Relating so much to this
You must not believe that AI can become an unstoppable force for pure evil. If such a scenario becomes true, an emotionless torture machine might plug your brain into a hell scape one trillion times worse than any nightmare you're naturally capable of conjuring. Imagine one second of that feeling. Now picture it lasting for eternity. Doesn't scare you?
@Tew Travelers Nah, I'll be dead before it gets that bad. Whatever horrible nightmares that await earth are not my problem after I die lol.
@@nozzzzy what about the kids...
So I'm a musician, composer, and producer, and I have found all of the music AI programs super underwhelming. They tend to be exceedingly boring. The most complex thing I could use was Orb, but it's just not impressive. I'm sure for a corporate video or commercial the music would be fine, but I need far more options than what is offered, even for """"pro"""". I found one notation program that wasn't super intelligent and had no options like creating with a specific mode or style in mind; you just throw in recorded music and it approximates the notation. Finale, MuseScore, and Sibelius already do that when entering with a MIDI keyboard.
It's all "happy", "nature", "medium tempo"; it's not even close to anything professional. It's more of a fun app for novices while they're waiting at the dentist's office. Also, every program that got the highest marks has already been acquired by Shutterstock, TikTok, or whoever; their models are bought up and I couldn't figure out how to access them, like Amper or Jukedeck. No, the one tried in this video is just slop. There's a lot more meticulousness involved than I think these are actually capable of handling at the moment. It approximates things but comes out like a soulless AI-generated voice, only less exact and more generic; tons of royalty-free music already sounds just as good if not better.
I'm interested in the actual creation of music using them: choosing the instruments, modes, era, tonal choices for acoustic instruments (the instruments just sound like sad MIDI instruments). I dunno. It's just not even close, but it'd be interesting to see when it does actually get more interesting. I thought Orb Producer 3 seemed the most promising, but I want it to write its music in notation, be able to choose a mode of limited transposition, or ask for a serialist computer-music piece a la Stockhausen, or a synth-and-tape piece like John Adams's "Light on Water". I'm just not seeing that level yet anywhere, and Amper's tracks are just WEAK, but that's the best???? It feels the opposite of inspiring, honestly.
Chatgpt isn't just giving wrong answers. It's giving us completely fabricated BS. The fact that AI has developed this very human character flaw right out of the gate is (to quote my early 70s self) a complete bummer. I'm feeling very much like a witness to Pandora opening her box.
GPT4 is a massive improvement as far as that's concerned. It won't be long until that is fixed.
It isn't lying, it is just weird, and absolutely not a human-like failure mode. The issue stems from the fact that it doesn't actually have any memory and has not been trained to say "I don't know". When it doesn't know, it puts in the "most likely" answer, but immediately forgets the very low actual likelihood and the fact that it just made a hail-mary guess. When it continues on, it refers back to what it just said and treats it as though it was confident about it, because nothing tells it that it was not.
What makes things harder is that it doesn't plan. When you ask it something like "Who is the president?", it will guess that the next few words will be "the president is", but if it doesn't know the answer it has painted itself into a corner. People do not generally stop themselves mid-sentence when they realise they don't know a fact they need at the end, so it has not been trained to do that. It is stuck with only low-probability continuations, so it gives one of those.
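A toy illustration of that "painted into a corner" point. All of the numbers and names below are invented and no real model is involved; the point is that once the sentence needs a fact the model doesn't have, sampling still has to emit something, and nothing in the output records that it was a near-random guess.

import math, random

# Invented next-token scores the "model" assigns after the prefix
# "The president of Freedonia is" -- none of these names or numbers are real.
scores = {"Rufus": 0.20, "Amelia": 0.10, "Jorge": 0.05, "nobody": 0.00}

def softmax(raw):
    exps = {tok: math.exp(v) for tok, v in raw.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(scores)
choice = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)  # near-uniform: the model has no idea, but sampling must pick something
print("The president of Freedonia is", choice)
# The chosen name enters the context like any other token, with no marker saying
# "this was a guess", so later sentences build on it as if it were a known fact.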
Unfortunately, we can't fix all the fake people! Lol
This is why I spend most of my time dully masturbating.
@@TheWWDproductions So in other words it's the same as our present day media and politicians.
A friend already got a scam call from someone who sounded EXACTLY like her grandson. "Grandma! I'm in trouble! I need money!" It was an AI generated voice impersonator picked up from him playing online video games. Fortunately she was savvy enough to realize it wasn't him. I wonder how many times they've gotten away with it?😱
A radio reporter in DC had this happen as well: her mother received a call in the reporter's voice saying she'd been pulled over by the police and needed money to get out of jail. After a while her mother became suspicious and called her directly to be sure of what was going on.
15:05 I have asked it for instructions on PowerShell commands (mostly correct... if it gives me something inaccurate, it's usually because PowerShell has deprecated a command or module dependencies aren't installed, so you have to kind of guide it along) and it's been pretty spot on with air-fryer recipes! :)
It is literally impossible that this won’t spiral out of control. Reminds me of the movies “War Games” and the “Terminator” series…
Skynet
if Ai can get us at least one more season of Firefly, then I'm on board
You are the one person on the internet I trust to give me a concise understandable explanation of this very tricky subject
The pressing issue I see with AI like Chat GPT is how it's being presented. It is a Generative AI, by definition designed to create/make up new information that "looks right."
But it is being presented like the next evolution of Siri. It's so smart! It can do anything! No, it can _talk_ about anything _it has seen before_ and makes up its response on the fly.
It *_is not_* a search engine. But it _looks_ like a search engine and is being presented like one.
It is already difficult for regular people to tell the difference between a social media post vs a reliable well written article.
Putting a button right next to the search bar that basically works like your crazy uncle on facebook, but _presented_ like a google search result is STUPID.
Quick correction: these generative AIs don't search through stored images or text. Everything is just "somehow" encoded in their synapses (and this is mind-blowing, I think). But cool video, as always!
This is one reason AI is very dangerous: its specific method of operation is somewhat of a mystery to us; we don't fully understand everything it does beyond its basic programming.
On Telosa, GPT-4 said:
As of my last training data in September 2021, there was no city named Telosa. If Telosa is a newly founded city or has come to prominence after this date, I am unable to provide detailed and up-to-date information about it. However, based on your request, I could create a hypothetical description for a city named Telosa.
Yeah, Bard also sometimes gives info that is way wrong too, but then hours later it is corrected. Although I don't see a feedback interaction for ChatGPT; does it have one? With Bard you can tell it if it is wrong and even explain what it is wrong about.
OpenAI said they're working on addressing hallucinations and other issues now, rather than training a more powerful model like the future ChatGPT 5. Their latest version is only a week old. At some point they'll have something they're pretty happy with, probably called ChatGPT 4.5, and then they'll return to making it more powerful again.
This is the problem with GPT etc.: the headlines and social media are full of amazing, wonderful, magical results; then you use it and it just lies to you or hasn't got any idea what it's talking about.
This is more obvious to those of us who are older. I've seen so much change in my 67 years. Imagine a time with no cell phones, no internet, no computers, no video recorders; all music on vinyl records; baseball on the radio only, except for the Saturday game of the week (unless it was the World Series); encyclopedias for information that we had to buy, or get one volume each time we bought groceries; a crank telephone where our ring was three long rings so we knew to answer the phone; television broadcasts you could just watch for free if you were close enough to the TV station. We kids played outside, roamed for miles in the timber without adult supervision, went trick-or-treating and got invited in for a snack; police drove around using a loudspeaker to tell us to take cover before a storm, and we went to public shelters; we watched TV in mid-America not realizing there were racial problems because everyone looked the same (not knowing people in authority were keeping other races out of the area), etc., etc. My only regret is that I'm too old to see what life will be like in 20 years. I wouldn't fear it at all even if I were younger. It has to be better than what we as humans have been doing to ourselves during my lifetime.
Amen (in a non religious way)
Wish I could agree with your last sentence
ok boomer
@@noxabellus You don't even know what the term "boomer" means. A boomer is a person who was born during the baby boom at the end of WWII (that's World War 2, since you probably don't know). Since WWII ended in the 1940s, that would mean a boomer would now be in their 80s. I hardly think the OP is in his 80s. Get some history.
@@roylavecchia1436 the meaning of words is constantly evolving, and the term boomer is in no way immune to this evolution. it has, for at least a decade now, been used as a slang term to mean an older person who is simultaneously pretentious and out of touch, or narcissistic, low in empathy, and other such traits typically associated with the generation to which it originally referred. not only that, your math about the "true" boomers' age is incorrect, they're in their mid-seventies on average, and there are plenty of these folks who are chronically online, so your assumption that the original commenter is not one of them is entirely baseless. in short, cringe response.
I studied AI as part of a Comp Sci degree course more than 30 years ago and it's already changed the world a bit. Automatic number plate recognition, smart speakers that respond to voice control - that's all AI. The sophisticated language models that are appearing now make it seem a lot more human and that's what people find disquieting, I think.
I kept my blinders on when GPT first came out, but when I heard it could write code I decided to check it out. I use it to write a lot of the boilerplate that comes with Unreal Engine classes, and it can often help me debug code
The very people who want to "pause" AI development should be the people given the resources to make it happen first, because the people who want to develop it first are inevitably going to make the dark-future version, some of them deliberately.
Maybe, but the people with the power to control the use of information have already proven themselves to be violent psychopaths.
Precisely. And precisely why such a pause shouldn't happen.....and won't happen.
The notion that we're even contemplating the possibility, let alone feasibility, of _pausing technological evolution_ is hilarious and naive. That's never happened. That can't happen. That won't happen. It's silly to even talk about it. Instead we need clear eyes going into this. Because it's happening. Now.
I'm worried. As I see it, the first actual general AI controlled by a private entity would basically have the potential to immediately make that entity the most powerful in the world - dominans. One can hope for various outcomes; however, they do say: "power corrupts, absolute power corrupts absolutely."
"dominans"? Were you trying to say _dominance?_
P.S. don't worry about it too much yet. This is one of those topics where Joe bought into the hype and doesn't _really_ have any idea what he's talking about. We're a lot further away from a general AI than you think. A couple decades at the least (but probably a lot more).
@@idontwantahandlethough I certainly was. I don't know how far away it is, yet my concern is still the same.
I actually think that the attitude of: "not in the coming years" is kind of the problem.
Wow, another impressive video Joe!
Mainly it breaks my brain to consider how you, and so many other amazing content creators, constantly knock out incredible content that I would have a nervous breakdown if I tried to produce.
This video was very informative and as always such a great video. But, I especially loved the part at the end starting at 32:24 where you slipped in a little reference from the movie Tommy Boy 💙 Thank you for another awesome video!
21:46 this also resonated strongly with me as a CAD draughtsman! I'd go so far as to say that anyone whose job involves taking one type of information and converting it into another will take a small amount of comfort from this!
Incidentally, I'm working on GAD - generative aided design.
Chat GPT 2 could already write code. The free version had a limit, and it had the disclaimer "this is just example code", but it 100% worked and was quite good. Obviously you pretty much have to be a programmer, or at least know what you're doing, to prompt it properly, but writing code is probably way easier than natural language since there are only a limited number of ways to do most things, whereas natural language can have infinite layers of nuance and slang that span the globe as well as human history.
I'm an old-school programmer and had ChatGPT write a very specialized (but simple) spider for me. Its code works, but it took a lot of iterations, directed in very specific ways only a programmer would know, to get it to do what I wanted. So yes, Joe Bob will be able to get it to write VERY generic code. You'll need programmers for anything else. It will make programmers 5 or 10x more productive, though.
@@cdreid9999 Agree - I use ChatGPT in my daily job to be more productive. Still, with more complexity, and within a codebase where logic is chained together across multiple files, ChatGPT fails to create a working solution out of the box. Devs are not going anywhere in the next 10 years, in my opinion. But programming will change, alongside graphic design and UX/UI.
@@cdreid9999 I have a fairly basic grasp of programming and it has definitely increased my productivity. Asking more experienced developers is as frustrating for me as it must be for them, but now if I'm getting tripped up on something simple I can simply explain the desired outcome of my code, paste it in, and tell it what's not happening that should. It usually doesn't hit the nail on the head, but it has good insights and at least jumpstarts my thinking.
@@cdreid9999 There was a big hullabaloo when OpenAI released Code Interpreter recently about how it was going to put data analysts out of work. "You can just upload a spreadsheet and it will analyze it for you based on natural language queries!" The thing is, as someone whose job is complex and varied but largely focused on data analysis and science, I can tell you that the part of data analysis that happens after you have the spreadsheet to upload is the easy part. The real value of a data expert is being able to take lots of messy data sets in lots of different formats and put them together into that spreadsheet in the first place. Even beyond that, writing the code to assemble that dataset isn't where my real value lies. The real value I add is being able to map out the process to transform the data and understanding what is useful in the resulting data and what isn't.
I use ChatGPT all the time. You're right, it has made me many times more productive than I was before I started using it. I can outsource a lot of the annoying stuff to it. Why spend 20 minutes writing a script that ChatGPT can write in 60 seconds? But I still have to debug it. I still have to know what that script needs to do in the first place. I still need to use my 20 years of experience in my industry to make the data meaningful. And I think we're at least a couple of generations of AI away from it being able to do any of that.
I'd had trouble with, or rather had been very lazy about, writing a text media-type conversion tool for work. I knew it was only going to be about 100 lines or so: just simple text parsing with bash tools, or perhaps even Perl, etc. Rather than beat my head against the nuances, or actually work through the code-compile-damn-repeat cycle a thousand times myself to get it right, it took around 20 iteration prompts with GPT to get flawless functionality.
The fact that ChatGPT learns with each user's input is mind-boggling to me.
Yes and no. It learns in-session, but between sessions it forgets everything. However, OpenAI can then use the recorded data to train a new version.
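A minimal sketch of what "learns in-session" actually means, using the OpenAI Python client (the model name is a placeholder): the apparent memory is just the conversation history being resent with every request, so dropping the list makes it "forget".

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_message):
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(model="gpt-4", messages=history)  # model is a placeholder
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # this list IS the "memory"
    return answer

ask("My name is Dana.")
print(ask("What's my name?"))  # only works because the whole history was resent

Nothing in the model's weights changes during this; any lasting learning happens later, if the provider uses logged conversations to train a new version.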
A few months (years?) back, the military ran a simulation where a drone AI had target protocols, but a local operator had final kill/no-kill authority. After multiple no-kill decisions, the drone took out the local operator so it could continue its targeting protocol without obstruction. The AI program was discontinued.
I used Bing AI today to draft a letter for my oncologist to sign stating that my pain and fatigue from my cancer meds hindered my ability to complete schoolwork. That would have been a tough letter to even get started on but the AI generated it in seconds.
That's a tough spot to be in, I hope you get better soon. Something that tends to get missed in debates about whether using AI to help perform tasks is "cheating" is the reasons why people might need help with things. If AI can help someone get the medical support they need then that's something we should be celebrating
A lot of government and financing functions (NOT MILITARY) could be replaced by AI. That would reduce corruption and reduce costs. Government is bloated by bureaucrats and any reduction of that would be awesome. Human oversight would of course be required.
What happens when it takes our jobs
I've used ChatGPT to help me write stuff for my Dungeons and Dragons homebrew; it is great with stat blocks and characters. I ran into so many roadblocks before I used it, especially because my world is set around the Industrial Revolution.
Try using GPT-4 instead of 3.5; it's the difference between talking to your drunk uncle and your smart cousin who spent a summer abroad in Italy.
Agree about the hygiene; I would add all of the food and drink as well. Survival games are beyond their best-before date.
It scares me. I think mobile phones and the internet changed the world for the worse. I'm 37, so my childhood was an "old school" one, and I think I was 15 or so when the internet started becoming a household thing along with mobile phones, and the change was so quick and drastic. I miss being 15. I'm worried AI will have such a bad effect on society. It worries me all the time.
👴👴👴
I'm older than you and I feel more positive about technology. I don't see it as inherently good or bad, but I do think it enhances what we already are. So, if we're good people, we will use technology in a helpful way, but if we're bad people, we will use technology in a harmful way. It's a bit more complicated, of course, like people don't have an equal voice or equal power, so even if most people are good, if the people with the most power are bad, the end result will likely skew bad. The upshot of this is that we have to tear down unjust hierarchies before they get a hold of some kind of world-ending technology. But then, we've had nukes for 80 years and are still alive (somehow).
@@23phoenixash Ordinary people are evil, though, as the philosopher Peter Singer pointed out. Most people make decisions based upon what will bring them (and their immediate families) short-term gain. The impacts these decisions have on the wider world (especially outside their own communities and nations) are considered irrelevant at best, and possibly even desirable*.
It’s why most Americans and Canadians continue to eat meat three meals a day, despite the trillions of animal deaths and millions of tons of carbon released by animal agriculture every year. It’s why CEOs will plunge thousands of their workers into poverty to save their company 0.01% in cost, and then take home massive bonuses while shareholders take in big dividends.
Humanity is fundamentally evil, and our ability to reason and empathize is the only thing keeping us from simply being a race of monsters. How can an AI, designed by for-profit businesses for the purposes of creating profit, lead to positive social outcomes for ordinary people and non-human animals? It owes its entire existence to greed!
* look at the way “normal people” react with glee when homeless people get arrested, their things confiscated, sent away, etc. Humans **love** to see those they consider “beneath them” suffer.
@@23phoenixash Technology in the hands of corporations will always be used in the worst way possible. Whatever upside there is to it is just a means toward a much darker end.
I’ve always felt that computers are the potential cause of humanity’s demise and now with AI I’m convinced.
Computers aren't the issue; the issue is the mass of people who know nothing about computing who are now online and able to be manipulated by AI and such. Hopefully it gets better as more kids grow up online. I did, and I know how untrustworthy and unsafe the web can be and how hazardous AI can be. We can only hope that as the boomers who only recently got online die off, things will improve. (Just saying, my dad is a young boomer, but I'm not expecting him to live super long given his age and health. Not hating on boomers. Also, he's been online since before AOL, so he taught me to be critical, thankfully.)
@@Memento_Mori_Morals I was generalizing. I agree.
@@francispitts9440 Oh haha, I see. I didn't necessarily think you disagreed; I just wanted to comment so people unaware of the creepy stuff AI has done might look into it now. Have a good day, sir. 😊
Thanks Joe, great video, as ever. I'm hoping that as a weird, extra creative costume maker and artist, I may escape the doom of AI. As long as it doesn't learn how to operate a sewing machine and do extremely detailed embellishments, I should be safe. 😂
Robotic embodiment of AI doing independent/general tasks is definitely (much) farther away than most pure knowledge work like writing a video script, which, as we know, is itself still some ways out. Although with various improvements to the base AI, like API-hooked helper apps or whatnot, there's no ruling out a ChatGPT-5-ish level AI getting there.
@@davidlovesyeshua Ironically it’s not the manual laborers and skilled craftsmen like yourself that should be worried: it’s the creatives and the mental laborers like radiologists, accountants, translators, etc.
I really love this episode. Being in the tech industry, I was heavily looking into AI in 2015 around the time of your previous AI episode and now I've taken a break from researching and playing around with AI apps. There’s just too much out there that's constantly changing, it's too overwhelming. This episode very much sums up how I'm feeling towards AI, which is why I love it so much - I don’t feel alone in these thoughts.
I agree on how overwhelming it is. I tried to learn some of it and within a few months it was somewhat outdated. It's made me just not want to bother which is in itself a problem. There are already too many things that I need to constantly be up to date on so adding a rapidly evolving AI tool set is just another annoying thing to follow.
@@CRneu Thanks for your comment, it’s reassuring that I’m not alone in feeling overwhelmed. Not bothering is weighing on my mind because I fear not getting on top of it will mean I’ll fall behind. Maybe someone will create an app that fast tracks learning AI. We need something like what we see in the Matrix where Neo learns kung fu😄
This is very useful for breaking down the development of AI into digestible bits. I think you have definitely added to the conversation and also conveyed how a lot of us are feeling: dazed and confused and perpetually trying to catch up.
Your analysis on the use of ChatGPT is the best realistic take I've heard so far. It is so much better as a springboard than as a source of truth.
I just started using it for coding and it is much more useful at the start, for getting some examples to work off of, rather than pinning down specific issues.
Great video Joe. This subject is a very important one and the facts you touched on not only interest most people, they also scare the crap out of most. Thank you for treating the subject matter with the respect it deserves. By the way. I really enjoy the new angles of view you have created. Keep'n it fresh...lol
Good presentation, Joe. There's another aspect: there's a YouTube influencer who created an AI version of herself that she sells for so much per minute, which can become someone's "personal friend". Other than the obvious use a guy could make of this, what about those who think they're falling in love with an AI? Remember the movie "Her"? You could discuss this aspect.
I'm surprised this channel isn't at least halfway to 10M subscribers. I mean, I've subscribed; let's get this guy a Diamond Play Button.
Great update and the working examples were very clear on the accuracy. As well as all the use cases out there for AI now. It’s going to be fun times.
Hi Joe, for me, in my mid-50s, born BC (Before Computers), and without much exposure to new technologies, this is quite eye-opening. It makes me feel uneasy thinking about what AI could lead to, but on the other hand, it creates this curiosity in me to explore some of the AI options out there.
Thanks for a great episode.
Keep up the good work.
That "BC (Before Computers)" part made me laughing :D
Thanks for covering this Joe. Well done as always!
Poor covered Joe
Very entertaining as always, especially Tangent Cam.
I've been waiting for Joe's AI video ever since OpenAI's bots were fighting in games, like around 2018. I'm so excited to see what he has to say about the latest developments in AI.
First, I think what you contributed to this discussion comes from your platform. You are collectively informing hundreds of thousands of people, giving them a more in-depth knowledge of the AI conversation.
While I understood the concepts of general vs. narrow AI, there is likely a large portion of your audience that didn't, and it's important to look at how we collectively evolve our understanding. It makes me think of just how much stuff is general knowledge among our generation that wasn't before channels like yours came along.
Secondly, the thing that worries me more than anything is the black box you talked about: the fact that the experts just have no real clue how it's working, and we're implementing it into the fabric of society. That never goes well.
I use ChatGPT to look things up sometimes, it works pretty well if you don't take it at face value. You can give it a pretty vague question and even if the answer isn't entirely correct it can usually at least spit out some relevant keywords that you can use to look up more info the old fashioned way.
Yeah, but the issue is when it's completely off. Bard has this issue too, though, so it isn't a brand thing.
I've had the same experience, lots of hype flying around about how it will change everything, but the practical application is a slightly better google search.
The errors ChatGPT makes can be pretty appalling. Someone was arguing with me that the royal houses of Europe weren't inbred because ChatGPT told them they weren't, and it used the Hapsburgs as an example: the most famously inbred royal family in history.
@@NoadiArt Haha, true. I was mainly using it to review stuff I learned years ago and then forgot about, so it wasn't too hard to spot the things that didn't make any sense 😀
@@Reddles37 Which is a fine way to use it. It's people who just believe everything it spits out that really frustrate me.
I always like to point out in the year 1900 there were maybe 50,000 working piano players in North America. Every bar, dining room, movie house, brothel, casino, coffee house had a guy playing piano. That and buggy whip factories converted to whatever and the world didn't end.