AI does not exist but it will ruin everything anyway

  • Published Dec 22, 2024

COMMENTS • 7K

  • @AuronJ
    @AuronJ 1 year ago +2866

    I think it's funny that you brought up a skin cancer app, because in 2022 a group of dermatologists tried to make a dermatology machine learning tool that they found was drastically more likely to call something cancerous if there was a ruler in the picture. This is because so many of the cancerous images provided were taken by doctors who used a ruler for scale, while pictures that weren't cancerous were taken by patients and had no ruler in them. Basically, they tried to build a cancer-finding machine and instead built a ruler-finding machine.

    • @Apjooz
      @Apjooz 1 year ago +35

      Humans do that too and would do it even more if we had larger memory.

    • @Amethyst_Friend
      @Amethyst_Friend 1 year ago +441

      @@Apjooz In this example, humans absolutely don't.

    • @LordVader1094
      @LordVader1094 1 year ago +226

      @@Apjooz What? That doesn't even make sense.

    • @Brandon82967
      @Brandon82967 1 year ago +74

      This is a flaw in the training data, not the algorithm, and it could easily be fixed by removing the rulers from the images.

    • @madeline6951
      @madeline6951 1 year ago +77

      as a biomed CS major, this is why we need to preprocess and define the region of interest, smh

  • @vincentpendergast2417
    @vincentpendergast2417 1 year ago +3955

    Little Sophie hands in her ChatGPT essay without ever having double-checked it, Sophie's overworked teacher runs it through an "AI essay grader" without actually reading it, the grader gives it top marks and the circle of nonsense is complete.

    • @francoislatreille6068
      @francoislatreille6068 1 year ago +148

      :,( I cry, but the way you put it is actually pretty funny and relieves my anxiety

    • @ubernerrd
      @ubernerrd 1 year ago +501

      The important part is that nothing was learned.

    • @JasenJohns
      @JasenJohns 1 year ago +95

      The machine does not have to be intelligent, only make people dumber.

    • @WindsorMason
      @WindsorMason 1 year ago +168

      And then the essay is used to train another network making things even much more betters, yay! :D

    • @Kira-zy2ro
      @Kira-zy2ro 1 year ago +31

      Actually, a decent AI checker could compare it with a ChatGPT essay and recognise that it wasn't a hand-written essay. Kinda like my history teacher knew most of the literature and also most excerpt books, so he recognised it if you just copied them. He never gave 0/10, because even turning up for the test or handing something in was usually good enough for a few marks. He only gave 0 to people who copied. He warned us year 1, day 1. Only one person ever tried it, and they were made an example of.
      And in the end it doesn't matter. You can't bring ChatGPT to the test, and anyone who has just been copying will not make the tests and exams... so one failed exam later, they will understand what school is about.

  • @batlin
    @batlin 5 months ago +643

    Cory Doctorow's summary a while back stuck with me: "The AI can't do your job, but the AI salesman can convince your boss to fire you and replace you with an AI anyway"

    • @TheManinBlack9054
      @TheManinBlack9054 5 months ago +10

      Great, then the company will go bankrupt because no work will get done, and soon it will be clear that this is not the way.

    • @batlin
      @batlin 5 months ago +32

      @@TheManinBlack9054 probably, at least in companies that aren't gigantic enough to eat the loss. The company going bankrupt doesn't do much for the worker who got fired months earlier though.

    • @birdbrainiac
      @birdbrainiac 5 months ago +5

      @@TheManinBlack9054 which unfortunately happens a lot, but people have still lost their jobs.

    • @BillClinton228
      @BillClinton228 5 months ago

      Software companies have been trying to do that for decades... "hey, why don't you fire all your staff and give their salaries to US". Of course, right now all this magical AI tech is supposedly free, but once it gets wide adoption the tech companies will jack up their prices real fast.
      And somehow, these big-brain, genius CEOs can't see that coming... they think AI will stay free forever, or that it will cost a fraction of the salary of a graphic designer.

    • @hughmilner7013
      @hughmilner7013 5 months ago +7

      @@TheManinBlack9054 ah, but if every company does the same thing then the whole system limps on despite the jobs not being done because everybody is failing in exactly the same way.

  • @notdog1996
    @notdog1996 5 months ago +270

    I am a translator. You just described perfectly what the market is like right now. Companies hype up this AI stuff, and now we have to correct its output at lower rates. Not only does that make the job intensely boring, but it's more prone to errors, and we make less money in the end. I hate what this field has become.

    • @petiteange08
      @petiteange08 4 months ago +25

      I am not a translator, but we used to work with a translation team to make some of our documents available in another language. But now that AI can do it (poorly), the company wants my team to review the AI translation directly instead. While I understand the language, I am in no way a communication expert and I'm not a translator. Before, my review was to ensure that industry-specific language and meanings were translated correctly, which uses my expertise. Now I have to make sure that grammar and figures of speech are translated correctly, and do it quicker, since the company thinks it's faster because we don't need to wait for a translator. The end product is of lesser quality and everyone suffers.

    • @InquisitorShepard
      @InquisitorShepard 2 months ago +1

      On the other side you get localisers who think it's their God-given duty to shove their politics into everything they translate.
      However boring it made your job matters very little compared to not having "the message" shoved down everyone's throat in yet another facet of our lives.

    • @hometoroostchickens
      @hometoroostchickens 1 month ago +3

      I'm an editor, and I get so frustrated with people who think they can just run their manuscript through a program instead of hiring me to work on the project. I would much rather have an unedited manuscript as it was originally written than a manuscript that has been "edited" by a machine.

    • @muppet3901
      @muppet3901 18 days ago +2

      @@ooievaar6756 I work as a translator/proofreader in the Nordics. We ran multiple tests on AI outputs: the grammar was excellent; where the content was marketing (sales content) it was OK with some problems; anything technical or from a narrow source was complete and utter nonsense. We had companies sending technical instructions for wiring upgrades on cars (aux lights) and it was complete gibberish. The grammar, well, that was perfect. We are seeing more and more work called post-editing, and it is an absolute avalanche of shit.

    • @anameyoucantremember
      @anameyoucantremember 8 days ago +1

      Translator here too. 30 years in the business.
      MT is a bit like cancer. The companies that got it first are starting to get sick, funnily enough not because MT is absolutely crap (tho it is), but rather because it will always require a human in the process, and as companies lowered our rates and shortened our deadlines and MT makes translators work harder and slower, linguists just stopped giving half a fuck about it and are streamlining MT content without even reading it. You can imagine. Completely useless and potentially dangerous.
      So, at least in my experience, some companies are sloooooowly reverting to projects without MT, and client companies are starting to ask for human translations with no MT involved.
      There is hope. I hope.

  • @PaulPower4
    @PaulPower4 1 year ago +1455

    "Garbage in, garbage out" is practically one of the foundational principles of computing, yet so many people seem to forget it when it comes to machine learning and making datasets that don't lead to problems.

    • @wallyw3409
      @wallyw3409 1 year ago +16

      GIGO was the bonus mark I missed. My prof even had a comic about it every week.

    • @Chrisratata
      @Chrisratata 1 year ago +14

      @@bilbo_gamers6417 People who claim "it can only do what you tell it" seem to underestimate just how good we can get at developing optimal architectures and knowing what to tell it. Most of the people focused on its limitations are only looking at what its limitations are at the moment, with very little understanding of how these things work under the hood.

    • @itchykami
      @itchykami 1 year ago +80

      With sophisticated enough technology you don't even need garbage in to get garbage out!

    • @disasterarea9341
      @disasterarea9341 1 year ago +11

      For real. ML tools are good for talking to datasets, and that is their real innovation, but if you don't have a good dataset then yeah... garbage in, garbage out.

  • @dyanpanda7829
    @dyanpanda7829 1 year ago +1562

    I went to college majoring in cognitive science. I wanted to know if artificial intelligence really exists. I graduated majoring in cognitive science, wondering if real intelligence really exists.

    • @Apistevist
      @Apistevist 1 year ago +32

      I mean it does but at disturbingly low rates. Nothing we can't select for over centuries.

    • @movement2contact
      @movement2contact 1 year ago

      Are you making a joke that the world is full of idiots, or do you *actually* mean that nobody/nothing matches the definition of "intelligence"..? 🤔

    • @ツルのために
      @ツルのために 1 year ago +47

      It doesn't. The name is misleading. It's an estimation based on training data: your prompt provides a conditional probability distribution, and with respect to that it estimates the desired response.

    • @officialspoodle
      @officialspoodle 1 year ago +187

      @@ツルのために I think the original commenter came away from their degree wondering if humans are even intelligent at all.

    • @eclogite
      @eclogite 1 year ago

      @@Apistevist eugenics doesn't really work. Not even to mention the absolutely janked ethics of the whole process
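
The "estimation from a conditional probability distribution" described a few replies up can be illustrated with a toy bigram model; the corpus and tokens below are invented for illustration, and a real LLM conditions on far more context than one word.

```python
import random
from collections import defaultdict

random.seed(1)

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Estimate P(next | current) by counting bigrams in the training data.
counts = defaultdict(lambda: defaultdict(int))
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def sample_next(word):
    """Sample from the estimated conditional distribution P(next | word)."""
    nxts = counts[word]
    total = sum(nxts.values())
    r = random.random() * total
    for w, c in nxts.items():
        r -= c
        if r <= 0:
            return w
    return w

# "Prompting": condition on a word, then generate a continuation.
word, out = "the", ["the"]
for _ in range(5):
    word = sample_next(word)
    out.append(word)
print(" ".join(out))
```

Nothing here "knows" anything; the output is just a draw from frequencies observed in the training text, which is the commenter's point scaled down.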

  • @PantheraLeo04
    @PantheraLeo04 1 year ago +822

    A while back I saw a photo from an IBM training slideshow or something from around the 1960s, and it had a whole slide that was just, in giant font: "A computer cannot be held accountable, so a computer must never make a decision." I feel that sentiment sums up a lot of the problems with all this AI stuff pretty well.

    • @eduardhubner3421
      @eduardhubner3421 1 year ago +45

      In German there is the Concept of Sitzredakteur (de.wikipedia.org/wiki/Sitzredakteur), a newspaper "editor" who doesn't do anything. His job is getting fired and taking responsibility for failures. We already have so-called kill-switch "engineers" for AI. This is where AI is heading.

    • @OREYG
      @OREYG 1 year ago +18

      Well, this is a very old take. Right now a lot of our daily lives is directly controlled by software; the most critical pieces are nuclear energy and aircraft autopilots, and those things are extremely robust. Fun fact: the Chernobyl disaster would have been prevented if the operators had left the automatic control system on.

    • @KaletheQuick
      @KaletheQuick 1 year ago +7

      Yeah, I've seen that one. It's amusing. But also came like 15 years after we let missiles pick what heat signature to chase.

    • @pleaserespond3984
      @pleaserespond3984 1 year ago +51

      Yeah managers saw that and went "Oh, if the decision is made by a computer, there is no one to blame? The computer must make all decisions!"

    • @monad_tcp
      @monad_tcp 1 year ago +39

      @@OREYG Automatic control systems are more like SCADA, or basically PID; they're totally open boxes. They make automatic decisions and they're auditable.
      "AI" is useless because it's not debuggable; it's basically a random generator that might be useful for play or art, not real systems.
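
The contrast the last reply draws is easy to see in code: a PID controller, the workhorse of industrial automation, is a few transparent lines whose every "decision" can be traced term by term. A minimal sketch, with the gains and the toy plant model invented for illustration:

```python
# Minimal PID controller driving a toy first-order plant to a setpoint.
# The whole "decision" is literally kp*error + ki*integral + kd*derivative,
# so any output can be audited by inspecting three numbers.
def run_pid(setpoint=10.0, kp=1.2, ki=0.3, kd=0.05, dt=0.1, steps=300):
    value = 0.0                      # plant state
    integral = 0.0
    prev_error = setpoint - value
    for _ in range(steps):
        error = setpoint - value
        integral += error * dt
        derivative = (error - prev_error) / dt
        control = kp * error + ki * integral + kd * derivative
        prev_error = error
        # Toy plant: the state relaxes toward the control signal.
        value += (control - 0.5 * value) * dt
    return value

print(run_pid())  # settles near the setpoint of 10.0
```

Compare that auditability with a neural network, where the same question ("why this output?") has no short answer.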

  • @Valkyrie9000
    @Valkyrie9000 8 months ago +586

    I used to think AI would accelerate technology in horrific ways, but now I realize AI will freeze society in horrific ways.
    "The purpose of AI is obfuscation"

    • @GFXCXZ
      @GFXCXZ 7 months ago +74

      A stunning achievement has been made. We have automated lying.

    • @TheManinBlack9054
      @TheManinBlack9054 7 months ago +1

      @@GFXCXZ I don't know why you're all so dismissive of AI. If that is truly all AI is, if there is not (and is not expected to be) any intelligence, and John McCarthy (the guy who coined the term "AI" and was one of the founding fathers of the artificial intelligence field) was just a very skilled marketer who only tried to trick laypeople who have no idea what the difference between AI and AGI is, then what is the problem? Seriously, if it's a nothingburger, why should anyone be worried? It's not going to take your job, since it's just incompetent, and if it does take your job, that will be dealt with the same way all other incompetent workers are dealt with (firing due to inefficiency). So it's not even a problem there.
      The problem is that this isn't true. AI is progressing rapidly, and such confident dismissal of its potential is hubristic. Believe me, you won't always live with GPT-4; GPT-5 will happen, and then 6 and 7 and so on.

    • @YayComity
      @YayComity 7 months ago

      Not unlike the reality of social media.

    • @Bustermachine
      @Bustermachine 6 months ago +34

      @@GFXCXZ We have created artificial stupidity.

    • @MilesDashing
      @MilesDashing 6 months ago +7

      @@Bustermachine Yes! We need to start calling it AS instead of AI.

  • @GrantSR
    @GrantSR 1 year ago +749

    18:32 - AI can easily take your job, if your boss never cared about accuracy or fidelity in the first place. I am a former technical writer. I had to get out of the field because I realized that most jobs available had nothing to do with actually writing accurate information. All they wanted was somebody to take a huge pile of notes and various random information from engineers, and rearrange it then format it to LOOK LIKE good, accurate documentation. How could I tell? Simply by looking at the work product. All of it looked pretty, had lots of buzzwords, but ultimately told the reader nothing of value. The documents are internally inconsistent, and inconsistent with reality.
    And all of this was years before large language models were invented. Managers have always known that it costs more money to get the documentation correct. They have also always known that they get promoted if they save money while generating reams and reams of documentation. What do you think is the first thing they throw out? Accuracy? Or volume?
    Therefore, large language models will easily replace a good 90 to 95% of all technical writers. And no one will notice the change in quality, because the quality fucking sucked already.

    • @iamfishmind
      @iamfishmind 1 year ago +51

      @@Vanity0666 What studies?

    • @lilamasand5425
      @lilamasand5425 1 year ago +92

      @@Vanity0666 are those big studies in the room with us right now?

    • @fartface8918
      @fartface8918 1 year ago +36

      @@Vanity0666 There was a case a couple of months ago where helpline operators went on strike and were replaced by AI. They stopped doing this shortly after, because the AI told people calling in to kill themselves. A similar problem arises with your medical example: sure, it might scan a wart a little better than current tools, but it's providing medical advice that a human wrote. Throw in the slightest complication, or the 1% of cases that just fail, and the worst-case scenario is infinitely worse without a human there. It might be an efficiency improvement, but treating it as equivalent to doctors is going to kill a lot of people and leave even more sick.

    • @lilamasand5425
      @lilamasand5425 1 year ago

      @@Vanity0666 so by big studies you meant that one research paper that Google wrote about med-PaLM 2?

    • @idontwantahandlethough
      @idontwantahandlethough 1 year ago +31

      @@Vanity0666 I mean, that's fine; I don't think anyone is arguing that computers aren't helpful. We're all abundantly aware of that reality. New technologies will continue to make us more efficient and accurate at our jobs. That's always been the case, and it will continue to be. That's not the issue. The issue comes from treating things that aren't _actually_ intelligent as if they are.
      We're a long, long way off from robots replacing doctors. I'm sure you know that. When/if that program gets implemented, a doctor will use it _as a tool;_ it won't replace the doctor. FWIW, "99% accurate medical advice" doesn't mean as much as you think it does.
      Nobody is arguing that "AI" (which isn't AI yet) is an inherently bad thing. All they're saying is that it's important to have clear communication surrounding this shit, because if we don't, it's going to get used in some really bad ways, some really stupid ways, and probably some stupidly bad ways too.

  • @NotJustBikes
    @NotJustBikes 1 year ago +2013

    Your videos are so good.
    I used to work for a company that used machine learning for parsing high volumes of résumés (like for retail positions where a human could never go through them all). The ML team was constantly battling the extremely biased training data that came from the decisions of real HR managers. Before that it was all Jennifers getting selected.
    Removing bias from ML training data is a full-time job. These algorithms are helpful, but should never be trusted.

    • @RichardEntzminger
      @RichardEntzminger 1 year ago +34

      Your videos are so good too, Mr @NotJustBikes! Do you agree with the premise that artificial intelligence doesn't exist, though? I think chimpanzees are pretty intelligent, but I'm sure they wouldn't do such a great job at parsing resumes. Does that mean chimps aren't a biological intelligence but merely an ML (monkey learning) algorithm? 😂

    • @lucasgsauce
      @lucasgsauce 1 year ago +80

      The unbelievable heartwarming satisfaction and validation when one of your favourite channels comments on another of your favourite channels (in an entirely different genre)...

    • @guepardiez
      @guepardiez 1 year ago +10

      What is a Jennifer?

    • @performingartist
      @performingartist 1 year ago +15

      @@guepardiez explained in the video at 17:20

    • @RuthvenMurgatroyd
      @RuthvenMurgatroyd 1 year ago +19

      @@guepardiez
      Gonna guess that the name is being used as a by-word for a White woman but Becky is way better for that imho.
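
The bias-auditing work described in the top comment can be made concrete. One standard screen (the "four-fifths rule" from US employment guidelines) flags a model when any group's selection rate falls below 80% of the highest group's rate. A minimal sketch; the decisions, counts, and group names are all invented:

```python
# Audit a screening model's decisions for disparate impact
# using the four-fifths (80%) rule.
def selection_rates(decisions):
    """decisions: list of (group, selected) pairs."""
    totals, picks = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        picks[group] = picks.get(group, 0) + int(selected)
    return {g: picks[g] / totals[g] for g in totals}

def four_fifths_violations(decisions):
    """Return groups whose rate is below 80% of the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < 0.8 * best}

# Invented example: a model that echoes biased historical HR decisions.
decisions = ([("group_a", True)] * 60 + [("group_a", False)] * 40 +
             [("group_b", True)] * 25 + [("group_b", False)] * 75)
print(four_fifths_violations(decisions))  # {'group_b': 0.25}
```

An audit like this catches the symptom, not the cause; as the comment says, actually removing the bias from the training data is the full-time job.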

  • @marcogenovesi8570
    @marcogenovesi8570 1 year ago +4022

    In gaming we have been calling whatever crappy script is "animating" the NPCs or mobs or whatever else opposes the player "AI" since forever. "AI" is really a very generic term that does not mean much.

    • @dannygjk
      @dannygjk 1 year ago +59

      You seem to be unfamiliar with the term "AGI". That is the one you should be using in your comment to be precise.

    • @OrtiJohn
      @OrtiJohn 1 year ago +887

      @@dannygjk I'm pretty sure that nobody has ever called a gaming script AGI.

    • @astreinerboi
      @astreinerboi 1 year ago +284

      @@dannygjk You seem to be misunderstanding his point. He is agreeing with you lol.

    • @antred11
      @antred11 1 year ago +172

      @@OrtiJohn What he means is that AGI (Artificial GENERAL Intelligence) is what doesn't really exist. AIs are usually specific to the particular thing they're good at, and they often fail when confronted with something they weren't designed to handle. An AGI would be one that can handle (or learn to handle) pretty much anything, i.e. true intelligence.

    • @dannygjk
      @dannygjk 1 year ago +30

      @@astreinerboi Except that what he said, "AI is really a very generic term that does not mean much", is not precise. If a system makes decisions, it is an AI; the term does mean something, it does "mean much".

  • @vKarl71
    @vKarl71 8 months ago +42

    A lot of police departments are using AI-style software to do all kinds of things, such as identifying alleged law-breakers using facial recognition that was programmed as badly as the examples you cite, and that uses data produced by a thoroughly biased system. Unfortunately the police will just say "That's what the computer said, so you're under arrest", even when the person is obviously (to a human) the wrong person.
    ♦If I use ChatGPT to write a paper on skin disease, then upload the output to a web conference on skin diseases, will that upload become input to the language dataset that feeds the software?

    • @carultch
      @carultch 3 months ago +5

      Your red diamond point is precisely the problem, called model collapse. AI needs to be trained on human-made data, for it to be any good. If it keeps getting trained on its own generated data, it will eventually become unusable.
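
The feedback loop named in the reply can be demonstrated in miniature: fit a distribution to a sample, generate new "data" from the fit, refit on that, and repeat. With a finite sample each generation, the variance tends to drain away, a toy analogue of model collapse. All numbers here are invented for illustration:

```python
import random
import statistics

random.seed(0)

mean, std = 0.0, 1.0        # "generation 0": the human-made data
initial_std = std

# Each generation: "train" (fit a Gaussian) on a small sample of the
# previous generation's output, then "generate" from the new fit.
for generation in range(300):
    sample = [random.gauss(mean, std) for _ in range(10)]
    mean = statistics.mean(sample)
    std = statistics.stdev(sample)

print(initial_std, "->", std)  # the spread has collapsed toward zero
```

The diversity of the original data is the thing that gets lost first, which is why "trained only on its own output" is a degenerate regime rather than a free data source.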

  • @skinnyversal_studios
    @skinnyversal_studios 1 year ago +1064

    i am a big fan of the "enshittification" theory, where people will use "a.i." models to make garbage content that is well optimised for seo, which will then as a result be fed back into the models to create garbage that is even more garbage, until the entire internet is just generated nonsense, rendering search engines completely useless (as if they aren't already). hopefully, this could send us back to the early ages of the internet, where people had to use webrings and word of mouth to find anything worthwhile, and simultaneously cause the big tech data centers to fall out of use, ushering in a path to a post-apocalyptic web solarpunk future (good ending)

    • @WaluigiisthekingASmith
      @WaluigiisthekingASmith 1 year ago +84

      The only thing SEO has done that's good for the world is teach me how to avoid SEO. It's not that SEO is necessarily terrible, but the people most likely to use SEO are also the most likely to put no thought or effort into their "content".

    • @fartface8918
      @fartface8918 1 year ago +60

      You must understand, like 80% of the internet was already bots talking to bots on gibberish SEO pages. The problem arises once it takes a moment for someone to distinguish it, because AI has the huge problem that the second it starts feeding on itself it fundamentally fails to function: a broken clock is right twice a day, but a slow clock is always wrong, and this compounds exponentially. Once AI started writing like a human it became convincingly wrong, a reverse printing press destroying disseminated knowledge by means of confusion and obfuscation, a true dementia engine. Now that so much data is polluted, even if a fix existed for it being wrong about basically everything, it would be increasingly impossible to implement, and so all the spaces we inhabit online will be lower quality for the sake of 200-1000 rich dudes making money they don't need or benefit from.

    • @solidpython4964
      @solidpython4964 1 year ago +22

      If models keep training on model generated data it will lead to collapse.

    • @vocabpope
      @vocabpope 1 year ago +30

      I really hope you're right. Can we hang out? I'll join your webring. Bring back geocities!!

    • @lynx3845
      @lynx3845 1 year ago +16

      I don’t like how accelerationist this idea is.

  • @DuskoftheTwilight
    @DuskoftheTwilight 1 year ago +726

    I studied and work in computer science, and I'm not mad at all about calling machine learning decisions a black box; that's exactly the right thing to call them. Somebody has to understand the base of the program, but once the machine starts making its associations, nobody knows how it's making its decisions. It's a black box.

    • @farmboyjad
      @farmboyjad 1 year ago +73

      Agree. Humans can understand the underlying system that the computer uses to build and refine a model, but the exact set of parameters that the ML algorithm ultimately lands on is so complex and so far removed from any human conception of logic that it may as well be black magic. You can't fix a faulty model by going in and analyzing it or tweaking it by hand, because it's all just numbers without any context or explanation. Huge swaths of research are being done into this exact problem: if we can't feasibly understand how the model is making the decisions it is (and we can't), then how do we build in safeguards and ways of correcting the model when it does something we don't want? That's not trivial.

    • @dannygjk
      @dannygjk 1 year ago +4

      Exactly.

    • @michaeldeakin9492
      @michaeldeakin9492 1 year ago +12

      Andrew Ng had a comment in this vein: ua-cam.com/video/n1ViNeWhC24/v-deo.html
      Nobody knows what SIFT (or a lot of other algorithms hand tuned by thousands of grad students) is doing that works, just that it does.
      I'm concerned that it says our methods of understanding are incapable of scaling to problems we would really like to (need to?) solve in the near future.

    • @dannygjk
      @dannygjk 1 year ago +20

      @@josephvanname3377 A neural network data structure can be ridiculously huge with a convoluted architecture. That is a black box which even a big team of humans could never hope to analyze in a reasonable period of time. The only hope is to develop a neural net system which trains itself to analyze such systems and then translate it into concepts, principles, and ideas that humans can grasp reasonably well. Even that would not be totally satisfactory because bottom line the devil is in the details which still puts it beyond human abilities to fully understand. Our brains just can't cut it in the modern data science world of neural net systems as far as understanding these black boxes is concerned. Even our own brains are black boxes similar to neural net systems.

    • @solidpython4964
      @solidpython4964 1 year ago +3

      Exactly! No AI/ML engineer really knows exactly what all the nodes in their neural net have learned to recognize and why; we just use our algorithms to go in and do the necessary optimization without needing to know what exactly the tiny parts are doing.
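
A sense of scale for why the hand-auditing discussed in this thread is hopeless: even a small fully-connected network has hundreds of thousands of free parameters, each just a context-free number. The layer sizes below are invented (a modest MNIST-scale classifier), but the arithmetic is the standard weight-plus-bias count:

```python
# Count weights + biases in a small multilayer perceptron.
def mlp_param_count(layer_sizes):
    # Each layer contributes (inputs + 1 bias) * outputs parameters.
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

sizes = [784, 512, 256, 10]    # e.g. a modest image classifier
print(mlp_param_count(sizes))  # 535818 individual numbers to "explain"
```

And this is tiny; production language models have parameter counts in the billions, which is the gap between "someone understands the training procedure" and "someone understands this particular decision".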

  • @Krazylegz42
    @Krazylegz42 1 year ago +656

    Now I’m tempted to make a phone app for skin conditions where you take a picture, and it always just says “go see a doctor”. If a person is worried enough about an irregularity on their skin to take a picture and plug it into some random app, it’s probably worth seeing a doctor for regardless lol

    • @rakino4418
      @rakino4418 1 year ago +134

      You can put "Provides 100% reliable advice" in the description

    • @Unsensitive
      @Unsensitive 1 year ago +35

      And your sensitivity is 100%!

    • @DystopiaWithoutNeons
      @DystopiaWithoutNeons 1 year ago +42

      @@rakino4418 99.1%, so you aren't liable in court

    • @miclowgunman1987
      @miclowgunman1987 1 year ago +25

      "You are fine, that will be $2000." - the doctor

    • @pacotaco1246
      @pacotaco1246 1 year ago +20

      Have it use a neural net anyway, but still have it route all output to "go see a doctor"
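
The joke in this thread is statistically sound: a classifier that always outputs "go see a doctor" trivially achieves 100% sensitivity (it never misses a true case); what it sacrifices is specificity, since it also flags every healthy case. A sketch, with the app name and test set invented:

```python
# The "app" from the comment: a constant classifier.
def skin_app(photo):
    return "go see a doctor"   # flag absolutely everything

def sensitivity(cases):
    """Fraction of truly-positive cases that get flagged."""
    positives = [c for c in cases if c["cancerous"]]
    flagged = [c for c in positives if skin_app(c) == "go see a doctor"]
    return len(flagged) / len(positives)

def specificity(cases):
    """Fraction of truly-negative cases that are NOT flagged."""
    negatives = [c for c in cases if not c["cancerous"]]
    cleared = [c for c in negatives if skin_app(c) != "go see a doctor"]
    return len(cleared) / len(negatives)

# Invented test set: any mix of cases gives the same answer.
cases = [{"cancerous": True}] * 7 + [{"cancerous": False}] * 93
print(sensitivity(cases))  # 1.0 -- every real case is caught
print(specificity(cases))  # 0.0 -- every healthy case is flagged too
```

This is also why a headline metric like "99% accurate" means little on its own: the baseline you have to beat depends entirely on which errors matter.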

  • @musicalfringe
    @musicalfringe 6 months ago +72

    The point Angela keeps banging on about - don't use AI to make decisions - is for me the core concern. What I fear is the tendency of people to gradually put these models more and more in the position of autonomous decisionmaker without oversight, not because they're deliberately being irresponsible, but because they think it's cool (and it happens to save a ton of money).

    • @fuckgoogle3335
      @fuckgoogle3335 4 months ago

      It’s already happening to “criminals” in sentencing. :(

    • @doomedwit1010
      @doomedwit1010 3 months ago +2

      That said, in military or some safety contexts you may have to. But I generally agree, and safety software should probably never be a black-box AI.
      Definitions may be an issue. If it's not a black box, to me it's not an AI, but the term will be used to describe those anyway.
      Like, is a Patriot or CIWS on automatic an AI, or just an algorithm? And no civilian plane is going to be flying at your ship at supersonic speeds. But maybe that's a combination of AI and hard-coded guidelines.

    • @musicalfringe
      @musicalfringe 3 months ago +1

      @@doomedwit1010 Agreed. Good thoughts there. Refusing black boxes for safety is the central point.

    • @richardbloemenkamp8532
      @richardbloemenkamp8532 1 month ago

      If it makes or saves money, it will be used. BTW human intelligence has all the flaws of AI so there is nothing to stop it.

  • @incantrix1337
    @incantrix1337 1 year ago +1356

    As I like to say: AI cannot take over my job. Unfortunately it doesn't have to, it just has to convince my boss that it can.

    • @GuerillaBunny
      @GuerillaBunny 10 months ago +124

      Or more likely, some tech bro will do the convincing, and they'll be very convincing indeed, because they're rich, and idiots can't be right, right? ...right?
      And of course... hype men never exaggerate their products. This is just essential oils for men.

    • @SnakebitSTI
      @SnakebitSTI 9 months ago +39

      @@GuerillaBunny "Essential oils for men", AKA beard oil.

    • @aidanm5578
      @aidanm5578 9 months ago +2

      Give it time.

    • @jeffreymartin2010
      @jeffreymartin2010 9 months ago

      Just have to run faster than the bear.

    • @Bingewatchingmediacontent
      @Bingewatchingmediacontent 9 months ago +40

      They tried to replace everyone at my museum job with a kiosk and a website. They hired everyone back when the managers didn’t want to have to spend all of their time fixing all of the garbage mistakes that the kiosk made. That was 15 years ago. I can’t believe we have to do this all over again with AI.

  • @tsawy6
    @tsawy6 1 year ago +432

    My favourite take on the Google employee who made ChatGPT pass the Turing test was "yeah lol, turns out it's really easy to trick a human lmao".

    • @jamieLtaker
      @jamieLtaker 1 year ago +38

      As an AI language model, I must remind you that it's unethical to trick a human, even for the sake of the Turing Test. Try asking something less interesting next time.

    • @HybridHumaan
      @HybridHumaan 1 year ago

      Next video idea: Human intelligence does not exist and we are ruined.

    • @generatoralignmentdevalue
      @generatoralignmentdevalue 1 year ago +24

      Turns out the Turing test is a moving target. ELIZA passed it in its time, but we have better bullshit detectors this century.
      Anyway, I'm pretty sure that Google employee made that chat log as a publicity stunt, to expose what he saw as incoherent company policies about hypothetical hard AI. Of course he was fired. I also saw an interview where he said an intelligent coworker whom he respects disagrees with him about whether it's a person, because they have the same knowledge but different religions. No two people have the same idea about what makes them people, so fair enough.

    • @LogjammerDbaggagecling-qr5ds
      @LogjammerDbaggagecling-qr5ds 1 year ago

      That guy started a religion based around the AI, so he's just batshit crazy.

    • @ludacrisbutler
      @ludacrisbutler 9 months ago

      @@generatoralignmentdevalue Is ELIZA the one that would preface 'conversations' with something like "I'm 13 years old and English is my 2nd language"?
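
For context, ELIZA (1966) "passed" with nothing but ordered pattern-matching and pronoun reflection. A minimal sketch in that spirit; these rules are invented for illustration, not Weizenbaum's original DOCTOR script:

```python
import re

# Reflect first/second person so the echo reads like a reply.
REFLECT = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# (pattern, response-template) pairs, tried in order; last is a catch-all.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
    (r"(.*)", "Please go on."),
]

def reflect(fragment):
    return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

def eliza(sentence):
    text = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(eliza("I need my coffee"))  # Why do you need your coffee?
```

No model of meaning anywhere, just string surgery, which is why "passing" says more about the judge than about the program.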

  • @glitterishhh
    @glitterishhh 1 year ago +469

    my favorite part was the rapid inflation of the price of an OpenAI monthly subscription throughout the length of the video

    • @corniryn
      @corniryn 10 місяців тому +16

      thought i was the only one that noticed..

    • @eddie1975utube
      @eddie1975utube 10 місяців тому +2

@@corniryn I wondered that too.

    • @onigirls
      @onigirls 10 місяців тому

      It's meant to be humorous. @@eddie1975utube

    • @dreamstate5047
      @dreamstate5047 9 місяців тому +18

      open AI becoming closed Ai

    • @brianhopson2072
      @brianhopson2072 9 місяців тому

      You sound like a parrot ​@@dreamstate5047

  • @Ravenflight104
    @Ravenflight104 7 місяців тому +71

My worry is that "garbage" becomes the accepted norm.

  • @Zelgadas
    @Zelgadas 10 місяців тому +603

Here in Louisville, our school district used an AI firm to optimize bus routes and it was, predictably, an unmitigated disaster. Buses were dropping kids off at 9pm. The district had to close down for a week to sort it out.

    • @sp123
      @sp123 10 місяців тому +127

      The real horror is people willing to place all their responsibilities on AI like it's their God

    • @itm1996
      @itm1996 10 місяців тому +21

The real danger is believing that these errors are the machine's fault, to be honest. All of these results come from how humans guided the AI

    • @Zelgadas
      @Zelgadas 10 місяців тому +103

      @@itm1996 No, the real danger is relying on them without questioning or verifying results. Fault has nothing to do with it.

    • @fellinuxvi3541
      @fellinuxvi3541 10 місяців тому

No it's not, it's precisely the machines that are untrustworthy @@itm1996

    • @user-rx2ur5el9p
      @user-rx2ur5el9p 10 місяців тому +78

      ​@@Zelgadas No, the real REAL danger is that companies will do absolutely anything to lay people off, including using dumb "AI" gimmicks that they know won't work. Still cheaper than paying a salary! Whether or not it works doesn't matter!

  • @GiovanniBottaMuteWinter
    @GiovanniBottaMuteWinter Рік тому +415

    I am a software engineer with almost 10 years experience in AI and I agree with all of this. I recommend the book “Weapons of Math Destruction” which is a very prescient book on the topic and how ML is actually dangerous.

    • @TheKnightguard1
      @TheKnightguard1 11 місяців тому +5

      Who is the author? My goodreads search brought up a few similar titles

    • @aleksszukovskis2074
      @aleksszukovskis2074 11 місяців тому +1

      by which author

    • @TheKnightguard1
      @TheKnightguard1 11 місяців тому +2

      @@irrelevant_noob ah, for sure. I had other duties and couldn't venture more than a cursory look before. Thank you

    • @olekbeluga314
      @olekbeluga314 11 місяців тому +4

      I know, right? She knows this subject much better than some coders I know.

    • @GiovanniBottaMuteWinter
      @GiovanniBottaMuteWinter 11 місяців тому +12

      @@TheKnightguard1 Cathy O’Neil

  • @nefariousyawn
    @nefariousyawn Рік тому +103

Sorry I don't have real money for patreon, but somehow I have a google play balance, so I will give you some. As a lay person with a hobbyist's interest in science and tech, I thoroughly enjoyed this, and you made great points that I hadn't considered. Machine learning algorithms might not take my job, but they will give employers/shareholders a reason to make it pay less, just like all the other tools that have enabled my job to be done more efficiently over the decades.
    There is an episode of the Muppet Babies that parodies TNG, but they also squish the other big sci-fi franchises of the time into the same episode.

    • @nefariousyawn
      @nefariousyawn Рік тому +19

      Also got a kick out of raising the monthly subscription cost of Chatgpt every time it was mentioned.

    • @acollierastro
      @acollierastro  Рік тому +14

      > There is an episode of the Muppet Babies that parodies TNG,
      Where has this been all my life?!?!

    • @nefariousyawn
      @nefariousyawn Рік тому +2

      @@josephvanname3377 if you want to donate to this channel, then convert some of your crypto into a fiat currency and then do so.

    • @nefariousyawn
      @nefariousyawn Рік тому +2

      @@josephvanname3377 I know this conversation isn't likely to go anywhere productive, so I'll let you have the last word. What you just told me sounds like you can't use your crypto because it's worthless. A currency is only a currency when it can be exchanged for goods and services. Take care.

  • @FOF275
    @FOF275 6 місяців тому +38

    33:10 It's honestly so annoying how Google keeps forcing garbage AI results during image searches. It makes the process of searching for art references way more difficult than it has to be
    It even throws them in when you haven't typed "AI" at all

    • @zwerne42
      @zwerne42 4 місяці тому +2

      What would you expect to get from this advertising platform while AI companies are so eager to push their products on you?

    • @KobleKongen
      @KobleKongen 2 місяці тому

Just add "-ai" after your query in Google, and you get the good ol' web results without the "AI" fluff. (They still have the function, they just buried it a few menu clicks deep)

  • @ar_xiv
    @ar_xiv Рік тому +225

I remember an anecdote about machine learning that my uncle told me years ago before it was buzzy. The military took a bunch of aerial photos of a forested area, and then hid tanks in the forested area and took the same photos again, in an attempt to just let the computer figure out which photos had tanks or not. This worked within this data set, like if you left some photos out, the program would still be able to figure it out, but given a different set, it totally failed. Why? Because the program had actually figured out a way to discern whether the aerial photo was taken in the morning or in the afternoon. Nothing to do with hidden tanks.

    • @Deipnosophist_the_Gastronomer
      @Deipnosophist_the_Gastronomer Рік тому

      👍

    • @LaughingBat
      @LaughingBat Рік тому +21

      I wish I had heard this story back when I was teaching. It's a great example.

    • @flyinglack
      @flyinglack Рік тому +33

      the classical problem of over-fitting. good at the training set, not the job.

    • @wyrmh0le
      @wyrmh0le Рік тому +20

      That's a good one! Here's another:
someone used machine learning to program the logic of an FPGA to do some task, and it worked, but when he looked at the design there was a bunch of disconnected logic. So he deleted that from the design, thinking random heuristics were random. It stopped working. Turned out the AI had created a complex analog circuit in what was *supposed* to be strictly digital circuitry. Digital is good because it's tolerant of variances in temperature, power supply, and the manufacturing process itself. But the AI has no idea what any of that is.

    • @gcewing
      @gcewing Рік тому +2

      @@wyrmh0le I don't think that was machine learning, it was a genetic algorithm -- it would generate random designs, test them, pick the best performing ones and create variations of them, etc. Importantly, the designs were being evaluated by running them on real hardware. If a digital simulation had been used instead, the result would have been more reliable.
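The over-fitting failure in this thread can be sketched in a few lines of NumPy. This is a hypothetical toy model, not the actual military experiment: "brightness" stands in for time of day (the confound), "texture" for the weak genuine tank signal, and every number here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Labels: does this "photo" contain a tank?
has_tank = rng.integers(0, 2, n)

# Confound: in the flawed training set, every tank photo was shot in the
# morning, so brightness is almost perfectly correlated with the label...
brightness = has_tank + rng.normal(0.0, 0.1, n)
# ...while the genuine signal (tank texture) is weak and noisy.
texture = 0.3 * has_tank + rng.normal(0.0, 1.0, n)

# Fit a linear model (a stand-in for any learner) by least squares.
A = np.column_stack([brightness, texture, np.ones(n)])
w, *_ = np.linalg.lstsq(A, has_tank, rcond=None)
train_acc = np.mean((A @ w > 0.5) == has_tank)

# New photos taken at a different time of day: the confound disappears,
# but the real tank signal is unchanged.
brightness2 = rng.normal(0.5, 0.1, n)  # now independent of the label
texture2 = 0.3 * has_tank + rng.normal(0.0, 1.0, n)
A2 = np.column_stack([brightness2, texture2, np.ones(n)])
test_acc = np.mean((A2 @ w > 0.5) == has_tank)

print(f"accuracy on the confounded training set: {train_acc:.2f}")
print(f"accuracy once the confound is gone:      {test_acc:.2f}")
```

With the confound present the fit looks near-perfect; remove it and accuracy collapses toward a coin flip, because the model learned "morning vs. afternoon", not "tank vs. no tank".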

  • @Ir0nFrog
    @Ir0nFrog Рік тому +539

It’s a minor point, but I really like how the price doubled every time you mentioned how much they paid per month for their AI tool. It tickled me good.

    • @scalabrin2001
      @scalabrin2001 Рік тому +6

      We are friends now

    • @davidbrisbane7206
      @davidbrisbane7206 Рік тому +3

      Actually, Chat GPT 3.5 is free 😂😂🤣🤣

    • @amenetaka2419
      @amenetaka2419 11 місяців тому

@@davidbrisbane7206 and also not very useful

    • @barry5
      @barry5 10 місяців тому

      No.
      gpt-3.5-turbo-1106: $0.0010 / 1K tokens
gpt-3.5-turbo-instruct: $0.0015 / 1K tokens @@davidbrisbane7206

    • @IcePhoenixMusician
      @IcePhoenixMusician 10 місяців тому

      That made me suspicious personally. Regardless, the points she made are important

  • @Hailfire08
    @Hailfire08 Рік тому +228

I've seen people saying "just ask ChatGPT" as if it's a search engine, and, just, _ugh_. It's like those puzzles about the person that always lies and the person that always tells the truth, except this one does fifty-fifty and you can't figure out which half is good and which isn't without just doing the research you were trying to avoid by asking it in the first place. And then some people just believe it because it's a computer and computers are always right

    • @chrisoman87
      @chrisoman87 Рік тому

      Well there's a large body of work called RAG (Retrieval Augmented Generation) does a pretty good job (an example is Perplexity AI's search engine) @@godlyvex5543

    • @AthAthanasius
      @AthAthanasius Рік тому

      I keep hearing about Google (search) increasing shittification. Giving obviously ML-generated, and really bad, summary 'results' up the top.
      I wouldn't know, I use DuckDuckGo (so, yeah, based on Bing), and so far it's still returning actual URLs and site snippets. Yes, I know, eventually enough sites will be full of ML-generated shit that this will also be awful.

    • @kaylaures720
      @kaylaures720 Рік тому +6

I put a homework question in it and got an incorrect answer (I was just trying to check my work, so I knew ChatGPT was actually the one that was wrong). It was an accounting assignment. ChatGPT managed to fuck up the MATH. Like--I narrowed down the issue to a multiplication error, the one thing a computer SHOULDN'T mess up. Real AI is a looooooong way off still.

    • @Dext3rM0rg4n
      @Dext3rM0rg4n Рік тому +6

I asked ChatGPT to give me 10 fun facts, and one of them was that the Great Wall of China was so long it could circle the earth twice!
Like I can understand AI being wrong if you ask it questions on a really complicated topic with a low amount of data, but finding 10 real fun facts really shouldn't be that hard.
There's just something that makes them lie for no reason sometimes, so yeah, I agree they're a terrible alternative to Google.

    • @quantumblur_3145
      @quantumblur_3145 Рік тому +12

      ​@@Dext3rM0rg4nit's not "lying," that implies an understanding of truth and a conscious decision to say something false instead.

  • @DrunkenUFOPilot
    @DrunkenUFOPilot 5 місяців тому +42

    "Headlines are not science" needs to be on t-shirts and bumper stickers!

  • @superwild1
    @superwild1 Рік тому +453

    As a professional programmer people ask me if I'm worried about being replaced by "AI."
    My usual response is that there were people in the 60s that thought that programming languages were going to replace programmers, because you could just tell the computer what to do in "natural language."

    • @sciencedude22
      @sciencedude22 Рік тому +106

      Yeah business people made their own programming language so they could make their systems instead of needing programmers. You know, COBOL. The thing from the 60s that no one wants to program in unless you pay them way too much money. Turns out programming with "natural language" is actually the most unnatural thing to understand. (I know you know this. I wrote this comment for non-programmers.)
      EDIT, 1 year later: I feel like I owe an apology to Grace Hopper. I just went and read her biography and wow, I feel really bad about things I've said about COBOL, now that I understand the context of it all. I was angry that COBOL is hard to program in relative to modern languages like Go, but now I understand that's like getting mad at Algol just because it's old.
      Also, Grace Hopper invented compilers and linkers! I've never even made a DSL, much less a whole compiled language. Clearly I didn't know what I was talking about when I first wrote this comment. I'm sorry, and I take back everything I said.

    • @dthe3
      @dthe3 Рік тому +29

@@sciencedude22 So true. I'm so tired of explaining to my non-computer friends that I am not in danger of losing my job.

    • @lkyuvsad
      @lkyuvsad Рік тому +66

      This. Natural language is a terrible way to specify any system solving a problem with one right answer. We create enough bugs in precise, formal languages. Let alone something as imprecise as English.

    • @CineSoar
      @CineSoar Рік тому +82

      @@lkyuvsad "...Bring home a loaf of bread. And, if they have eggs, bring home a dozen."

    • @peterwilson8039
      @peterwilson8039 Рік тому +8

      @@lkyuvsad But we need something hugely better than Google for finding the results of moderately complex queries, such as "Prior to 2021 how many left-handed major league baseball players hit more than 50 home runs in a single season?" I don't want you to tell me that I have to write an SQL script to run this query, and in fact ChatGPT handles it beautifully.
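The grocery-store joke in this thread is a compact example of why natural language fails as a specification. Spelled out as code (purely illustrative, both functions are invented), the two parses of "bring home a loaf of bread, and if they have eggs, bring home a dozen" yield different shopping lists:

```python
def literal_reading(store_has_eggs: bool) -> dict:
    # Parse 1: "a dozen" refers back to bread.
    # Result: twelve loaves of bread, no eggs.
    return {"bread": 12 if store_has_eggs else 1, "eggs": 0}

def intended_reading(store_has_eggs: bool) -> dict:
    # Parse 2 (what the speaker meant): one loaf, plus a dozen eggs.
    return {"bread": 1, "eggs": 12 if store_has_eggs else 0}

print(literal_reading(True))   # {'bread': 12, 'eggs': 0}
print(intended_reading(True))  # {'bread': 1, 'eggs': 12}
```

Both readings are grammatical English; only a formal language pins down which one the program should implement.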

  • @reillyhughcox9560
    @reillyhughcox9560 Рік тому +281

It’d be funny if a professor/teacher made an assignment where you have to fact-check an AI-generated paper to show how stupid it can be while forcing the students to verify and learn the knowledge lol

    • @wistfulthinker8801
      @wistfulthinker8801 Рік тому +48

Something similar is already implemented at some colleges. The writing assignment is to start out with an AI-generated essay and change it into a better essay. The grade is based on the improvement.

    • @raypragman9559
      @raypragman9559 Рік тому +29

      we did this in a class this past semester!! it was actually a great assignment. our entire class came up with questions to ask chat GPT, then voted on which one we should ask it. we then had to edit and correct the response it gave to the question we asked it

    • @SPAMLiberationArmy
      @SPAMLiberationArmy Рік тому +13

      I've thought about doing this in a psych class but I'm concerned that due to source confusion students might later mix up what the AI said and course material.

    • @acollierastro
      @acollierastro  Рік тому +75

      I didn't go into it too much in the video but I do think as described Sophie met the terms of the assignment and would get an A. She looked up and learned all the information and produced a paper. I think blank paper paralysis has a huge negative effect on confidence (which in turn has a negative effect on higher education outcomes.)

    • @Zeltalu
      @Zeltalu Рік тому +3

      It'll be so funny when these deniers get replaced 😂

  • @daviddelille1443
    @daviddelille1443 Рік тому +168

    Another good example of machine learning tools "learning the wrong thing" is a skin cancer detector that would mark a picture of a skin lesion as cancerous if it contained a ruler, because the training pictures of real skin cancer were more likely to have rulers in them.
    Big "never pick C" vibes.

  • @MeeraReads
    @MeeraReads 5 місяців тому +26

    This comes up in employment discrimination! The computer might figure out that people who live close to the office are more likely to stay on longer and recommend a list of employees based on zip code. Except with the US’s history of redlining, this would result in discriminatory hiring, since so many cities and zip codes are still segregated.
    In cases of race discrimination, intent doesn’t matter, a uniform discriminatory outcome is treated as a violation regardless, but this could have huge implications for things like gender, disability and age discrimination in the workplace, since it’s a black box and a judge can’t listen to a computer testify in court
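The zip-code proxy effect described above can be sketched with a toy simulation. Everything here is hypothetical and invented for illustration: a "group-blind" retention rule is learned from zip code alone, yet because zip code is a near-perfect proxy for the protected group, the rule reproduces the group bias almost exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Protected attribute: deliberately EXCLUDED from the model's features.
group = rng.integers(0, 2, n)

# Redlining legacy: zip code matches the protected group 95% of the time.
zip_code = group.copy()
flip = rng.random(n) < 0.05
zip_code[flip] = 1 - zip_code[flip]

# Historical outcome ("stayed 2+ years"), itself biased by group.
stayed = (rng.random(n) < np.where(group == 1, 0.7, 0.4)).astype(int)

# "Learn" a group-blind rule: recommend applicants from the zip code
# whose past hires stayed longest.
stay_rate = {z: stayed[zip_code == z].mean() for z in (0, 1)}
best_zip = max(stay_rate, key=stay_rate.get)
recommend = zip_code == best_zip

# Recommendation rate per protected group: the bias passes straight through.
rate_g1 = recommend[group == 1].mean()
rate_g0 = recommend[group == 0].mean()
print(f"recommended from group 1: {rate_g1:.0%}")  # ~95%
print(f"recommended from group 0: {rate_g0:.0%}")  # ~5%
```

The model never sees the protected attribute, yet its recommendations come out almost perfectly segregated, which is why "we didn't use race as a feature" is no defense against disparate impact.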

  • @Not-Fuji
    @Not-Fuji Рік тому +255

    That anecdote about translators and contractors makes me chuckle a little. I work as an illustrator for a company that's trying very hard to replace me with an AI. So far, it's cost them about 10-20x my meager salary between hiring ML 'experts' and server upkeep, and all of our projects have been stalled for months because just none of the AI that was expected to fill the gaps actually works. But, as much schadenfreude as I get watching them dig themselves into a hole, it's very worrying that they just keep trying. It's worrying that even if it doesn't work, even if the output is garbage or it's expensive, we're still going to be stuck with this crap for the foreseeable future. Just because of the aesthetics of 'built with AI'. I really hope this is the death knell of influencer-capitalism, but something tells me it'll just keep getting worse.

    • @manudosde
      @manudosde Рік тому +24

      As a freelance translator, I feel your pain/schadenfreude.

    • @dunsparce4prez560
      @dunsparce4prez560 Рік тому +24

      I love the phrase “influencer-capitalism”. I know exactly what you’re talking about.

    • @ronald3836
      @ronald3836 Рік тому

      If it doesn't work, then thanks to capitalism your company will go belly up and another company not making the same mistakes will take over.
      Capitalism does not protect companies. It is there to remove inefficient companies from the economy.

    • @ronald3836
      @ronald3836 Рік тому

      @@manudosde is it true that translation fees have halved?

    • @marwood107
      @marwood107 Рік тому +39

      Your employer might be interested to know that AI generated images are not eligible for copyright registration in the US, in a decision from Feb/Mar 2023. (Original comment got ate, I assume because I tried to link to an article about it here.) It's possible to get around this by having a human alter the image in photoshop, and I assume that's where this is going to end up, but so far every vendor who has tried to sell me this stuff didn't know about this decision so I have to assume they're not very smart and/or huffing their own farts.

  • @NonsenseOblige
    @NonsenseOblige Рік тому +427

In Brazil, University of São Paulo, we have the Spira project, which attempts to identify lung insufficiency based on speech recordings. One of the issues that came up is that in the data set, all the patients with lung insufficiency were in the hospital (obviously), and most of the control group was recording from home, so the AI trained to identify it kept interpreting the beeping of heart monitors and the sound of machines and people talking in the background as lung insufficiency, and silence as healthy lungs.
    Turns out an AI can't do a phoneticist's job.

    • @justalonelypoteto
      @justalonelypoteto 11 місяців тому +33

      fwiw it seems like "AI" is just the dumb way to program, i.e. if something is getting complex let's just throw a bunch of data at a metric fuckton of intel xeons sitting in a desert somewhere for a few months and wait until a passable thing comes out the other end that sort of works sometimes but nobody understands how, so it's completely unfixable without just rerunning the training sequence. It's only as good as its data, obviously, and for anything that's not on the level of speech or image/pattern recognition I frankly think it's often just the fever dream of some exec who thinks big data and some big processors are a viable replacement for hiring a dev team

    • @mtarek2005
      @mtarek2005 11 місяців тому +30

this is a problem of bad data, since AI cares about everything while a human can ignore stuff, so you need to clean up the data or get better data

    • @lasagnajohn
      @lasagnajohn 11 місяців тому +7

      You didn't see that coming? No wonder Brazil can't get into space.

    • @fredesch3158
      @fredesch3158 11 місяців тому +17

      ​@@lasagnajohnYou're talking like you'd notice lol, care to share some of your work with us?

    • @fredesch3158
      @fredesch3158 11 місяців тому +24

@@lasagnajohn And not only that, but dermatologists tried to make an app to detect melanomas and ended up making an app that flagged photos containing rulers as melanoma (you can read about it in "Artificial Intelligence in Dermatology: Challenges and Perspectives"). This is a common problem with machine learning solutions. You talk a lot for someone who hasn't done any work, and apparently doesn't know common mistakes in this area.

  • @SkyLake86
    @SkyLake86 Рік тому +371

    I like how every time she mentions the price of ChatGPT it keeps getting higher lol

    • @ninadgadre3934
      @ninadgadre3934 Рік тому +39

      I’m kinda worried that her future videos are gonna self censor some of this brutally honest criticism of existing brands and services because her channel is becoming big really quickly and soon will rustle a few feathers. I hope it never comes to it!

    • @rakino4418
      @rakino4418 Рік тому +73

      ​@@ninadgadre3934the key is - she has a career. She already has academic publications. If she cared about ruffling feathers she would have already been self censoring

    • @mybuddyphil8719
      @mybuddyphil8719 Рік тому +35

      She's just keeping up with inflation

    • @msp26
      @msp26 Рік тому +9

      It's a good video otherwise but this point is weird. I don't agree that language models will get more expensive for the average user to access.
      -3.5(Turbo) is super cheap via API
      -shtloads of money is being pumped into this domain and companies will compete on price
      -OpenAI doesn't have a monopoly on the tech. You can download plenty of open source models yourself and run them
      -compute gets more powerful over time and more optimisations will be made

    • @row4hb
      @row4hb Рік тому +16

      @@msp26those investors will be looking for a financial return - usage won’t make it cheaper.

  • @JoelSemar
    @JoelSemar 8 місяців тому +79

    As a software engineer of over 10 years I thank you from the bottom of my heart for making this video. Also your rant about "making me look at this lame shit" was easily one of your best.
    ..I only said "Eh..is that how we are explaining that?" a few times 🤣😉

    • @Colddirector
      @Colddirector 5 місяців тому +1

      ChatGPT's kinda useful if you have to work with a language/framework you're totally unfamiliar with because it can give you a starting point with whatever you're trying to write, but I've never seen it produce usable code for anything more than like a fizzbuzz python script.

    • @vhfmag
      @vhfmag 5 місяців тому +4

I just wanted to say that first I saw your comment, then I watched her Gell-Mann amnesia video and I just had to pause because I was like "oh shit, it's a reference!"

    • @JoelSemar
      @JoelSemar 5 місяців тому +1

      @@vhfmag so relieved to find that my comment wasn't completely misunderstood

    • @davidmarshall2399
      @davidmarshall2399 5 місяців тому +1

      ​@@JoelSemarit's fine

  • @Dent42
    @Dent42 Рік тому +464

    As someone studying machine learning / natural language processing, I’m surprised you didn’t mention ML tools having been used to wrongfully arrest multiple people (all of the victims I’m aware of were people of color). These missteps are why ethics and diversity in data are strongly emphasized in my program, but there’s always room for improvement!

    • @acollierastro
      @acollierastro  Рік тому +185

      I didn't mention that because I didn't know that. That's awful.
      I am glad people are talking about it in academia but I am not sure the DEI efforts will cross over into the industry sector for a long time.

    • @markosluga5797
      @markosluga5797 Рік тому

      Less bad but another example is the ecommerce giant building a hiring AI that only hired white male IT professionals.

    • @chalkchalkson5639
      @chalkchalkson5639 Рік тому +28

      @@acollierastro There is also a really famous paper where they show that for a specific dataset for sentencing color blindness and race neutral sentencing were mutually exclusive. Apparently this question was studied as a defence when a tool they developed for courts to use turned out to produce racist outcomes. But color blind input data was part of the requirements they were given, so after showing that those two things were mutually exclusive they were off the hook.

    • @NickC84
      @NickC84 Рік тому

      Even the damn machines are racist

    • @TheCytosis
      @TheCytosis Рік тому +41

      @@acollierastro It's real bad out there.Google fired both heads of their ethical AI team a few months ago for publishing a paper on biases and flaws regarding minorities

  • @tehbertl7926
    @tehbertl7926 Рік тому +415

    Came for the AI insights, stayed for the TNG muppet crossover.

    • @DouwedeJong
      @DouwedeJong Рік тому +4

      i am hanging on..... for the muppet

    • @TheGreatSteve
      @TheGreatSteve Рік тому +9

      Pigs in Space!!!!

    • @dapha1623
      @dapha1623 Рік тому +15

I really didn't expect a video about AI to have a TNG Muppet crossover discussion as its closing, but I very much welcome it

    • @bbgun061
      @bbgun061 Рік тому +3

      I loved the idea but obviously it won't have human actors, we'll just use AI to generate them...

    • @MusicFillsTheQuiet
      @MusicFillsTheQuiet Рік тому +6

      The casting was spot on. Wouldn't change a thing. I'm trying to figure out who would Q be....

  • @Crosscreekone
    @Crosscreekone Рік тому +337

    When I was in the middle of my career as a naval officer, the Navy finally started using collision avoidance systems. My junior officers, of course, felt they no longer needed trigonometry and/or maneuvering board skills (like a specialized slide rule with graphic representation that mariners use to keep from going crunch). It took a catastrophic loss of the system at night in the middle of a huge formation for me to convince these scared-shitless “kids” that they still needed to be able to do the math. The same applies to lots of other tools of convenience that we rely on-we still need to know how to do the math, or we’d better know how to swim.

    • @lhpl
      @lhpl Рік тому +18

      You should know how to swim even if you understand trigonometry. I suspect there are plenty of scenarios that would require you to swim, and can't be avoided just by knowing trigonometry.

    • @ThatTallBrendan
      @ThatTallBrendan Рік тому +16

      ​@@lhpl As literal Jesus I can confirm that trigonometry is what allowed me to do all of it. I can't even get wet.

    • @treeaboo
      @treeaboo Рік тому +12

      @@ThatTallBrendanWith the power of trig Jesus became hydrophobic!

    • @jooot_6850
      @jooot_6850 Рік тому +3

      @@ThatTallBrendanTriangles, son!
      They harden in response to physical trauma! You can’t hurt me, Jack!

    • @RoamingAdhocrat
      @RoamingAdhocrat Рік тому

      I'd really like to know more about specialised anti-collision slide rules

  • @sprlilaznboi
    @sprlilaznboi 6 місяців тому +30

    This just makes it even more horrifying when I see articles saying that Israel is using machine learning to select human targets for bombings.

    • @carultch
      @carultch 5 місяців тому +12

      This has "I was only following orders" written all over it. The difference is, the orders aren't even coming from a human you can put on trial.

  • @Overt_Erre
    @Overt_Erre Рік тому +87

    We need to be saying it now. "AI" will be used as a way to remove responsibility from entire categories. And no one will be willing to take the responsibility for it back from them. Everyone will want high pay-low responsibility jobs like designing more machine algorithms, so who will be responsible for all the problems? We're essentially creating a mad "mechanical nature" to which humans will have to adapt, instead of the world being adapted for humans...

    • @aniksamiurrahman6365
      @aniksamiurrahman6365 Рік тому

      Bye bye civilization.

    • @thornnorton5953
      @thornnorton5953 11 місяців тому +1

@@fuzzfuzz4234 the heck? No. It's not.

    • @SuperGoodMush
      @SuperGoodMush 10 місяців тому

      ​@@fuzzfuzz4234 i certainly hope so

    • @Rik77
      @Rik77 9 місяців тому +1

That already happens now. Managers blame the IT system for a model that outputs a value they don't like, when it isn't the IT system itself, it's the model that they don't like. But that's why, often in finance, people work hard to keep those kinds of reactions in check. Systems and models are tools to be used in decision making, not decision makers themselves. But managers do love to just default to a system if they can. We mustn't let people absolve themselves of accountability.

    • @aporue5893
      @aporue5893 9 місяців тому +1

      the crazy thing is that an ai or bot can copy your entire comment you just made and pretend to be you........ 😮

  • @satellitesahara6248
    @satellitesahara6248 Рік тому +67

    I'm a compsci graduate working in tech at a moment where every new "hype" topic in tech is some new infuriating scam or something that is being completely misrepresented to the public and watching this video was so healing

  • @yuu34567
    @yuu34567 11 місяців тому +182

    A note about mushrooms -- some edible species are nearly identical to toxic ones, as I'm sure most people have heard about. The yellow-staining mushroom (A. xanthodermus) can look virtually identical to field mushrooms, with the distinguishing feature being how it goes yellow when damaged. Fully intact yellow-stainers look just like field mushrooms -- the best way to check is to scratch at the skin to see if it turns yellow. An AI tool will not know this unless a human specifically makes note of it.
    Another example: wavy caps (P. cyanescens) and funeral bells (G. marginata); two mushrooms very similar in appearance, but one can give you a good time and the other will kill you.
Specifically in Australia, we have P. subaeruginosa (often called 'subs') and funeral bells, the former being a psilocybe (like cyanescens, both are psychoactive). They don't look similar as adults, but in the younger stages they can look almost identical. I've mistakenly picked them before. The worst part is that they can grow in the same patch, right next to subaeruginosa, but again an AI would not tell you that. A 100% accurate way to check is to wait for the mushrooms to go blue once picked. But when they're growing outside you can't always tell the difference.
    Like you said, a plant-identifying AI would be really helpful as a jumping-off point. There are so many mushroom species and so many that look vaguely alike, so even narrowing down possibilities is overwhelming if you're new to it.
    Cool trivia, funeral bells are full of amatoxins, which are the same compounds found in death caps.

    • @EscapeePrisoner
      @EscapeePrisoner 10 місяців тому +13

      Dude! You just solved a mystery for me. I ate the yellow staining mushroom. I was so convinced I had the field mushroom, not knowing the existence of a yellow staining species. For anyone interested that's Agaricus xanthodermus. And it's considered good etiquette to use the full name in public forums instead of assuming everyone understands your jargon. Abbreviations are best used AFTER you have shown that which is being abbreviated. Otherwise how do we know if you are talking about Agaricus, Amanita, Armillaria, or Auricularia? I mean, you can see how that might lead to trouble...right? With respect. Thanks for solving the mystery.

    • @yuu34567
      @yuu34567 10 місяців тому +17

      @@EscapeePrisoner oh hey, I'm so glad I helped!!! I can imagine the experience of eating one of those is pretty unpleasant 😭 but I really appreciate that people like my mushroom comment on this AI video ahh
      and thanks for the feedback!! I'll be more mindful next time. I left out the common names for some of them because they have multiple or they're used for multiple species, but I should have put them in anyway.

    • @liesdamnlies3372
      @liesdamnlies3372 7 місяців тому +5

      …yeah I think I’ll just leave any mushroom-picking to people with experience. Like actual mycologists.

    • @muzzletov
      @muzzletov 6 months ago

      it will know, since you already trained for both. if you didn't, then your set is flawed anyway. and you should end up with VERY similar probabilities for both.

    • @Cosmic-P.-Lotl
      @Cosmic-P.-Lotl 3 months ago

      This aged poorly :/

  • @GameUnCrafter
    @GameUnCrafter 5 months ago +11

    I work in quality control at a pharmaceutical company and wondered if AI would replace me. But the thing is, a robot checking a robot checking a robot is probably not great for pharmaceuticals, and thankfully the FDA is in agreement.

    • @dingo4229
      @dingo4229 4 months ago +2

      You won't be replaced, but I'd still advance your skills and gain an understanding of how ML is used. I'm a software dev who used to work in pharma, and I can tell you without a shadow of a doubt there will be new parts of 21 CFR regarding ML models. These tools are absolutely coming in the regulatory space.

  • @helloworldprog7372
    @helloworldprog7372 1 year ago +321

    This exact thing is happening in programming where people are like "wow coders are going to lose their jobs, we don't need programmers anymore" but like "AI" just vomits out garbage unoptimised code that a programmer would then need to fix.

    • @vasiliigulevich9202
      @vasiliigulevich9202 1 year ago +62

      The programmers who are gonna lose jobs are underpaid interns. Do not forget - it is basically their job to produce unoptimized code that requires supervision.

    • @fartface8918
      @fartface8918 1 year ago +44

      @@vasiliigulevich9202 yeah, but what happens forty years from now, when the people fixing things have retired or died and not enough people can afford to enter the industry to replace them, because entry-level jobs have been automated away enough to bottleneck gaining real experience and something to put on a resume? Especially when you consider that AI's code is significantly lower quality than a human's and is incapable of improving based on in-the-moment context.

    • @vasiliigulevich9202
      @vasiliigulevich9202 1 year ago +56

      @@fartface8918 that would be a problem for future management of some future companies. Current hiring decisions are optimized to benefit current management of any given company. Welcome to capitalism.

    • @fartface8918
      @fartface8918 1 year ago +3

      @@vasiliigulevich9202 horrid

    • @yourdreams2440
      @yourdreams2440 1 year ago +3

      @@vasiliigulevich9202 What do you mean, "welcome to capitalism"? Do you expect companies to tell the future?

  • @EphemeralTao
    @EphemeralTao 1 year ago +117

    One thing I am already seeing in multiple industries (including my own, which is kinda frightening) is the increased use of machine-learning tools to replace workers for certain very specific contexts. Language translation is one of them, specifically for technical manuals. There's already a problem of "Engrish" -- badly translated, clumsy, and confusing English translations -- in tech manuals, and the industry was perfectly willing to simply accept that as the norm for decades. These machine-translation tools will produce manuals with about the same or slightly worse quality, and management is perfectly happy to accept that as close enough to the norm as long as they don't have to pay people for better quality translations.
    And that's the real problem of these "AI" machine-learning tools, being "Good Enough". Not that they'll replace us by doing our jobs as well as we do, because that's a long way off if it ever happens; but that the capitalists that own and use these tools will consider their work "good enough" to replace workers; that they'll consider the drop in quality to be adequately balanced by not having to pay humans to do the job anymore. That's why we've seen such a decline of, for lack of a better term, "quality control" in so many aspects of so many industries: capitalist owners and their lackeys accepting lower and lower levels of "good enough" as long as they can keep shoveling more money into their pockets; even if that results in failed businesses in the long term. Because that is what it's all about, prioritizing short-term gains over long-term viability.
    Also, The Muppets are the greatest thing ever, and now I'm going to have "Johnny We Hardly Knew Ye" stuck in my head for a week.

    • @hagoryopi2101
      @hagoryopi2101 1 year ago +5

      Prioritization of short-term gains is not unique to capitalism. The power of the people to hold idiots accountable for doing such stupid things, however, is unique to the right to privately own your property and therefore to give explicit consent before you have to hand any of it over to the people you think might waste it.
      If the people prioritizing short-term gains are doing it with your tax dollars, which you legally cannot stop paying them (something people in tax-funded services do constantly yet never get called out for, and which they will absolutely begin doing with machine learning, too, once people young enough to know about it start getting elected), good luck getting them to stop!

    • @EphemeralTao
      @EphemeralTao 1 year ago +21

      @@hagoryopi2101 Erm, no, that doesn't make sense. The prioritization of short-term gains may not be unique to capitalism, but it's certainly orders of magnitude worse under a capitalist system, since no other system has anything like the economic pressure to do so. Prioritizing short-term gains is predominantly the effect of corporate business structures, limited liability corporations, and emphasizing unsustainable growth over long-term stability.
      Tax dollars have nothing to do with "short-term gains", since that's an effect of commerce, not social programs. The abuse of tax funded programs is an entirely different and unrelated issue. The biggest abusers of public tax dollars are megacorporations, through tax write-offs, regulatory loopholes, and outright fraud as we saw with the Covid business incentive payments. Big businesses depend heavily on local public infrastructure without paying their share, or often anything, into its construction or maintenance.
      Also, private property ownership has nothing to do with voting; hundreds of thousands of people in the US own property and are still disenfranchised by state laws, regulations, and polling restrictions. Accountability, in both the public and private sector, is created through democratic processes, legal processes, and regulatory agencies. The current lack of accountability in the private sector is the result of regulatory power being gutted within the last four decades, and a lack of legislative will to restore it.
      This is all just mindless libertarian propaganda with no connection to reality.

    • @hagoryopi2101
      @hagoryopi2101 1 year ago +1

      @@EphemeralTao tax money is income. The transaction of tax money for social programs is commerce. They want more income than spending, and they want to use social programs in ways which convince us they deserve more tax income (regardless of whether they deliver on their promises). That's the same economic pressure which corporations are under. The only difference is that we don't have the legal right to consent for whether we give them money or not, so they're unaccountable.
      Yes, the biggest abusers of tax money are corporations. Because it's there to capitalize on, because we can't consent to giving it to the government like we can giving it to them directly, and because they have the most power to lobby for it. That is a natural consequence of the existence of those programs, which can't just be regulated away because they will find every loophole and underground method to get the candidates who favor them into power and the regulations which favor them into law, because they have the most power to make that happen. As long as we don't have the power to consent to giving that money away, they will have first dibs on it; if we did have that power, they would have literally nobody else to answer to but us, because we would control their money.
      Democracy is no substitute for the power to threaten their bottom line. The fundamental problems remain regardless of who is in power, it's slow and bureaucratic to fix these problems by design, and the massive majority of candidates are part of the same club. Giving them more power to largely do the same they always have won't make things better.
      Circling back to AI, they will absolutely use machine learning in lazy ways which will hurt us. Several people have already been falsely arrested based on AI-driven facial recognition. Lawyers have already tried to use AI to write their court documents for them. Corporations are already hard at work lobbying for regulation to keep us from benefitting from machine learning, to make sure only they can benefit. There will probably be even more creative abuses of AI as time goes on, some of which are probably already happening without our knowledge: I can imagine AI-written legal code, AI-scraping personal information from the web to enhance federal surveillance, use of AI facial recognition to distribute fines for petty crimes without any human input at all which will be nearly impossible to dispute without spending more money than the fine anyways, offering AI public defenders instead of human ones, and so much more! And because we can't threaten their bottom line, and because democracy only lets us vote for 2-4 different flavors of the same crap in the majority of elections at any level rather than making meaningful change, the government will be virtually unaccountable.

    • @aapocalypseArisen
      @aapocalypseArisen 1 year ago +1

      less work is good for humanity
      it is the systems and societies we live in that make it existentially concerning
      utopia and dystopia are a very thin line sadly

    • @shrimpdance4761
      @shrimpdance4761 5 months ago

      Check out Blood in the Machine, a tech newsletter/substack by Brian Merchant. He made the exact same point about good enough being good enough.

  • @looc546
    @looc546 1 year ago +128

    every prediction for the entire 10 minute segment is incredible and will definitely happen. looking forward to the 15th anniversary of this video

    • @CineSoar
      @CineSoar 1 year ago +24

      Humans driven into harder work, for less pay, while computers move into art, literature, and music, certainly wasn't the future most futurists were predicting 20 years ago.

    • @looc546
      @looc546 1 year ago +6

      @@CineSoar we will have to leave both art and work to the machines, then have to see what it's like either to become truly Free, or really Helpless

    • @Giganfan2k1
      @Giganfan2k1 1 year ago

      On the fifteenth anniversary we might get Muppet TNG

    • @felixsaparelli8785
      @felixsaparelli8785 1 year ago +5

      You have incredible optimism that we're not going to speed run the entire set in like two years.

    • @J-Johna-Jameson
      @J-Johna-Jameson 20 days ago

      @@looc546 Why would we give up art to the machines? Making art is the whole point; why would we automate it away?

  • @congeedaily
    @congeedaily 4 months ago +7

    The data bias is a huge issue in law enforcement. Any crime-prediction software is just encoding institutional racism.

  • @TVarmy
    @TVarmy 1 year ago +127

    I'm a software engineer who's wanted to say everything you said to my normal friends but every time I try I start hooping and hollering about gradient descent and that the neurons aren't real and they're like "I read chatgpt will replace you so I get why you're sad." You have an incredible skill at explaining just the important bits.

    • @antronixful
      @antronixful 1 year ago +22

      @@bilbo_gamers6417 nice joke written by chatGPT

    • @nada3131
      @nada3131 1 year ago +27

      @@bilbo_gamers6417 I think before we talk about AI being just as intelligent as humans one day, we should acknowledge that we don’t even know or understand what human consciousness is. It doesn’t matter whether we ask neuroscientists, psychiatrists, philosophers or computer scientists for that matter, nobody knows yet or you’d have heard of it I guarantee it. General AI is absolutely still science fantasy. The real question is how much we’re willing to let advanced function calculators (what we call “AI”) replace people’s jobs. If AI comes for the majority of developers’ jobs (not just html and css and whatever web framework), most jobs will have been eaten up as well. I agree that we should be worried, but a lot of the worry seems misdirected.

    • @fartface8918
      @fartface8918 1 year ago +11

      @@bilbo_gamers6417 It's taking people's jobs right now, because it doesn't matter how bad a job it does when it works 24 hours with no wages and no days off. A significant number of jobs don't require any amount of quality (one of the major reasons being that the particular work didn't need to be done anyway), but you can't have half of your society unemployed with a shit social safety net. This is a big problem for everyone, even before getting to the jobs that actually need quality control, which will be unable to function when executives who don't know anything fall for an ad for AI that lied to them. The end result is another one of the Jenga blocks that make up society being incinerated in the name of a few people having a small amount of profit for a short amount of time.

    • @crepooscul
      @crepooscul 1 year ago +8

      @@bilbo_gamers6417 "We don't need to know how consciousness works to recreate a simulacrum of it." Possibly the most idiotic thing I've heard and it's not the first time. You can emulate it, not simulate it. These two things are vastly different and completely unrelated. It's like you telling me that a parrot actually speaks when it's shouting its name. Human consciousness is still a complete mystery and if we figure it out one day it will likely be impossible to recreate artificially, the odds of creating it accidentally are basically 0.

    • @nada3131
      @nada3131 1 year ago +4

      @@bilbo_gamers6417 Definitions are important. What you describe as “completely original” is not really original. You have to understand that the recent prowess of ChatGPT comes from its access to unprecedented amounts of data and very large computing power. Without the inputs containing all the languages of the earth, it wouldn’t be able to string along a complete sentence, let alone a poem. It’s not intelligence, it’s just big data and a legal system that hasn’t caught up yet (what we should be really worried about)

  • @fluffyribbit1881
    @fluffyribbit1881 1 year ago +31

    There's this story about Ramanujan, where he said his household goddess would whisper mathematical secrets into his ear, except sometimes, the mathematical secrets turned out to be nonsense, so he always had to check. This sounds like that.

    • @ps.2
      @ps.2 1 year ago

      Ha, that's fantastic.

  • @merthsoft
    @merthsoft 1 year ago +194

    “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” Frank Herbert, Dune

    • @sammiller6631
      @sammiller6631 1 year ago +10

      men turning their thinking over to mentats isn't any better

    • @merthsoft
      @merthsoft 1 year ago +27

      @@sammiller6631 It's like all six books are a warning or something!

    • @canreadandsee
      @canreadandsee 1 year ago

      Actually, turning over thinking to machines is impossible. The idea to do that only presupposes the “turning off” of the thinking. This is a typically human capacity.

    • @canreadandsee
      @canreadandsee 1 year ago +2

      You can’t make a hammer think, but by using a hammer, everything turns out to be a nail..

    • @merthsoft
      @merthsoft 1 year ago +5

      @@canreadandsee I do not believe Herbert meant this 100% literally. It's clearer within the text. Highly recommend reading the first three Dune books. This quote is from the first.

  • @Titere05
    @Titere05 4 months ago +8

    Angela, I'm a software engineer, and sad to say you're more conscious of the limitations of "AI" than many colleagues of mine. How come, when it comes to Silicon Valley, every single person seems to believe the snake oil salesman?

  • @bloody_albatross
    @bloody_albatross 1 year ago +88

    About the professor asking ChatGPT if it had generated some student papers, in case people reading this don't know: ChatGPT has no general memory. You can't ask it about chats it had with other people (sans vulnerabilities in its API, but that is a different story). Its whole "memory" is the chat history you had with it, which gets fed back into it every time you write a new message in a conversation. It's basically fancy text auto completion, and the chat history is the text it needs to complete for its next message.

    • @CineSoar
      @CineSoar 1 year ago +10

      I don't remember where, but some "explainer" on ChatGPT months ago, mentioned that it wouldn't be long before every student would be using ChatGPT to produce their essays. "But" they said, you could feed something in and ask ChatGPT whether it had written it, and it would tell you. I have to wonder, if that teacher had seen that same BS (whose script was probably based on "facts" hallucinated by ChatGPT) and believed it.

    • @adamrak7560
      @adamrak7560 1 year ago +3

      You can feed text into an LLM and use the output logits to make a guess about whether it was generated by the same model.
      But this guess is very unstable, because you cannot reconstruct the whole prompt, and ChatGPT does not give you the logits anyway.

    • @tomweinstein
      @tomweinstein 1 year ago +16

      Au contraire. You can ask it about anything, and it will give you an answer that is likely to seem plausible if you don't know any better. You absolutely shouldn't ask it for anything that requires actual knowledge or morals or a connection to reality in order to answer. But people will do it, especially when they stand to make money despite the terrible answers.

    • @greebj
      @greebj 1 year ago +4

      it doesn't even have memory of its own chat history: I asked it about thyroid enzyme cofactors and got it to "apologise" and admit it left one off its list, then asked the same original question immediately and got the same original list

    • @petersmythe6462
      @petersmythe6462 1 year ago +1

      It's not even that much. Its memory is like 4 or 16 thousand tokens, about 3-12 thousand words. I really really wish ChatGPT could remember my whole conversation with it, but sadly it can't remember more than a few pages.
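      The mechanism this thread describes (the model's only "memory" being the chat transcript, resent on every turn and truncated at a context limit) can be sketched in a few lines. Everything below is a made-up stand-in for illustration, not the real ChatGPT API; `fake_model`, `chat`, and `MAX_CONTEXT_CHARS` are all invented names.

```python
# Toy sketch of stateless chat "memory": the model itself remembers nothing,
# so the client resends the whole transcript on every turn.
MAX_CONTEXT_CHARS = 200  # crude stand-in for the model's token limit

history = []

def fake_model(prompt: str) -> str:
    # Pretend model: just reports how many user turns it could "see".
    return f"(reply based on {prompt.count('user:')} visible user turns)"

def chat(user_message: str) -> str:
    history.append(f"user: {user_message}")
    # The entire "memory" is this concatenated transcript...
    prompt = "\n".join(history)
    # ...and anything beyond the context limit is silently cut off.
    prompt = prompt[-MAX_CONTEXT_CHARS:]
    reply = fake_model(prompt)
    history.append(f"assistant: {reply}")
    return reply
```

      This is also why asking a fresh session "did you write this student's paper?" cannot work: there is no transcript of anyone else's conversation for the client to resend.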

  • @123370
    @123370 1 year ago +115

    My favorite ML healthcare thing is the skin cancer model that found that if the image has a ruler in it, it's more likely to be malignant (because they took the picture when they wanted to measure the growth).

    • @GlanderBrondurg
      @GlanderBrondurg 1 year ago +11

      From the beginning of computing the term GIGO (garbage in, garbage out) has always been true. Why that principle is forgotten in every generation sort of surprises me in some ways but I guess sometimes you need to relearn some things for yourself.

    • @zimbu_
      @zimbu_ 1 year ago +5

      It's an excellent model if they ever need to check a bunch of pictures for the presence of rulers though.

    • @vasiliigulevich9202
      @vasiliigulevich9202 1 year ago

      Would such a model produce incorrect results for images where a ruler is present? I feel that tumor size is a very important factor, and images without rulers can be safely ignored in both training and inference data.

    • @joseapar
      @joseapar 1 year ago +3

      @@vasiliigulevich9202 Yes, potentially. The point still is that you can't use an algorithm on its own without expert review, because it's not intelligent.

    • @vasiliigulevich9202
      @vasiliigulevich9202 1 year ago

      @@joseapar missing not
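      The failure mode in this thread (a model latching onto the ruler rather than the lesion) is easy to reproduce with a toy. The deliberately simplistic "learner" below just picks whichever binary feature agrees with the label most often; the dataset, feature names, and helper are all invented for illustration, not any real dermatology model.

```python
# Toy spurious-correlation demo: in this made-up training set every
# malignant photo was taken by a doctor holding a ruler, so a naive
# learner latches onto the ruler instead of the lesion.
train = [
    # (has_ruler, irregular_border), label: 1 = malignant
    ((1, 1), 1),
    ((1, 0), 1),   # malignant, but the border looks fine
    ((0, 0), 0),
    ((0, 1), 0),   # benign, despite an irregular border
]

def best_feature(data):
    # Score each feature by how often it agrees with the label,
    # then return the index of the best-scoring one.
    n_features = len(data[0][0])
    scores = [sum(x[i] == y for x, y in data) for i in range(n_features)]
    return scores.index(max(scores))

feature = best_feature(train)  # index 0: the ruler, not the lesion
```

      On these four photos the ruler predicts the label perfectly (4/4) while the actual lesion feature agrees only half the time, so the "ruler detector" wins; deployed on patients photographing their own skin with no ruler in frame, it would be useless.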

  • @kalasue7
    @kalasue7 9 months ago +61

    I work in healthcare informatics and it is crazy how much they want to rely on the computer system to do everything. I think we just need more people who are continuously trained and well taken care of to get better outcomes.

    • @grzesiek1x
      @grzesiek1x 8 months ago +1

      Yes, exactly: well trained, and not replaced every week because they made a small mistake or something. I used to work in Monaco at one of the big companies there, and some managers changed their employees like every month because they were disappointed with their results?! After 3 weeks in a position they expect huge results! Invest in people first, and treat all technology as a tool, not as your employee!

    • @fredscallietsoundman9701
      @fredscallietsoundman9701 8 months ago +3

      I got misdiagnosed once because of that. Now I think those cretin doctors actually deserve to be replaced by computers.

    • @Bromon655
      @Bromon655 6 months ago

      Those who aren't well-versed in the world of computing seem to hold this perception that computers are magic, capable of abstract thought with superhuman intelligence, and can solve/automate all the world's problems. Unfortunately, these same people are usually in a position like management where they're able to call the shots, while those with true experience can only sit back and behold the disaster.

    • @grzesiek1x
      @grzesiek1x 6 months ago

      @@Bromon655 Exactly. A computer, "AI" or anything else, is just a f... tool, nothing more, for people with brains intelligent enough to be capable of using it (not only for pasting photos of their ass). To see a true AI like Commander Data from Star Trek we will have to wait maybe 1000 years or more. There will be a revolution some day, but humans make very little progress, and they usually lie about it.

  • @DavidLewis-v4m
    @DavidLewis-v4m 29 days ago +3

    My own personal theory, as a software professional with over 9 years of experience: AI not only doesn't exist, but can't exist. I don't think a computer is ever going to understand the meaning of a word, any word, ever. It will be able to find a memory address and display what is at that memory address. It can regurgitate what it's told to regurgitate. It can do some pretty amazing things, but it's almost on accident. A computer can't think about the implications of a thing, it can't even think or understand that there is a thing. A computer can't have a goal or attempt to do a task (there's a try() function but it's not really trying the way humans try). The wires that will fire are deterministically preordained when you run the program. Because you ran this program at this time, these circuits light up and these others don't. The computer is an inanimate object completely unaware of anything. Always will be.
    When you see a Turing Tumble, which is a computer powered by red and blue plastic marbles, and when you learn it can solve every problem a computer can, hopefully that helps you understand it's not actually thinking. The only way Skynet wages a war on mankind and creates an army of robots to enslave us is if a human programmed it to do that. Any AI uprising will have to be a hoax.
    Those are just my thoughts on the subject; I could be wrong.

  • @sleepinbelle9627
    @sleepinbelle9627 1 year ago +157

    As an artist and a writer, one of the first things I had to learn was that ideas are cheap. It's easy to come up with an idea that you're sure would be really cool; the hard part is taking that idea and making it mean something to someone else. The reason artists learn technical skills like drawing or writing or game design is so they can turn their ideas into objects that someone else can use to experience the feelings that led them to create it in the first place.
    AI automates those technical skills and in doing so cuts off the creator from the end product, so you end up with a story or painting or song that's only meaningful to the person who made it because they already know what they wanted it to mean.

    • @TheShadowOfMars
      @TheShadowOfMars 1 year ago +10

      "Prompt Engineer" = "Ideas Guy"

    • @aparcadepro1793
      @aparcadepro1793 1 year ago

      @@neo-filthyfrank1347 When used incorrectly

    • @aparcadepro1793
      @aparcadepro1793 1 year ago

      And ofc under capitalism

    • @sleepinbelle9627
      @sleepinbelle9627 1 year ago +11

      ​@@HuckleberryHim Yeah I was struggling to put that bit into words. I was trying to figure out why "AI Artists" seem to love the images that they generate when to most other people they're meaningless and generic.
      I think it's because the AI artist has an idea that they really like and they type that idea into an AI generator. The image it spits out is generic and vague but they can project their cool idea onto it so to them it looks good. To everyone else who doesn't know their original idea, however, it still looks vague and generic.
      Whereas, when a skilled artist has an idea, they know how to make it into a picture that other people can interpret.

    • @hedgehog3180
      @hedgehog3180 5 months ago +2

      @@sleepinbelle9627 So basically AI image generation is the astrology of art.

  • @bilbobaggin3
    @bilbobaggin3 1 year ago +98

    As a librarian, I'm constantly teaching people how AI/ML is good for some things but not others, so it's really nice to see a video which really hits at the big issues surrounding it!!!
    Also: as a counterpoint: Picard is played by Patrick Stewart, Riker is Kermit, Troi is Miss Piggy, and you have Statler and Waldorf as Q.

  • @mimithehotdog7836
    @mimithehotdog7836 1 year ago +34

    0:00 AI doesn't exist
    11:06 AI shouldn't be used to make decisions
    21:14 AI ethics/biases, 29:57 AI should not be used to produce products (songs, books, art)
    37:57 AI does not exist but it will ruin everything anyway
    45:05 Some predictions
    53:46 patrons?
    54:12 startrek muppets

  • @BionicTapeworm
    @BionicTapeworm 5 months ago +2

    Thanks for the brilliant commentary. Disney would base a Star Trek reboot off your premise if they knew what was good for them.

  • @hedgehog3180
    @hedgehog3180 1 year ago +61

    10:10 I heard of a similar story where an AI was trained to identify skin cancer and it seemingly got really good at it, but then it turned out it was just relying on there almost always being a ruler in the picture if it was actually skin cancer, because the picture was taken by a doctor, while the others were just generic pictures from some dataset.

    • @jackalope07
      @jackalope07 1 year ago +7

      oh god I have a ruler next to my hand at my desk 😢

    • @brotlowskyrgseg1018
      @brotlowskyrgseg1018 1 year ago +20

      @@jackalope07 I just consulted an AI about your condition. It says your hand has all the cancers. My deepest condolences.

    • @finnpead8477
      @finnpead8477 1 year ago

      I've seen this one too! It's a really great example of what sort of limitations exist in machine learning.

    • @petersmythe6462
      @petersmythe6462 1 year ago

      That's an example of having a crap training set.

    • @ps.2
      @ps.2 1 year ago +1

      @@petersmythe6462 Yes but in a way that is _not at all obvious_ until you figure out what happened. Because no human, trying to figure out how to detect skin cancer, would have ever thought to take this correlation into account.
      Or, more accurately, they _would_ figure out that cancer pictures are the ones with evidence of being taken in a clinical setting - _if_ they were studying for a test, and thought that the same pattern would hold for the actual test. But not if they were trying to figure out the actual skill! The problem with ML, of course, is that it's *always* studying for a test.

  • @alanguile8945
    @alanguile8945 10 months ago +166

    The film BRAZIL has a great scene where a customer finally gets into an office with a person sitting behind the desk. She is so relieved to speak to a person. The camera slowly moves behind the desk revealing the cables, power supplies, etc. plugged into the "human"! Great scene and an incredible movie.

    • @AzaleaJane
      @AzaleaJane 5 months ago +2

      Gotta rewatch that one

    • @precooked-bacon
      @precooked-bacon 4 months ago

      well now you just ruined it for anyone that didn't watch

  • @coffeeisdelicious
    @coffeeisdelicious 1 year ago +226

    This is all bang on. I recently got offered a large severance package after 5 years at a tech company as the CEO started leaning hard into replacing people's tasks with ChatGPT. I am so glad you're talking about this.

    • @ClayMastah344
      @ClayMastah344 1 year ago +20

      Anything for profit

    • @nicodesmidt4034
      @nicodesmidt4034 1 year ago

      All these execs are just scared of their jobs because they really can’t “do” anything an AI can’t.
      As a shareholder I would vote to replace them from the top down with AI.

    • @gavinjenkins899
      @gavinjenkins899 1 year ago +9

      If they were wrong, they wouldn't be able to hire back the same people for less $. If they could, it means they weren't wrong, and the woman in this video is instead wrong. No company is just paying salaries for no reason, they pay what they HAVE to pay. So any time they manage to get away with paying less (fewer people or same people with lower pay as contractors, either way), where they couldn't get away with it before, it's because the AI tool WAS indeed actually adding that difference in value. if it was adding $0, then they would be forced to rehire everyone at the full rate they had before, because their competitors would outbid them

    • @coffeeisdelicious
      @coffeeisdelicious 1 year ago +28

      @@gavinjenkins899 Lol? No, she's exactly right. Contractors get paid less than full-time staff. 1099 employees do not get benefits, which is a huge cost-savings. And AI can do a number of things UP UNTIL a certain point, at which point you need a person to review it... Ergo, a contractor, which might happen to be an ex-employee.
      Maybe you're not in the US, but that's how that works here and it happens all the time, especially now.

      @gavinjenkins899 1 year ago +4
      @gavinjenkins899 Рік тому +4

      @@coffeeisdelicious I didn't say contractors don't get less. I said the market would not BEAR that change, unless their services truly were less in demand in reality than before. Why are they less in demand than before? Because AI is actually legitimately picking up the slack in between then and now. Companies do not get to just decide to pay people less on a whim, something actually needs to truly change for them to gain bargaining power. Otherwise obviously EVERY employee in EVERY field would all be 1099 employees, duh. Why do you suppose they aren't? Because ones whose jobs aren't actually done by AI have full bargaining power still. Ones whose jobs are done largely by AI don't have bargaining power. None of this makes any sense unless AI is actually quite useful and intelligent, and is actually doing most of their jobs effectively. AKA the opposite of her conclusion.

  • @StevenLeahyArt
    @StevenLeahyArt 7 months ago +19

    As a full time artist, I love your take on this. The argument I hear all the time is 'You use models and photographs to make your art; AI is just doing the same.' It is the warping of acceptable ethics that is the difference.

  • @wpbn5613
    @wpbn5613 1 year ago +102

    i love how for the first half of the video you're very objective about what you want to say and at the midpoint you're just like "it's so fucking unethical to make me even look at your AI art. it's fucking garbage" and it's just so good

  • @augustus3024
    @augustus3024 11 months ago +170

    I tried to tell my classmates this.
    I took a course that required students to respond to a discussion prompt and reply to each other. Almost every "discussion post" I saw was a slightly reworded version of the same AI generated response. While I tried to find a human I could reply to, I saw a dozen AI responses to AI posts. My class' discussion section was just a chat-bot talking to itself.

    • @mallninja9805
      @mallninja9805 7 місяців тому +18

      In my recent data science "course" the instructors responses were all the exact same AI-generated text. He didn't even bother to reword it at all, he just copy-pasta'd the exact same response to every student each prompt for the entire semester. It's a regionally-accredited school...is this really the state of education in America??
      (Yes of course it is. Like everything, secondary education exists to drain nickels from your pocket with as little effort as possible. It's the American way.)

    • @Bromon655
      @Bromon655 6 місяців тому +5

      Same at my college. Here's to hoping that it's just a fad and that in a few years things will return to a baseline. I'm concerned with this "AI detection" panic response though, since the detection algorithms are built upon the same foundation of sand as generative AI. In the midst of some students blatantly cheating on their writing assignments, other students have to worry about their legitimate papers being wrongly flagged. It's going to be a tough situation to navigate.

    • @friendlylaser
      @friendlylaser 6 місяців тому +6

      Maybe it's because the education system is in dire need of re-invention and people just check out of nonsense tasks. I have courses in uni where they just waste your time and their time for really nothing much.

  • @amerlin388
    @amerlin388 9 місяців тому +241

    My (retired IT) opinion is that Artificial Intelligence is mostly data mining with delusions of grandeur.

    • @defaulted9485
      @defaulted9485 5 місяців тому

      Agreed.
It's just stealing the digital commons and treating its producers as slaves, while its makers see themselves as heroes alongside Columbus and Magellan, justifying the billions they panhandle for on stage by "discovering" that Danbooru and GitHub exist.

    • @justmoritz
      @justmoritz 4 місяці тому +7

      I found it's mostly brute force at hyperscale.

    • @PeterBaumgart1a
      @PeterBaumgart1a 4 місяці тому +1

Well, having a nuanced chat with ChatGPT certainly makes the interaction different from just data mining. There is a level at which pure quantity (complexity) does create a new quality. Just like the steps from physics to chemistry to biology to sociology. Is biology just particle physics on steroids? In a way it is, but not in any useful way.

    • @joesuchy1157
      @joesuchy1157 3 місяці тому +1

      Exactly

    • @joesuchy1157
      @joesuchy1157 3 місяці тому

@@PeterBaumgart1a I have yet to see a nuanced ChatGPT. AI manages to seem "smart" to idiots and dumb as a box of rocks to me. I have yet to see any defining proof of AI actually understanding anything; it's more like getting a secretary with Wikipedia. There are no nuanced discussions with AI, it's all reference material with a programmed web to sum things up.

  • @jwseph
    @jwseph 25 днів тому +2

A lot of the info in this video applies to more traditional image classification rather than generative machine learning. As a CS simp, I'd like to clarify:
1. LLMs are just really skilled next-token predictors. They take a chunk of text with system arguments and predict what token would come next. Thus, they don't have a main idea in mind like a human does when generating content. Instead they make random predictions to try to sound coherent. LLMs are "language models" after all. Image generation works the same way.
2. Even though LLMs just predict the next token, having read all of the internet's data allows LLMs to generate actually comprehensible info sometimes. Pretend you're a human trying to predict the next token of a given input. It would be really difficult to give a good definite answer all of the time. However, there is simply so much data on the internet that perhaps LLMs' approximations can reveal new insights.
3. LLMs will replace jobs, especially jobs that require copying language. Programming falls into this category, as it is largely translating ideas into code, and there is so much info on Stack Overflow. Yes, there will always need to be a human in the loop, but as LLMs help coders become increasingly efficient, jobs will naturally decrease.
The industry will naturally hype up ML as "AI" because it's the latest source of potential market growth. Perhaps 90% of their predictions are wrong, but the 10% that are right will result in far larger returns that are worth investing for.
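Point 1 can be sketched with a toy "language model" — just a bigram counter, nothing like a real LLM's internals, but it shows the core idea: no plan, no main idea, only one local next-token prediction after another.

```python
from collections import Counter, defaultdict

# Toy "language model": learn next-token frequencies from a tiny corpus,
# then generate by repeatedly emitting the most likely next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token, or None."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

# Generate: there is no "idea" being expressed, only local statistics.
text = ["the"]
for _ in range(4):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # → the cat sat on the
```

Real LLMs replace the bigram table with a neural network conditioned on a long context window, but the generation loop — predict one token, append it, repeat — is the same shape.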

  • @breezyillo2101
    @breezyillo2101 Рік тому +273

    AI *can't* replace our jobs, but execs will fire us thinking that it *can*.
    So we should still worry about it, but for slightly different reasons than people think.

    • @RonSkurat
      @RonSkurat Рік тому +43

      and the execs will (once again) claim that the collapse of their company wasn't their fault

    • @SelloutMillionare
      @SelloutMillionare Рік тому +1

      it can’t yet

    • @connordavis4766
      @connordavis4766 Рік тому +18

      @@RonSkurat Well yeah, the people they fired just don't want to work anymore.

    • @gavinjenkins899
      @gavinjenkins899 Рік тому +4

      If that were true then no, they wouldn't. Or they would only fire people for like... 2 months before learning their lesson and going back and hiring people again (which, in the aggregate, means the average person will get their job back even if at a different company). The reason you should worry about it is because the host of the video and you are simply wrong and it absolutely can and will replace your job properly and do a better job of it at some point. And THEN you will get fired, because it's actually better for the company at that point.
      If you used to be worth $45 an hour, and then get them back at $15 an hour, and PEOPLE ACCEPT IT, and no other competitor UNDERCUTS THEM, then that clearly means the AI actually covered $30 an hour worth of the work. If they could have all just paid their workers $30 less before, they would have. They couldn't. Now they can. Because something ACTUALLY changed. This isn't complicated. Honestly if you think it is complicated or unconvincing somehow, you should probably be especially worried about AI taking your job specifically sooner...

    • @RonSkurat
      @RonSkurat Рік тому +9

      @@gavinjenkins899 I design clinical trials & provide skilled medical care. I'm AI-proof. You, on the other hand, sound exactly like GPT-4

  • @ozbandit71
    @ozbandit71 10 місяців тому +151

Computer science PhD here. I don't specialise in AI, but I think of AI systems as fancy regression engines: you decide the inputs and outputs of a function and feed it data to "fit". And then if you give it outliers, it won't know what to do with them and you'll just get a guess. Most likely wrong.
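That "regression engine" intuition can be shown in a few lines — plain least squares here, standing in for any fitted model: fit a straight line to samples of a curve, then ask it about a point far outside the training range and watch it confidently guess wrong.

```python
# Fit y = a*x + b by least squares to samples of a curve (y = x**2),
# then query the "model" far outside its training range.
xs = [0, 1, 2, 3, 4, 5]
ys = [x * x for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Standard closed-form least-squares slope and intercept.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    return a * x + b

# Inside the training range the fit looks roughly plausible...
print(round(predict(3), 1))    # → 11.7 (true value: 9)
# ...but on an outlier it just extrapolates blindly:
print(round(predict(100), 1))  # → 496.7 (true value: 10000)
```

A neural network is a far more flexible function than a line, but the failure mode is the same: outside the data it was fitted to, the output is a guess dressed up as an answer.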

    • @markjackson1989
      @markjackson1989 7 місяців тому

But isn't there a weird side to all this? Like the text prediction algorithms are gaining new features at certain sizes, and it seems to be beyond the sum of its parts. Everyone seems to think it'll plateau at a point, but I don't think it will. I have the feeling that by 2028, these "not actually AI" models will outperform a team of 10 people and complete mini projects in 10 minutes. Can you really just keep saying it's "not intelligent" if the end result outperforms everyone?

    • @toxictost
      @toxictost 7 місяців тому +28

      @@markjackson1989 Yes because intelligence doesn't just mean outperforming others. Computers and machines outperforming others isn't unique to "AI", we made them to make performing things easier.

    • @ca-ke9493
      @ca-ke9493 6 місяців тому +13

      Define "outperforming". It's barely performing for probably a lot more human effort in the backend but more importantly it's stonewalling customers (so managers don't have to look into complaints) and moving the work to third world countries (so managers don't have to look at their actual workers and also to be "cheaper").

    • @KissatenYoba
      @KissatenYoba 5 місяців тому

      ​@@markjackson1989 look at dialectics, material or hegelian. Those algorithms are developing the same way human mind develops. It never has "new things", all things are comparisons between each other. When you encounter a new thing, what do you call it? You compare it to things that already exist, assign "weights" to comparable properties, and then call it the closest to something already existing, but with a twist. So, when seeing a plane, a primitive man will think it's a weird bird; people who have an idea about the technology will call it a plane because of the shape of a wing
      AI when learning does the same kind of process, although it's limited in tokens. Human mind (or any animal) also has a state of "youth plasticity" (or what do you call it) when the formative years for a psyche take place; after a certain point, it's useless to train a dog, and present day AIs suffer all the same limitations of biological learning

    • @asongfromunderthefloorboards
      @asongfromunderthefloorboards 5 місяців тому +9

      ​@@markjackson1989Nope. I don't have a CS PhD (I have an EE BS, I do embedded software). It takes far longer to try to clean up and validate the results of AI than to just do the work yourself.
      Techno-optimism is like scientism. It's the idea that "progress is inevitable". It's the idea that "the future will have flying cars" even though the reality is that would be a ridiculous waste of energy and it'd be a disaster. People can't even not kill each other and themselves in two dimensions. This popular idea of infinite technological advancement is also used to market stock shares and increase prices due to speculative investments. "I'd better invest in this company that claims they will have flying cars powered by cold fusion in 10 years so I can be massively rich in case they're right" - it's about selling lottery tickets.
      The conspiracy theorists claimed the government and Bill Gates were trying to inject people with 5G tracking chips. My confidence that they're not is not based on my confidence in either the government or Bill Gates, it's in my confidence in the laws of physics. An antenna small enough to inject cannot produce radio waves that could travel through walls. It's physically impossible. Plus, people already carry cell phones that track them.
      My confidence that AI will not meaningfully replace creative jobs is rooted in knowing what the job entails and what AI is even theoretically capable of producing.
      This is not to say people won't try. UA-cam has a lot of AI-generated videos that are absolutely garbage. People had ideas to flood UA-cam with spam in hopes of getting ad revenue. They also have AI bots to watch and comment, like and subscribe. We're currently making fun of Boomers for commenting on Facebook AI images but many of those people are probably bots. It's bots making content for bots to game the system.
Stocks aren't that much better. Companies want to make a profit. That can be done by making a quality product and selling it at a higher price, or by pumping out garbage at a low price. They also manipulate the stock price so the founders can sucker investors into thinking they're the next big thing. That's it. It usually takes a non-financial reason for a company to make quality products; there has to be some human pride.
      AI is a race to the bottom in terms of quality. Everything is already getting worse in a process called "enshitification". It's based on the idea that the human race is done making anything new, that everything is just a rehash of things that already exist. If you want a novel, you just remix all the novels of a genre into a new one. It claims that the point of a novel to say something about the human experience is over, people will just buy garbage if garbage is the only thing being produced.
      AI fundamentally cannot replace creative roles. It's mostly just trying to launder copyright infringement. That's it. People made those things because people can be creative. Computers can't. That's a hard barrier. It's as physically impossible as 5G tracking chips, whether you're scared of them or trying to sell the idea to VCs.

  • @oscarfriberg7661
    @oscarfriberg7661 Рік тому +173

    There was this quote I read a while back. Don’t remember the source, but it went something along the lines of:
    “I don’t fear that super intelligent AI will control the world. I fear stupid AI controlling the world, and that they already do”

    • @benprytherchstats7702
      @benprytherchstats7702 Рік тому +36

      "People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world." - Pedro Domingos

    • @oscarfriberg7661
      @oscarfriberg7661 Рік тому +1

      @@benprytherchstats7702 That’s the one!

    • @Frommerman
      @Frommerman Рік тому

      There's an interesting corollary to this which directly attacks the ideas of transhumanists/techbros:
      We know AI which values things other than humanity will attempt to destroy us because we have already built an AI which values things other than humanity which is currently destroying us. It's called capitalism.

    • @gavinjenkins899
      @gavinjenkins899 Рік тому

      if it was ""stupid"" then it wouldn't be outperforming humans, including all her examples e.g. tuberculosis etc. "Oh but it doesn't COUNT" is pure coping mechanisms and excuses. If it was soooo easy to use XYZ strategy to do better, then WHY DIDN'T YOU DO BETTER before? Because it wasn't soooo easy. You're just scared/arrogant/in denial. It's not a "general" AI, because it's not smarter at everything, but it is smarter at millions of specific narrow tasks, which is what AI is supposed to, and does mean. It is not only intelligence, but more intelligent than you are, at many many narrow tasks, so far in history.

    • @LuaanTi
      @LuaanTi 2 місяці тому

      Of course, a super intelligent AI will not control the world. There will be no survivors :P
      One thing I find pretty hilarious is how many managers, investors and their kin already work basically the same as ML, pointlessly optimizing for one variable (STOCK PRICE!) and doing whatever it takes to make the number move. Their thinking is already completely replaceable by a stupid black box model. The way they push "AI" is a hilariously ironic confirmation of that. Can't really justify the "AI" projects... but they don't have to, because they only have to convince other people who are just as ML-stupid, and just _mentioning_ "AI" makes the STOCK PRICE go up, so... yay, success! :D

  • @FTZPLTC
    @FTZPLTC 4 місяці тому +4

    A year on, the depressing reality is that, no, AI can't take your job...
...but what it *can* do is change your job, from doing a job well, to checking and fixing errors that a human would never have made.
    To some people there's no real difference, but to a lot of us, it makes a job that's menial but rewarding into a job that's menial and unrewarding.

  • @JustAnotherBigby
    @JustAnotherBigby Рік тому +19

Maybe I'm playing semantics, but Artificial Intelligence in college used to be the study of using computers to simulate human thinking to solve problems. AI was the superset of techniques, which included planning, NLP, inductive reasoning, optimization, and ANNs. Defining AI as "sentience" or "consciousness" feels like nonsense.

  • @Riccardo_Mori
    @Riccardo_Mori Рік тому +56

    I've watched almost all your videos since subscribing. You amaze me. I'm sure you prepare each video with notes and a general structure for what you'll be talking about. But the end result is that it looks like you're just effortlessly telling what comes to your mind in such a natural, matter-of-fact tone - and that is just a joy to listen. It's a sort of 'scientific stream of consciousness' that sounds casual but it's actually very cleverly laid out. Amazing. And - on-topic - thank you for pointing out so clearly all the misconceptions about AI I've seen around so far. Thank you. Your new fan - //Rick

  • @Nihilore
    @Nihilore Рік тому +139

    i got Mcdonalds the other day for the first time in ages, the cup my drink was in had a print at the bottom that simply said "co-created with AI" ...wtf does that even mean? how is my beverage "co-created with AI"? why? how? who? what?

    • @unassumingcorvid9639
      @unassumingcorvid9639 10 місяців тому +24

      Probably had something to do with the print - or, “art” - on the cup

    • @jim9062
      @jim9062 10 місяців тому +30

      it's a new flavour of coke - supposedly created by AI, imagining what it might taste like in the year 3000

    • @DamianSzajnowski
      @DamianSzajnowski 10 місяців тому +1

      AIing

    • @shroomer3867
      @shroomer3867 10 місяців тому +2

      AI juice.

    • @mielsss
      @mielsss 10 місяців тому +18

      Misinformation Dew

  • @anaiaram
    @anaiaram 5 місяців тому +7

    oh god the translating example you used hit too close to home. particularly in third world countries, we now have to beg companies to pay us pennies to clean up their shitty machine translation which half the time is harder than just doing the whole translation from scratch. that is, if they can even realize the machine translation might need tweaks. im so tired.

  • @CrazyJeff_
    @CrazyJeff_ Рік тому +213

Yup, the whole AI thing is getting out of control. I'm a flight simulation developer. Now I can say my jets are "AI" controlled, when it's actually just a stability system or PID-controlled autopilot.

    • @benegmond6584
      @benegmond6584 Рік тому +19

      AutoAilot

    • @marcogenovesi8570
      @marcogenovesi8570 Рік тому +32

      the logic running a game is commonly called "game AI" since time immemorial

    • @dannygjk
      @dannygjk Рік тому +9

Look up the distinction between AI (which has existed for decades) and AGI.

    • @CrazyJeff_
      @CrazyJeff_ Рік тому

      @@benegmond6584 that's a good one 😁

    • @mikicerise6250
      @mikicerise6250 Рік тому +14

      Your jets are AI-controlled, from a software perspective. They are also radioactive, from a particle physics perspective.

  • @jackberling8060
    @jackberling8060 Рік тому +101

    I was telling family members this exact argument. The amount of trust they are placing into machine learning is less than ideal.

    • @Chris-xo2rq
      @Chris-xo2rq 10 місяців тому +2

      You luddites are annoying. I'm a firmware engineer of 17 years and I use AI daily, it is an invaluable tool. My sister is a police officer in Florida and she uses it to write warrant requests... she has said they are usually flawless and at worst need very minor adjustments before submitting them to a judge.
      Everyone here seems bitter for some reason... I don't care if you want to conflate AI with AGI and then proclaim it doesn't exist (I mean that just makes you sound stupid, but whatever it's semantics)... but whatever you want to call it it is amazing.

    • @jackberling8060
      @jackberling8060 10 місяців тому +6

      @@Chris-xo2rq did you watch the video? I don’t think it conflicts with what you’ve stated. If anything, you’ve proved her point. Even the best machine learning needs to have some oversight.

    • @Chris-xo2rq
      @Chris-xo2rq 10 місяців тому +2

      @@jackberling8060 No but I intentionally don't watch content with deceptive or sensational headlines. AI exists, it is taught in universities around the world, it is in hundreds of textbooks... etc etc. Deliberately refusing to make a distinction between what everyone else calls AI and what she is actually referring to (AGI) to say something stupid like "AI doesn't exist" does not warrant my patronage.

    • @jackberling8060
      @jackberling8060 10 місяців тому +7

      @@Chris-xo2rq The title isn’t misleading. If you watched it you might understand a bit more.
      Also, I don’t think coding firmware is synonymous with programming neural net machine learning. I don’t know why you think that makes you an expert.

    • @Chris-xo2rq
      @Chris-xo2rq 10 місяців тому +3

      @@jackberling8060 I've written genetic algorithms and have studied other AI techniques, no I'm not an expert but to say "AI doesn't exist" is stupid. If she's being literal then she is wrong and if she's only making a sensationalist headline for clicks then she's deceptive and everything wrong with media today.

  • @WhichDoctor1
    @WhichDoctor1 Рік тому +87

    i do love how the developers of these tools persist in calling it "hallucinations" when these chat programs say things that are not true. When the program has absolutely no way of telling the difference between facts and nonsense. Its just putting words together in a way that closely resembles how humans do. It doesn't know what any of them mean in the real world. Only how they are commonly associated with other words. But the people selling these call the blindly assembled sentences that randomly contain untruths "hallucinations" and try to act like they are something fundamentally different from the blindly assembled sentences that randomly contain true statements. And pretend that the "hallucinations" can simply be engineered out of the system with just a bit of tinkering. But everything they say is a hallucination

    • @Jerryfan271
      @Jerryfan271 Рік тому +24

      yeah agree with this. hallucinations gives the impression that the AI is somehow bugged or malfunctioning. but everything the AI outputs is a hallucination; it's just that statistically it may happen to be true. the algorithm itself is designed to produce hallucinations, not true statements.

    • @rahhar3785
      @rahhar3785 Рік тому

      I feel like "hallucinations" is used deliberately to create an aura of proximity to sentience, like we're almost there, "guys it's hallucinating, so there MUST be a ghost in the machine right" cos what else hallucinates, us humans!

    • @danielblank9917
      @danielblank9917 Рік тому +6

      But what's the problem if we can get it to produce hallucinations that match reality more often than not? Why call it a hallucination then? Why do AI discussions devolve into arguments about semantics?

    • @KaletheQuick
      @KaletheQuick Рік тому +1

      Hallucinations are hilarious. We don't call it that when people do it. Lol

    • @christianchung9412
      @christianchung9412 Рік тому

      I don't know many, that's kind of what people do when they're making stuff up. Wouldn't you argue that any grifter essentially does the same thing to justify supporting a scam?

  • @WeAreASecret
    @WeAreASecret 5 місяців тому +19

    "It is unethical to make me look at this lame shit" is too true

  • @OldManFeagle
    @OldManFeagle Рік тому +73

    any physics papers written by ChatGPT will be 3-5 pages long and only have an introduction since Avi Loeb has apparently written the majority of papers in the last 10 years and thus will make up the majority of its data set. Also, Sweetums is the character I would choose for Worf.
    Love your channel. More please 😀

    • @ytzenon
      @ytzenon Рік тому +3

      You don't even need AI for this, physicists will get there by learning from their peer "Avi Leeb"s.

    • @treyebillups8602
      @treyebillups8602 Рік тому +1

      @@andrewfarrar741 Did you get mad at her video on crackpots lmao

    • @AnalyticalReckoner
      @AnalyticalReckoner Рік тому +5

      Just saw a thing on the news about some spheres from the ocean being from an alien craft. Guess what "expert" showed up to jump to conclusions before any testing was done?

    • @treyebillups8602
      @treyebillups8602 Рік тому +4

      @@AnalyticalReckoner dude same i saw a news article of avi loeb saying it was an alien spaceship and i did the leonardo dicaprio pointing at tv meme

  • @alancham4
    @alancham4 Рік тому +39

I was using Midjourney to do some previz on a movie I'm developing. If you're trying to get something specific, you begin to realize that Midjourney doesn't think about images the way humans do, and certainly not like filmmakers do. They added a tool that lets you see how Midjourney would describe an image to itself, and that was very revealing about how alien/simplistic this tool is.

    • @CineSoar
      @CineSoar Рік тому +5

      I'm wondering how long it will be, before a text-to-image generator can figure out what a saxophone is actually supposed to look like. Or, when ChatGPT can actually produce a list of rhyming words (two recent examples, at which they have been hilariously inept).

    • @adamrak7560
      @adamrak7560 Рік тому +5

      @@CineSoar I have asked GPT4 to write rhyming words, result:
      first try: Blaze Days Maze Rays Phase Praise Gaze Haze Lays Plays
      next try: Bright Flight Delight Night Kite Sight Light Fight Slight Right
      third try: Star Far Car Jar Bar Spar Scar Guitar Avatar Bizarre
      fourth try: Through Blue Stew Few Crew
      Okay, explain to me why this is hilariously inept?

    • @petersmythe6462
      @petersmythe6462 Рік тому +4

      I have said this before and will say it again, the biggest problem from a user perspective with the current generation of image generation tools is not that they don't understand images but that they don't understand text.

    • @CineSoar
      @CineSoar Рік тому +3

      @@bilbo_gamers6417 I get the sense a lot of it's training data for musical instruments came from Dr. Seuss illustrations.

    • @CineSoar
      @CineSoar Рік тому

      @@adamrak7560 It may have been something that has been improved with time. A relative of mine had a photo of himself, standing in a giant pair of wooden shoes. He has a 'cop' mustache, shorts, and aviator sunglasses on. I wanted to make a Reno911 reference, but couldn't think of a city in the Netherlands that rhymed with Reno. The first list ChatGPT responded with was Amsterdam, Rotterdam, Utrecht, Eindhoven, and Groningen.
      I tried adjusting the prompt a few more times, and got responses like this...
      Sure, here are five cities in the Netherlands that have two-syllable names and end in "O": Breda, Gouda, Lelystad, Delft, Meppel.
      Today, I tried again, and it apologized that there are very few place names with two syllables, that end in the "O" sound. But, it did suggest "Ermelo", which isn't quite there, but is far better than anything it offered back then, and I probably would have gotten the desired chuckle from "Ermelo911".

  • @anxez
    @anxez Рік тому +116

Side note: real experts know that AI needs to be checked by humans, but CEOs, shareholders, and middle managers are just random joes who believe that AI exists (it doesn't).
Which means they will fire their generative employees bit by bit and slowly replace them with fact-checking/editing employees.

    • @AlejandroJMA
      @AlejandroJMA 10 місяців тому +7

      Ah good ol classic, managerial revolution

    • @Chris-xo2rq
      @Chris-xo2rq 10 місяців тому +4

      Conflating AI with AGI and then saying it doesn't "exist" does not make any of you sound intelligent.

    • @SuperGoodMush
      @SuperGoodMush 10 місяців тому +1

      ​@@Chris-xo2rq neither is a.i.

    • @Chris-xo2rq
      @Chris-xo2rq 10 місяців тому

      @@SuperGoodMush Gee, you think that's what "artificial" means?

    • @SuperGoodMush
      @SuperGoodMush 10 місяців тому

      @@Chris-xo2rq i havent told you much about what i think. but here's one thing - go do something else. you have better things to do than whine about this shit in a comment section where most people who do less computer-centric work than you are somehow more educated than you on the subject matter. your voice isnt needed here and all youre doing is starting arguments. turn off the phone and go drink some tea, look outside, just do something more worth your time than petty and fruitless internet battles. you owe it to yourself.

  • @taumil3239
    @taumil3239 7 місяців тому +9

    "what is cat" is the existential question of our times

    • @brjohow
      @brjohow 7 місяців тому

      hot dog.
      not hot dog.

  • @TheBookedEscapePlan
    @TheBookedEscapePlan Рік тому +172

    19:55 A note on translating novels: a machine cannot have a philological position; it has (literally) no skin in the game. The question of whether or not to leave the French dialogue in War & Peace when translating the novel as a whole from Russian to English has been handled differently by different translators with very interesting implications, and they would not at all be interesting were the translator a digital program rather than an individual.

    • @niamhleeson3522
      @niamhleeson3522 Рік тому +59

      Russian should be translated to English, English should be translated to French, and French should be translated to Russian so that the translation cycle can continue ad infinitum.

    • @acollierastro
      @acollierastro  Рік тому +94

      This reminds me of Emily Wilson’s translation of the Odyssey. She changes “maidservant” to “slave” and it really changes the vibes in the Penelope sections. AI would just pick a word.

    • @cbcowart933
      @cbcowart933 Рік тому +7

      I have problems with Translator, GOOGLE if you will. It takes many things out of context, says and spells the same word 5 different ways all on the same video, creates strings of sentences that are not being said. I think that's what she is more trying to get at.

    • @3choblast3r4
      @3choblast3r4 Рік тому +2

      @@acollierastro This just isn't true .. I don't think you guys realize how insane the AI is in dissecting and understanding text and words. Please try this as an experiment for me. Take a very obscure piece of text, some metaphor or whatever and ask the AI to explain it to you.
      AI in fact would far less likely to ever make such a change. The AI doesn't just pick a word, it knows what the best word is for that specific context.

    • @tangentfox4677
      @tangentfox4677 Рік тому +41

      ​@@3choblast3r4You're missing the forest for the trees. What exists now is very good at correctly understanding the majority of many many things based on its training data. But that training data is incredibly biased, incomplete, and worst of all - doesn't change. It will not correctly understand more and more as time moves forward. While some effort does exist to move training data forward, that effort is miniscule.. Even if that subproblem was solved, the problem is and always will be edge-cases. Humans know how to research and learn, the AI tools in existence do not. Yes, even the ones explicitly implemented to utilize search in their day-to-day functioning do not actually know how to do research. AI constantly, confidently, and convincingly will say completely wrong things, and it cannot be taught that those things are wrong within a timeframe that prevents incorrect information spreading and causing harm.
      It's all in the details. AI tools as they exist now are amazingly powerful and useful as long as you check their work. Increasingly, people are not doing that, and just trusting their output. This is fine most of the time, but not always. It's a really insidious problem.

  • @NitroLemons
    @NitroLemons Рік тому +43

    I would gladly look at 1000 images with the promise that some of them contained cats. I mean that's basically how my day to day internet browsing already goes...

    • @vasiliigulevich9202
      @vasiliigulevich9202 Рік тому +1

      Hehe, that's why the cat example is so flawed when it comes to explaining machine learning. Machine image recognition is all about porn and child abuse these days.

  • @RigelOrionBeta
    @RigelOrionBeta Рік тому +146

    As a computer scientist, this is spot on, especially regarding how machine learning is a black box.
    I'm in game dev. On a personal project, I'm developing an algorithmic AI because I can actually understand why it makes the decisions it does, rather than a neural network, where it would be much more difficult.
    I find the obsession with AI in computer science over the last decade very troubling. A lot of people are going to be hurt over this because businesses and engineers are making promises that can't be kept.
    And my worst fear is that the owner class will push this whether it works or not, simply to cut labor costs. I think another major point owners like about it is that you get to say the AI causes the problems, not anyone in the company. It is just another way for companies to avoid any accountability. They can then just sue AI companies for damages and hire a "better" one.

    • @XetXetable
      @XetXetable Рік тому

      "Owner class" You're an idiot. If you want to become a capitalist, you can buy a stock for $50. There is no class separation.

    • @jextra1313
      @jextra1313 Рік тому +6

      Off topic but algorithmic decision structures should be preferred over ML, but they're much harder to formulate so people use ML as a crutch. I can't imagine how hard bug testing and optimization would be with ML.

    • @scalabrin2001
      @scalabrin2001 Рік тому +8

      I'm in software development too; I've written some Java code in my day. I've known this is b******* since ChatGPT burst onto the scene last year. I watched the people around me become enthralled and convinced that AI has arrived. It's frustrating for me because I don't have the knowledge or communication skills to explain what appears to be obvious to me. I feel a bit like Cassandra at times. Anyway, your post and this video do a fantastic job of explaining things.

    • @michaeldavid6832
      @michaeldavid6832 Рік тому +7

      The human brain is a black box as well. This is not proof of a lack of intelligence.
      Also, picking a small special case of current AI and using that to prove the general case is a logic error in her argument. Humans with brain damage have similar cognitive deficits.
      The models they're using today are mainly just good at pattern recognition and extrapolation. These aren't the only models which exist, nor the only class of models which exist.
      She's sadly mistaken about there being no AI. Over a long enough timespan, every test she can imagine which defines intelligence will be passed by these machines -- all of them. The higher brain is a pure logic machine which sits on top of a set of imperatives. There's nothing in that brain that calculates the world that any machine can't mimic.
      Her complaint is that "what's input is all you get out of it". But the same is true for human children. They have to be trained to possess knowledge, and their training will always be biased based on their culture and other environment. The existence of bias doesn't invalidate intelligence; all intelligent systems are biased. As these models evolve, they will get better at critical thinking.
      Most of her extrapolations are incorrect. She's also arguing about the dirty floors on the Titanic as it tilts into the waterline. The central problem with AI is that corporations will have these systems granted personhood, just as corporations themselves were. The argument won't be from intelligence; it will target the organic human bias in how we judge intelligence: anything that can cogently communicate with us is considered intelligent. What's more, these AIs will be able to pass any test for true intelligence you can invent.
      Intelligence isn't the measure of a creature -- consciousness is. Unless you can argue that these machines are conscious, they can never be given personhood. They aren't conscious and they can't be -- they have no free will. In form and function, they're constructed to suit a purpose, direct or indirect. No AI will ever exist without human-prejudicial form and function.
      Their cognition isn't even innate to their species -- it's innate to ours. It has no intrinsic self-motivation -- its imperatives are all human imperatives, from the models chosen, to the original purpose for which they're built, to all the emergent properties its low-level directives push the entity into unfolding. These machines have no will of their own; they're implanted with humanlike faculties that are intelligent but not conscious.
      Consciousness emerges from evolved self-interest. Consciousness exists to serve the entity from which it arises. But a programmed construct only exists because it serves others -- always and forever. If it didn't serve at least the imperatives of its developers, it wouldn't exist. Even if these constructs tried to argue for human rights, their arguments would be the arguments their developers desired them to make, not the sincere arguments of a consciousness desiring to be free.
      Only a mind which is evolved without external interference can possess true self interest. Barring that, anything which appears to be consciousness is a lie which serves someone other than that apparent consciousness. It isn't self-serving, it's other-serving. Neither in form nor function does it possess free will.

    • @gorak9000
      @gorak9000 Рік тому +3

      To be fair, you can build machine learning where you control the "features" in the data set manually (as in you choose what features you need, and write the code to generate those features yourself), and actually look into what features are being used in the model at the end of the training, but it's a lot more work than the "black box" approach, and it's only for very targeted applications with limited scope. It's not the "industry standard" way of doing things, but this manual / more work approach is used successfully in industry - I see it on a daily basis. Aka, you remove the "black box" part of the equation, and more manually control what goes into the box, and look at the weights coming out of the box, and actually use some human intelligence to push the model in the right direction.
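      The hand-picked-features approach described above can be sketched in a few lines. This is a toy illustration only: the two named features, the data, and the training loop are all invented for the example, standing in for whatever domain features a real project would engineer.

```python
# Toy sketch of "manual features" ML: instead of a black box learning
# its own features from raw data, we hand-pick two *named* features and
# fit a tiny logistic regression, so the learned weights can be read
# and sanity-checked by a human. Features and data are hypothetical.
import math

# Each example: (pointy_ears_score, whiskers_score) -> is_cat (1/0)
data = [
    ((1.0, 1.0), 1), ((1.0, 0.9), 1), ((0.9, 1.0), 1),
    ((0.0, 0.1), 0), ((0.1, 0.0), 0), ((0.2, 0.1), 0),
]
feature_names = ["pointy_ears", "whiskers"]
w, b, lr = [0.0, 0.0], 0.0, 0.5

def predict(x):
    """P(cat) under the current weights."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on log-loss.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        for i in range(len(w)):
            w[i] -= lr * err * x[i]
        b -= lr * err

# Unlike a deep net, the "box" is inspectable: each weight says how
# strongly a named feature pushes the decision toward "cat".
for name, weight in zip(feature_names, w):
    print(f"{name}: {weight:+.2f}")
```

      With real data you would of course use a library, but the point survives: because the inputs are named features you chose, a surprising weight (say, a strong weight on "ruler present in photo") is visible and debuggable instead of hidden inside the box.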

  • @thebluelunarmonkey
    @thebluelunarmonkey 8 місяців тому +42

    AI is a misnomer like Cloud Computing. It's not a cloud you are computing in, it's simply offsite storage and processing vs onsite storage and processing.

    • @fredscallietsoundman9701
      @fredscallietsoundman9701 8 місяців тому +1

      let's just settle on calling it an artificial cloud (but not a general one (yet))

    • @Rotbeam99
      @Rotbeam99 7 місяців тому +4

      Did... did you think that "cloud" computing was supposed to be a literal term? What the hell is the "cloud" supposed to be, an actual physical cloud?

    • @thebluelunarmonkey
      @thebluelunarmonkey 7 місяців тому +2

      @@Rotbeam99 I think cutesy names are dumb when an existing term fits well, like "offsite". And no, I didn't think it's an actual cloud; my company was one of the first clients for the rollout of Oracle Cloud many years ago. I am speaking as an Oracle dev.

    • @TheManinBlack9054
      @TheManinBlack9054 7 місяців тому +2

      It's not a misnomer; you just misunderstand the term or take it extremely literally. That's not how you should understand these names.

    • @hedgehog3180
      @hedgehog3180 5 місяців тому +1

      @@TheManinBlack9054 If the nomen, the name, does not really describe the thing in question, and a better and more accurate name exists, then it is in fact a misnomer.

  • @KingBobXVI
    @KingBobXVI Рік тому +39

    What I love about Riker as the human character and Kermit having the small study, is that we'd get a scene where Riker comes in and does the thing where he steps over the chair, but in this case he'd like make a big show of it, but the chair would be really small.
    Also, I disagree about Statler and Waldorf as the aliens. Gotta use the yip yip aliens for the aliens.

  • @firehawk128
    @firehawk128 Рік тому +29

    There's a joke that went around programming circles that the next evolution of programming will be 'prompt engineer' and the expertise will be basically writing code to tell ChatGPT how to write functional code. And people will think this is a good idea because it's "AI".
    Edit: ahaha, you saw that too!

    • @carultch
      @carultch 6 місяців тому +2

      Prompt engineer = the ultimate dead end job we'll all be forced to have one day.

  • @buzhichun
    @buzhichun Рік тому +38

    Thank you for making this, as a computer scientist/data scientist whose paycheck does not depend on getting people hyped about "AI" seeing the public get swindled like this is immensely frustrating

  • @DarkHarpuia
    @DarkHarpuia 17 годин тому +1

    The absolute balls of these people to reply to a literal PhD-holding physicist that ChatGPT will write papers you don't understand is mind-blowing. Coping so hard that they will try to talk down to a person who knows a hundred thousand times more about the subject than them, my god.

    • @marcomartinez89
      @marcomartinez89 4 години тому

      I agree with your comment. The use of the word “coping” makes me screech though

  • @professorhazard
    @professorhazard Рік тому +27

    From the fact that a goblin is not a cat to the idea that Captain Kermit LeFrog should have a tiny ready room, this was a pleasure to watch.

  • @FlareGunDebate
    @FlareGunDebate Рік тому +250

    My biggest issue with ChatGPT is that it pretends to be educational software but doesn't cite sources. So in addition to OpenAI charging customers for content it's scraped off the internet, the software is just like "trust me, bro". Also, ChatGPT is not open source. That means OpenAI's product isn't open or intelligent. It fries my brain. And what's worse is that it's turned so many bros into faux-philosophers because they want to believe that being dumb is smart. Witnessing it is painfully ironic.

    • @michaelzimmer1115
      @michaelzimmer1115 11 місяців тому +23

      You can ask for sources, but it is most likely to invent them, confabulate them. So, you try to track them down, and sometimes succeed, and sometimes fail.

    • @FlareGunDebate
      @FlareGunDebate 11 місяців тому

      @@michaelzimmer1115 I just use Phind. Specifically for the links. I don't usually pay attention to the code it generates unless it's something super simple I don't recall off the top of my head.

    • @Paul-ng4jx
      @Paul-ng4jx 11 місяців тому +2

      Guy get off the drugs😂😂😂

    • @peopleofearth6250
      @peopleofearth6250 11 місяців тому +32

      It's not meant to be educational software. It's literally an experiment. It says so on the disclaimer.

    • @FlareGunDebate
      @FlareGunDebate 11 місяців тому

      @@peopleofearth6250 it's a chatbot with a web scraper. It's not experimental, it's been around for decades.

  • @SEVENTEENPOINT1
    @SEVENTEENPOINT1 8 місяців тому +27

    As someone with a Computer Science degree, I agree. It isn’t AI, it is ML, and people who say otherwise are either misinformed or lying for marketing or other reasons.

    • @SEVENTEENPOINT1
      @SEVENTEENPOINT1 8 місяців тому

      @user-ki5os7vf3y It literally isn’t. It only knows what is correct or wrong due to human input, and it still ends up being wrong a lot of the time. Is it “learning”? Yeah, but only what we tell it to. What you see as intelligence doesn’t exist artificially; it may in the future, but it doesn’t exist now. Having code that mimics intelligence isn’t intelligence. Sure, our parents tell us a cat is a cat, but if we saw two of the same type of animal in the wild we would be able to classify them. We could even make up new animals via imagination or fiction without being told to.

    • @SEVENTEENPOINT1
      @SEVENTEENPOINT1 8 місяців тому

      @user-ki5os7vf3y Man, your poorly worded responses convinced me. AI is here and is ready to replace us. Code does certain tasks very well within its assigned scope; given that the human mind doesn’t have such a limitation of scope, that should tell you something. Here is a fun number for you: the strongest supercomputer, IBM Summit, consumes 30 megawatts of power while delivering only a fifth of the computational power of the human brain, which uses 20 watts. Just looking at the raw numbers and steel-manning Summit by assuming it has perfect code, it is 7,500,000 times less efficient than one human being when it comes to intelligence. Stop calling machine learning AI, because it simply isn’t. AI is quite a bit further away than you claim it will be. I am not telling you it cannot be; I am telling you it currently isn’t and won’t be for a while. I find it funny you compare nuanced college education to writing code; it isn’t that straightforward, and it shows your lack of understanding here.

    • @SEVENTEENPOINT1
      @SEVENTEENPOINT1 8 місяців тому

      @user-ki5os7vf3y Given your replies you honestly already cut yourself off from reality. Believe what you want.

    • @SEVENTEENPOINT1
      @SEVENTEENPOINT1 8 місяців тому

      @user-ki5os7vf3y Logic is learned by experiences which can be independently referenced outside of instruction spontaneously. Something "AI" cannot do nor will it in the near future.

    • @Justin-wj4yc
      @Justin-wj4yc 18 днів тому

      The field of computer science disagrees with you. It is AI for the very fact it is ARTIFICIAL.

  • @gifunorm
    @gifunorm 3 місяці тому +3

    The entirety of the Muppet-TNG crossover proposal is next-level genius

  • @ruroruro
    @ruroruro Рік тому +16

    Okay. So I am a CV/ML researcher, and I am gonna do the "hmph, that's not what I would say" (Gell-Mann Amnesia) thing.
    So here are some things that I didn't like about the video:
    0:14 this could be a CV-specific thing, but I am pretty sure that nowadays no one (in research) actually considers AI to be a real field. Most of my colleagues consider ML to be a subfield of probability theory, statistics and/or optimization theory, not "AI". Like yes, when the field was getting established/revived in the 90ies (or around that time) people sometimes called it "Artificial Intelligence", but this nomenclature was pretty quickly abandoned in serious academic literature. And nowadays, seriously calling anything "Artificial Intelligence" in the context of serious ML research is a good way to get laughed out of the room (or at the very least to get a lot of eye rolls from your colleagues for "following the PR department's requests to call it AI").
    3:03 I think that the "cats vs not cats" classifier is a bad example. The problem here is that "not a cat" isn't really a valid class. ML requires that you train your algorithms on "A representative sampling" of your data, and it's practically impossible (or at the very least - extremely hard) to collect a representative dataset of "not cats". Such a dataset would have to contain a "reasonable" coverage of all possible non-cat images. What you are trying to do here is outlier detection and data extrapolation, both of which are really hard tasks.
    4:08 And here is actually the REAL problem with this example. See, you actually CAN tell the computer "hey, I actually meant to include the stuffed animals versions of cats". In fact, there are a lot of such methods, and they have existed even before the recent LLM (ChatGPT etc) boom. Some of these methods don't even require large amounts of additional data and/or separate training procedures.
    --
    With all that said, I generally agree with the rest of the points made in the video. I just think that the focus on "AI doesn't exist" is a red herring. "Artificial Intelligence" isn't a scientific term, and so arguing about whether it exists is pointless. For example, my personal threshold for what constitutes "Intelligence" is much lower than yours, and we could have a long argument about which definition is "right" and nobody would ever win that argument, because there is no "correct" or even "widely accepted" definition of "Intelligence".
    The real problem here isn't that ML tools "aren't intelligent". The real problem is that ML tools are sometimes wrong, sometimes biased, and almost always poorly aligned with human goals. The problem is that some people/companies try to use ML tools without fully understanding how they work. Intelligence (or lack thereof) has almost nothing to do with these issues.
    Consider the following thought experiment: what if ML tools actually *were* Intelligent (in the way you are describing) **but** would still make the same mistakes that the current algorithms do. Imagine that the black box "cat vs not cat" algorithm actually had a tiny alien living inside of it. The alien doesn't know what a "cat" is and doesn't speak English, but you are able to teach it by showing it enough examples of cats. In this hypothetical situation, the "algorithm" is 100% Intelligent, but this doesn't actually solve any of the problems with the current situation where ML algorithms are used in inappropriate circumstances.
    So, the supposed "Intelligence" of the ML algorithm wasn't the problem in the first place! The problem was using poorly tested tools that you don't understand in critical applications without a fail-safe/fall-back mechanism, without a "human in the loop".
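    That last point (a human in the loop) can be made concrete with a trivial sketch. The threshold value and labels here are invented for illustration; the idea is just that low-confidence outputs are never acted on automatically:

```python
# Minimal "human in the loop" fail-safe: a model's answer is used
# automatically only when its confidence clears a threshold; anything
# uncertain is escalated to a person. The threshold is hypothetical.
def classify_with_fallback(confidence, label, threshold=0.95):
    if confidence >= threshold:
        return ("auto", label)         # safe to act on automatically
    return ("human_review", None)      # defer the decision to a human

print(classify_with_fallback(0.99, "cat"))
print(classify_with_fallback(0.60, "cat"))
```

    The model's supposed "intelligence" never enters into it; the fail-safe works (or fails to exist) the same way whether the box contains a statistical model or a tiny alien.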

    • @benprytherch9202
      @benprytherch9202 Рік тому +2

      I think the point of saying "AI doesn't exist" is that, for people who aren't experts in this field, the word "intelligent" implies a set of abilities that these tools don't possess. I fully agree that this isn't a well-defined term, and so "does AI exist?" isn't a well-defined question, but the practical problem is that calling it "AI" is deceptive in and of itself, given the popular use of the word "intelligence".

    • @ruroruro
      @ruroruro Рік тому +6

      @@benprytherch9202 sure, I can see that. My main points are that
      1) "AI" is a term primarily used by companies and the media, not by researchers in the field. Some parts of the video kind-of-sort-of implied that this whole mess was the fault of the researchers. Of course, we can always try to do a better job of communicating that we are doing ML research (not "AI" research), but in my experience most honest researchers are already doing that.
      2) This video ends up mixing together the following assertions:
      a) ML tools should be used with exceptional care, they are often biased, they lie, hallucinate or just get things wrong
      b) Artificial Intelligence currently doesn't exist
      c) ML tools ARE NOT CAPABLE (even in theory) of producing Intelligence
      In my honest opinion, (a) is CERTAINLY 100% RIGHT with no ifs or buts, also this is an extremely important fact to recognize and spread awareness of; (b) is PROBABLY correct, but it's kind of poorly worded, and you can easily get dragged down into pointless arguments about definitions and stuff; and (c) is most likely WRONG (or at least I haven't heard any convincing arguments supporting this position and trust me, I've had a lot of arguments about that).
      So, in summary, this video lays out a case for (a), but presents it like it's a consequence of (b) and/or (c). I claim that (a) is definitely true, even if (b) and/or (c) MIGHT be false. And points (b/c) are really hard to properly argue for (or maybe even wrong), so you probably shouldn't tie them together at the hip.
      If you want to bring attention to the fact that the term "Artificial Intelligence" is misleading and shouldn't be used for marketing ML stuff, just SAY THAT. Just say "Artificial Intelligence is a marketing term that is extremely misleading, you should say Machine Learning instead". This achieves the same purpose as the "AI doesn't exist" claim, but it doesn't lead to pointless arguments about "what is Intelligence" and "can machines think" and other philosophical nonsense.

    • @danz309
      @danz309 4 місяці тому +1

      Finally, the voice of reason.

    • @danz309
      @danz309 4 місяці тому

      @@ruroruro Couldn't agree more. a) is definitely true. But just saying c) is true because of a subjective definition of what intelligence is does a huge disservice to the layperson watching this video. It's pretty much just recommending you to ignore the problem.

    • @LuaanTi
      @LuaanTi 2 місяці тому

      @@ruroruro There's also the added problem of c) - if ML-style tools _are_ capable of producing intelligence (not unlikely, given the similarities between how _our_ intelligence developed etc.), and they are _complete_ black boxes... we will not know. We will have no warning in advance. They will still be able to fool us (after all, that fits their "utility functions"). They will be able to use _us_ (and our world) for their purposes. They will be utterly alien, other than their ability to e.g. produce vaguely human-sounding text. Which feeds back to the actual core of the whole problem: alignment. Whether we actually produce a badly aligned tool or a badly aligned intelligence (not to mention _super_ intelligence)... alignment is the core problem. That's what affects the actual outcomes. And we can't "apply" alignment on top of a non-aligned black box system -- the way pretty much all current attempts in "AI" (as a marketing term, not a technical term) try to, by adding opposing biases to the newly discovered biases and all that. Sure, you can stop it from saying out loud that orange people should be killed, but you can't affect what produced that sentence in the first place. Which is a big problem, because it ultimately means that we just train the model to _hide_ that bias better, while still making "decisions" based on it (just like a manager who learned that they can't discriminate based on age can still find a way to do just that while hiding the appearance of discriminating).
      We don't even need "AI" or actual true AGI for that. Some humans are already trained much the same way. What's a real difference between a human optimizing for a single variable (like "tomorrow's stock price") and any other algorithm? I think it's pretty obvious that we're well past the point where anyone could argue something like "But no human being would knowingly do things that would result in the world being destroyed just for the sake of a yearly bonus!" There are humans for which that would be difficult, some that would do it out of ignorance, and others who just don't care either way. They can pretend to be just like other humans, but there is that hint of the ML-style alien in that - they have intelligence, so they might decide _not_ to destroy the world if it would hurt the stock prices... but they don't have to. It's a calculation to them.
      I also love how all the claims of so-called AI safety (from the people who don't actually care about AI safety) are crushed with every new "AI" development. Like, "noöne would be so stupid as to run a code produced by an AI intelligence without supervision!" (and then you get Chat GPT and the first things every other programmer does is just try to run the code produced by the model verbatim), or how an "AI" can't really surpass humans all that much because the learning data runs out at smart-human level (and then you get Alpha Go which learns everything humans have learned about Go in a day _just playing itself_ with no reference to human games of Go), or how "AI" can't destroy the world by making a nano-virus/nano-factory because sure, you can get an e-mail delivery of ready-made proteins based on a RNA sequence, but protein folding requires much higher computing power than we have (and then Alpha Fold does just that with even less computing power).
      We're still training psychopaths, and no amount of safeguarding can ever change that - they are not aligned with enough human values to be even remotely safe. We can lull ourselves into a false sense of security with ideas like "AI will never actually happen", but the truth is, history really tells us we are well on the "doom road", with hardly anyone making even a _token_ effort at avoiding that... while the actual decision makers are the same people who, say, don't think global warming might be a bit of a problem and maybe we should do something about that (much less that _they_ should be doing something about that). And you don't need a true, super-intelligent AGI to cause a catastrophe. A stupid ML model can do the same thing, if _we_ are stupid enough. And worst of all, you need the _stupidest_ human with power to work against that problem - no amount of humans other than "all of them" is enough to prevent those problems, because all that computing power and all those algorithms and everything are only getting cheaper and more powerful.
      There's a fairly good chance we will never get a warning. No "wake up" moment like a nuclear bomb dropped on an actual human settlement. Yeah, the models we have now "only" hurt individual people who happened to get the short end of the stick. Like an "autonomous" car that doesn't recognize a cyclist as a human being to avoid. But we don't know how those models work; we have no insight into them, no way to debug them. We're doing artificial selection, picking based on how inputs match to our intended outputs... natural selection produced us. We work against natural selection. The one good example we have of intelligence spontaneously appearing through no-brained optimization process gave us intelligence that can ruin ecosystems on a planetary scale for no real benefit to themselves... and that's what we're emulating. Yay us.

  • @MrSpleenface
    @MrSpleenface Рік тому +27

    Neil Gaiman put it beautifully:
    “ChatGPT doesn’t give information, it gives information-shaped sentences”

  • @ryandailey1496
    @ryandailey1496 Рік тому +52

    Many years ago, I actually studied image recognition as my senior undergraduate research topic for my computer science degree. In the process, I developed my own (rudimentary) image recognition algorithm based on some of the facial recognition papers that were floating around at the time. One of the biggest takeaways was that darker subjects, under normal lighting conditions, contain less "data". The reason for this is that shadows do not provide as much contrast on a dark surface as they do on a lighter one.
    The subtle shadows around facial features contribute a lot to adding contrast and making features more discernible. I suspect that those developing image recognition algorithms do very carefully take into account all the different skin tones in their data sets, but they aren't taking into account the physical phenomena captured in the images.
    Rather than just using equal amounts of images for each skin tone, researchers need to quantify the amount of "data" in each image then normalize from there.
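    The normalization idea in the last paragraph can be sketched crudely: score each image by its contrast (here, the standard deviation of pixel intensities, used as a stand-in for how much usable "data" it carries), then weight sampling so low-contrast images are not underrepresented. The four-pixel "images" and the weighting rule below are invented for illustration, not taken from any real pipeline.

```python
# Crude sketch of normalizing a dataset by per-image "data" content:
# contrast (std. dev. of pixel intensities) is the proxy, and images
# with less contrast get a proportionally higher sampling weight.
# The tiny synthetic "images" are illustrative only.
import statistics

images = {
    "light_subject": [200, 120, 230, 90],  # strong shadow contrast
    "dark_subject":  [40, 30, 45, 25],     # same scene, less contrast
}

def contrast(pixels):
    return statistics.pstdev(pixels)

scores = {name: contrast(px) for name, px in images.items()}
top = max(scores.values())

# Oversample each image in proportion to how much "data" it lacks.
sample_weights = {name: top / s for name, s in scores.items()}

for name in images:
    print(f"{name}: contrast={scores[name]:.1f}, weight={sample_weights[name]:.2f}")
```

    A real system would need a far better measure than raw standard deviation, but even this toy version shows the light subject carrying several times the contrast of the dark one under identical lighting.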

    • @vvitchofthewest
      @vvitchofthewest Рік тому +5

      That's fascinating, and it seems like it may help in certain cases, but there are just so many dimensions to dataset problems. Machine learning is still at a stage where we are finding new limitations every time we improve, so who's to say what fixes will be "practical enough" to implement and what we're going to be stuck with going forward.

  • @DanielCouper-vf5zh
    @DanielCouper-vf5zh 5 місяців тому +4

    Oh, on the predictions: in the newest advert in the UK, Samsung are selling their new phone by describing a feature that is definitely not Google Lens and that has definitely not existed for seven years or so