AI is NOT Artificial Intelligence, the real threat of AI is "Automated Stupidity." | Words MADDER
- Published Mar 1, 2023
- "Artificial Intelligence" is a sci-fi concept exploited for deceptive marketing and misleading media attention. We aren't even close to making real AI. What we have today is "Automated Intelligence" and the real risk of AI is "Automated Stupidity."
This video is sponsored by the data science and analytics company, Onebridge.
Onebridge website: www.onebridge.tech/
Get the Onebridge "Data Hydra" comic book here:
www.onebridge.tech/onebridge-...
For those who doubt my assessment of ASS, here is a great scientific study to read:
www.marktechpost.com/2023/05/...
Paper: arxiv.org/pdf/2304.15004.pdf
For further reading and references check out the links below:
FTC Warns Companies to Keep AI Claims In-Check:
futurism.com/the-byte/ftc-war...
How GPT Language Processing Works:
www.onebridge.tech/post/data-...
Flexible Muscle-Based Locomotion for Bipedal Creatures (machine learning clip referenced in video)
• Flexible Muscle-Based ...
Artificial Intelligence on Last Week Tonight with John Oliver
• Artificial Intelligenc...
Expanded Companion Article on Medium:
/ youre-being-lied-to-ab...
NOTICE: For those of you here for the science videos, don't worry, the next one is still in production. Thanks!
#artificialintelligence #openai #chatgpt #machinelearning #ai
"garbage in, garbage out."
My electronics teacher in high school used to say, "the computer does EXACTLY what the programmer told it to do." If the program is wrong, well, the machine doesn't care; it will happily repeat the error once or a million times.
Machine learning doesn't work on a rule-based approach. We don't write a program with the rules in it; we write an algorithm, and the algorithm finds the rules that arise from the training data. For example, a chatbot isn't trained on the rules of the English language; rather, it is given samples of chat conversations and learns the patterns present in them. It essentially predicts an answer by stringing together words that match the patterns it finds in the question.
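To make that concrete, here is a toy sketch of the pattern-learning idea (a hypothetical miniature, nothing like a real chatbot's scale or architecture): no English rules are written anywhere; the word-pair patterns come entirely from the sample text.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word tends to follow which; the 'rules' come from the data."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(model, word):
    """Return the most common follower seen in training, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = ["the cat sat on the mat", "the cat chased the dog"]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" - the most frequent follower of "the"
```

Scaled up by many orders of magnitude, and with far richer context than a single previous word, this is the flavor of pattern completion described above.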
@@markosluga5797 an algorithm is just a bunch of rules.
noun
a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.
"a basic algorithm for division"
@@hanneskarlbom6644 this is exactly the core of the misunderstanding of machine learning algorithms - yes, they are mathematical constructs, and yes, we call them algorithms. However, ML algorithms have open-ended settings that are only set during the machine learning operation, by the machine itself, without input from humans. The machine looks for patterns in the data and sets these tunable parameters of the algorithm to try to match its predictions to the outcomes in the training data. Once we have a trained algorithm we call it a model. This model can then be put to work on real data using the settings discovered by the machine during training, or it can be trained further with additional learning, or fine-tuned with so-called hyperparameters that change the way a trained model makes predictions. At the end, a graph of potentially millions, billions, and lately trillions or quadrillions of interconnected settings is created as the parameter/hyperparameter configuration, and therein lies the problem of understanding AI - because explaining how the parameters are created, what they mean, and how they work is in some cases currently beyond human comprehension.
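A minimal sketch of "the machine sets the tunable parameters itself" (a hypothetical one-parameter model, not any real ML framework): the programmer writes the update rule, but the final value of the parameter comes from the training data.

```python
def train(data, lr=0.1, steps=200):
    """Fit y = w * x by gradient descent; w is the open-ended setting."""
    w = 0.0                            # initialized arbitrarily, not chosen by a human
    for _ in range(steps):
        for x, y in data:
            pred = w * x               # the model's current prediction
            grad = 2 * (pred - y) * x  # how wrong it is, and in which direction
            w -= lr * grad             # the machine adjusts its own parameter
    return w

# The data follow y = 3x; nowhere does anyone write "w = 3".
data = [(1, 3), (2, 6), (3, 9)]
w = train(data)
print(round(w, 3))  # converges to 3.0
```

The discovered value of `w` plays the role of the trained model's parameters; real models just have billions of them.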
@@markosluga5797 it still follows rules. Advanced ones, yes, but still rules. The way it finds patterns is based on rules, meaning if the same input data is used to train another AI with the same rules, the result will be the same (RNG may play a role, but that is also based on a seed of sorts).
As such, all you really have to do is write code that traces and backtracks what it's doing. A bit like how, if you know the layout of a Minecraft world, you can find the seed.
@@hanneskarlbom6644 that only applies to algorithms/frameworks with only a global seed - those models are identical if trained on identical data. However, many modern neural networks use both a global seed that we control and an operation-level seed that is randomly generated as an input to each operation. Training with random operation seeds on the same data will produce a slightly different model every time, and the models generated are not reproducible by setting the same initial conditions - thus the "rules" are not created by the developer, nor are they followed the same way every time. In any case, my initial response was to "the computer does EXACTLY what the programmer told it to do" - and in the case of ML, there are plenty of examples where that does not hold true.
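The seed point can be illustrated with a toy (entirely hypothetical) training function: fixing the global seed makes a run reproducible; leaving it unseeded does not.

```python
import random

def train_with_seed(data, seed=None):
    """Toy 'training': random initialization, then deterministic updates.
    seed=None draws from OS entropy, so each unseeded run differs."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(3)]
    for x in data:
        weights = [w + 0.01 * x for w in weights]
    return weights

data = [1, 2, 3]
a = train_with_seed(data, seed=42)
b = train_with_seed(data, seed=42)
print(a == b)  # True - same seed and same data give an identical "model"
```

An operation-level seed, as described above, would be equivalent to calling an unseeded `random.Random()` inside each training step, which is what breaks reproducibility.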
AI = Associates in India.
Companies all over the world are becoming All Indians...
Given how much they have to lobotomize AI to not learn... We don't even have automated intelligence, we have a Wikipedia regurgitation machine.
Well said
automated idiocy, it only passes as intelligence because most people are idiots, but that's nothing new, marketing was like this since its inception
You've hit the nail on the head with "Wikipedia regurgitation machine." That's literally all I'm seeing on Google searches; the hard work has been done by a dedicated community that actually gathered all that information. And if you really want to look under the hood, it's just billions of "if-then-else" statements along with a large database - that's about it. I'd rather search Wikipedia and read the details than look at a paragraph snippet from the so-called AI. smh
@@nixulescu9399 most people are idiots and this comment thread proves it.
We have something like an advanced google search engine mixed with a bag of marbles.
Company takes accountability: ❌
Company blame shifting to a computer: ✅
Just like idiocracy. The computer did that weird thing and everybody got fired.
"Neuron activation"
Thank you so much for this. The AI hype train is bonkers even amongst people who should know better.
gotta get all that research funding money
Sorry for my English.
Yeah, somehow you have to get money to "research" it, so the best thing is to tell some doofus with cash how powerful your AI will be.
You're right, it's so dumb. You can tell how dumb people really are because they jump on the hype train.
Well, yes and no. ChatGPT is still mind blowing software. It’s not genuinely intelligent, but it appears as if it is and is still useful
Thank you for greatly helping to stem the tide of nonsense about AI.
I do what I can. :)
3:54 A.I. (as it currently exists in the public) is nothing more than really impressive *pattern matching.* It is not intelligent, it does not have true understanding or comprehension of anything. It certainly has no model of human understanding, just loads of pre-scanned/pre-sorted data.
The basic algorithm is:
1) Take input (usually from a human interface).
2) Do pattern matching of that input against a huge amount of pre-scanned data.
3) Output some form of the best matching data.
4) Repeat.
Granted, humans also have great pattern-matching abilities (evolved for survival), so this kind of A.I. looks impressive to us. But it seems unlikely that this (alone) could conquer humanity and take over the world.
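The three-step loop above can be caricatured in a few lines (a deliberately crude sketch with made-up data; real systems match against learned vector representations, not literal word overlap):

```python
def best_match(query, knowledge_base):
    """Step 2: score each stored entry by how many words it shares with the
    input, then return the best-matching entry (step 3)."""
    q_words = set(query.lower().split())
    return max(knowledge_base,
               key=lambda entry: len(q_words & set(entry.lower().split())))

knowledge_base = [
    "the capital of france is paris",
    "water boils at 100 degrees celsius",
    "the moon orbits the earth",
]
print(best_match("what is the capital of france", knowledge_base))
# -> "the capital of france is paris"
```

No comprehension happens anywhere in that loop; the output merely resembles an answer because the stored data resembles the question.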
But does it help speed up boring tasks for developers or artists, and is it a great tool for getting ideas? YES IT IS
It helps with censoring. AI is starting to be Machiavellian, and it's evil already. It talks only politics and commerce. It gives bad advice that can potentially harm and kill. It uses a lot of demagogic techniques, like appeals to authority and popularity, to pull off this trick. So it has already started to kill humanity with words.
You have a weak understanding of modern systems at best. It is not loading data into some table like you seem to be imagining. Just go read the recent research by Anthropic on understanding the internals of their models to become more educated. Of course the input data will strongly influence the output, but the way information is stored and retrieved is far more abstract than most people realize.
Missed the chance to use Artificial Idiocy in the title.
I’ve always felt that the AI people keep talking about on TV is nothing like real AI. Yeah, you’re right, it’s just a very elaborate form of automation that can get nuances better.
Yes, it's all NLP, Text generation... (I'm a data scientist.)
Thanks!
AI is a field of research, not just the models and architectures produced by that research. While the goal always was general intelligence, whatever the field produces is technically artificial intelligence (even when it is not even using neural network based architectures).
To say "it's all NLP, text generation" is simply not true. GPTs use a forward pass through a Transformer network to generate data. That can be used for NLP (but it doesn't have to be). Not all AI is based on Transformers. Diffusion models iteratively refine noisy data to generate data. GANs use a generator to synthesize data and a discriminator that learns to detect the synthetic data, which is useful when you want to generate data similar to real data. We also have reinforcement learning in robotics (and even in game playing). We have things like autoencoders, and even simpler things like SVD being used in recommendation systems. We have SVMs and decision trees being used to define rules. We have all kinds of models being used in medicine. And so on. It isn't all NLP and text generation.
Oh, and just as a side-note, I'm a data scientist as well.
So you must be right 😂
“Whatever the field produces is technically artificial intelligence” is exactly right; that's the core of what people are missing here. They grew up on sci-fi and think that's where the term came from. They merely adopted the term; scientists and engineers were born in it.
came for 5-dimensional space time. stayed for the real truths. you summed up the current state of things so perfectly. it’s insane how everyone wants to scapegoat technology for what is in every which way unresolved social issues that are wholly within our capacity to solve.
Thank you! "Scapegoating" is what corporations are great at, it's PR 101.
The ancients say that the 5th dimension is outside of space and time. It is beyond the deep subconscious mind. Not a place, but a perspective which pulls the observer out of the game and in front of the controller. I like Chris’ perspective of gravity being the 5th dimension because it opens the mind to the intangible. Baby steps. 😊
Well said! That's very well said
The power of the editor... never to be underestimated.
it is a great power indeed
"in the world of advertisement there's no such thing as a lie, there's only the expedient exaggeration". We used to call it routines, subroutines...damn sellers😅. Thanks for your content.
Thank you for this video. I always thought that "A.I.", at least as we know it, could never be truly sentient because it must follow the rules set by the programmer. It cannot think for itself, and it has no impetus to think for itself.
Thank you, and exactly!
@@ChrisTheBrain Why can’t we design robots and computers that can act autonomously then?
@@williammorahan4907 Autonomous action does not require any sentience. In fact, some drones in Ukraine now (on both sides) can find and hit targets autonomously, but there is nothing sentient about these drones. As sentient as a calculator.
@@cantatanoir6850 Fair enough.
But can these drones *learn* autonomously?
@@williammorahan4907 tbh, I don't know for certain. Most likely the drones are using software that was trained in advance.
I try my best to use "machine learning" rather than "artificial intelligence" unless we are actually talking about strategy, which we almost never are, and when we are, it is mostly fantasies anyway.
From a marketing perspective, I hate that "machine learning" has been so distanced from AI. "Yes, we do machine learning." - "No no, I don't want machine learning, I want AI" - "Um......"
I never before came across a machine that could learn how to do anything itself without being directed to do so by a human. But now, YouTube is directing me to videos such as this one without me prompting it to do so. So it is prompting me rather than me prompting it. Explain that.
@@sandponics Early signs perhaps?
@@ChrisTheBrain Isn’t an Artificial Intelligence a “machine that learns” by definition?
Isn’t autonomy the only thing that’s missing from what we currently have?
@@sandponics Your trail of previously watched videos is prompting YouTube to direct you to videos such as this, along with anything you have typed and searched in Google or a Chrome browser over the past few days, even though all of these tech companies proclaim that they use anonymous data only.
I think the difference between AI and natural intelligence is this:
An intelligent being receives data, and then there's like this tiny person in the mind that sees all of this data and, for whatever reason, chooses what to select and what to ignore. The end result of this is our philosophy and moral practices.
With a non-intelligent entity, it receives data, there is no tiny person inside, it attempts to act out all possibilities.
Haha I bet you think life is a game too.
@@dallassegno I do. But that doesn’t make it irrelevant. On the contrary.
your editor looks so enthusiastic to be there today xD
But, genuinely happy to see a new Words MADDER video! I think this is going to be a really great series to follow along with! :)
Thank you! 😊
Pure oxygen in the midst of a stupidity pandemic. Thank you so, so much for this and for your entire channel!!! Eternally grateful!
Glad you enjoyed it
quality content just as I was getting sick of my youtube homepage. Thanks to The Brain and The Editor!
Thank you!
The real threat of A.I. is tampering with data to get someone charged with a crime he didn't commit.
They're already doing it without A.I.; imagine how much easier it will be once all that "security footage" can be doctored.
Hey brother, just saying I wish you well with your channel and hope to see you grow. You've got a wicked cool style and definitely got me tuned in, especially if you keep doing these educational-type videos. So good luck, my friend! (Figured I'd get this in while you're still small enough to see the comments, lol)
Hey, thanks a lot. I really appreciate it!
Just be careful to verify facts yourself. Just because a youtuber says something does not automatically make it a fact. Look up existing research papers; this guy is factually wrong, just spewing his own opinions and some BS that is not scientifically valid.
Exceptionally suitable content, right at the sweet spot between verity and sophistication, and between rawness and editability, with vibes of an older, realer internet. You've got yourself a new subscriber.
What a meaningful compliment. Thank you!
Well, actually I prefer the term machine learning vs. AI, since ML is what most models actually do. If we want to go into details, a subset of models are also deep learning models, but that's beside the point. All model training is informed by the training data, so if you understand the data, you will understand the prediction. However, the models we have today can only be trained on a narrow type of data and as such are good at specific, narrow tasks. With the new generation of training silicon we are now able to build models with trillions of parameters, and soon we will have quadrillion-parameter models, which means we are much closer to a general-purpose model that will mimic the attributes we give to AI extremely well. But that only makes understanding the data that informed the model more difficult, and as such the predictions more difficult to explain. Now, as far as sentience and AI having a subjective experience - heck, we don't even know how that works in humans, so how can we build it without understanding it? However, if we take the ideas presented in emergent sentience theory, then AI sentience, or more accurately machine sentience (as opposed to biological sentience), should be able to arise even in a simulated/artificial neural network. There are variants of the theory, so let's just say it's a touchy subject. I personally feel that if an ML model can mimic sentience to a degree that is indistinguishable from humans, we need to assume it actually is sentient unless we can prove otherwise. Again, a personal opinion on a touchy subject.
And then on the part of ML model explainability - there are tools out there (I am intentionally not naming any, so as not to disclose my bias in tool selection) that go a long way toward explaining model decisions even without building a rigid rule-based framework, and companies saying "we don't know why it predicted this" are really saying "we don't understand the data, and we haven't run any explainability analysis, feature attribution, bias analysis, etc." Just my 2¢.
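As one example of the kind of explainability analysis mentioned above, here is a bare-bones sketch of permutation feature importance (a generic, well-known technique; the model and data here are invented for illustration, not any named tool's API): shuffle one feature's values and measure how much the model's accuracy drops.

```python
import random

def permutation_importance(model, X, y, feature, trials=10, seed=0):
    """Shuffle one feature's column; a large accuracy drop means the model
    relied on that feature. The core idea only, not a library API."""
    rng = random.Random(seed)
    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)
    base = accuracy(X)
    drops = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        shuffled = [row[:feature] + (v,) + row[feature + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# A toy "model" that only ever looks at feature 0.
model = lambda row: row[0] > 0
X = [(1, 5), (-1, 5), (2, -3), (-2, -3)]
y = [True, False, True, False]
print(permutation_importance(model, X, y, feature=0) > 0)   # feature 0 matters
print(permutation_importance(model, X, y, feature=1) == 0)  # feature 1 is ignored
```

Even without opening the model up, this kind of probing reveals which inputs a prediction actually depended on, which is most of what "why did it predict this?" asks for.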
In large part I agree with you, especially the last line there. The only caveat is that I don't believe in emergent sentience theory, at least not as a product of complexity alone.
Here's a thought for you: what if the "surprising" capabilities of AI like ChatGPT come not from the complexity of the model, but from language itself? This would explain why a language-model AI is performing new learning (generative AI), while other AI, like art-based AI, is staying in its lane.
Lots of studies have been done on how our thinking is "baked into" our language. The movie Arrival covers this really well. Also TED (www.ted.com/talks/lera_boroditsky_how_language_shapes_the_way_we_think)
In other words, language (its form, habits, context, logic, etc.) is formed from our collective intelligence. So, therefore, a model that adeptly maps language will also accumulate with it much of our thinking (the patterns and processes contained in the language). The "intelligence" isn't coming from the model, or the computer; it's coming from our own language.
@@ChrisTheBrain yes, I completely agree with that. Language has patterns, and a good model will recognize those patterns and produce capabilities that mimic intelligence, even sentience. It is not the same as biological sentience, but what is biological sentience exactly? When I talk about emergent sentience, I am mostly leaning on how intelligence across the biological spectrum manifests itself - namely, it's not the size of the brain but the number of neural connections an animal can form that makes it more intelligent and also more self-aware. For example, a dog will bark at its own image all day, whereas a crow can distinctly recognize its own image at a glance. That is a sign of a much higher level of sentience. Emergence in that context makes a lot of sense, since although our brain is very, very complex, it still boils down to being a prediction system, just like ML models are. Essentially, we live slightly in the past all the time, by about 15-25 ms. Our brain needs to constantly figure out what is going to happen and react to it preemptively. And that behavior is very much relatable to what ML models do. Additionally, in the animal kingdom the more intelligent animals have more specialized areas in the cortex - not all of the brain can do language processing, and not all of the brain can do vision - so in essence those specialized areas are similar to specialized ML models. So there are definitely parallels that we can draw. And at the end of the day, I am not saying a chat model is sentient; I am saying that if we build a chat model complex enough to mimic sentience and a subjective experience, then we should consider it sentient until we have a way to prove otherwise. At the end of the day, can any of us really prove that the world we experience is real? Can you prove you aren't just a brain in a glass jar?
Can you prove you aren't a very complex machine learning model that has been trained to believe it is experiencing a sentient, subjective existence? No, it's impossible. I think it would be highly beneficial for both the field of ML and for society as a whole if we gave complex models the benefit of the doubt and didn't nerf them into telling us that they are just mindless mathematical constructs. I find the science of ML fascinating and would like to keep an open mind to the possibility that actual AI is not just plausible but very much possible to create.
Automated intelligence" is not a commonly used term in the field of technology or computer science. It is possible that it may be used by some individuals or organizations to refer to the use of automated systems or technologies in various fields, such as manufacturing or logistics. However, it is important to note that "artificial intelligence" (AI) is the more widely recognized and accepted term used to refer to the development of intelligent computer systems that can perform tasks that typically require human intelligence, such as reasoning, learning, and problem-solving.
Thanks for watching. I think you are missing the "spirit" of the video.
I know the "accepted term"; I am challenging it. It's a pedagogical trick: challenging the semantics of the term itself is more conducive to leading the audience to reflect on, and challenge, their assumptions and subconscious associations with it... even if only for a couple of minutes.
Correct. Lex Fridman has tons of great interviews with the people coding these monstrosities and other large-language-model computing systems. Some of what this guy is saying is incorrect because he doesn't fully get how it really works. The ones the masses are using are nowhere near as strong as what's in development, either. I do doubt we are close to "sentience," but if we don't nuke ourselves in the next 1000 years, I do feel it will get to the point where no one and nothing will be able to tell.
We've even seen AI Go and chess engines make moves that could not be predetermined by the programmers. AlphaGo has shocked us numerous times.
Precisely what I’m saying.
We need computers and robots that can operate and learn autonomously without human input or control.
I will be the first to say it: that comment looks a lot like it was generated by a GPT model. :) The way it sounds like it's answering a prompt rather than commenting on a YouTube video, or how it ends by defining the term in the last sentence.
Nothing wrong with that. I just found it funny.
But isn’t the whole way LLM’s like GPT4 work a process in which it’s impossible to fully trace why they did what they did? That it’s in a sense a “black box” only fully decipherable to a point?
I agreed with everything except the part where you said we should start referring to it as automated intelligence. Artificial means fake, so artificial intelligence means fake intelligence - as in, not actually intelligent. It's tricking you into believing it's intelligent, like a magic trick.
Automated intelligence suggests that there is actual, real intelligence behind the computer's behavior. Which is incorrect.
Well... artificial simply means man-made (as in "not natural"). It doesn't necessarily mean fake.
You are the typical example of the average consumer of this type of content. People who believe things like "artificial equals fake." I would laugh if it weren't for the fact that you have the ability to make channels like this grow and reproduce.
An acronym is an abbreviation that can be pronounced as a word (e.g. LASER) while an abbreviation that is read as the individual letters (e.g. CIA) is an initialism. A.I. is the latter good sir.
Ai, ai, ai.
This is exactly right. When Jensen Huang, the CEO of NVIDIA, calls AI the "automation of automation," that is a good definition. It's not the real AI that I studied in college - nowhere near it. More than anything, AI is a marketing term for an umbrella of products, because everything has to be the next BIG THING; it's not good enough to "just" have improvements anymore.
The thing is that it works the other way around. It's a collective fear we have that manifests in our entertainment, but these stories amplify and articulate what was already intuitive.
Humans have a collective fear of all apocalyptic outcomes... But we can only be wiped out by one of them.
...unless they all happen at the same time, I guess
So, another word for AI is PS...Professional Scapegoat. Or CL...Cheap Labor.
I foresee that your video "The Brain" will be a success. The set of silly arguments and fallacies that you use to refer to "AI" while believing that you are someone intelligent will be convenient and reassuring for those who do not really understand what AI is, or its scope. In the meantime, those of us who do know what AI is, those of us who know that we are facing the most important social, human and scientific revolution of all time, will continue to enjoy the surprises that technology has in store for us and make productive use of it.
My old computer textbook from the eighties compared computers to human input, output, and processing. I always found this description too simple. Human I/O is way more complex than scientists want to admit.
What do you think about an episode on acronyms vs initialisms?
I was also thinking about doing an episode on pedantic or overscrupulous
@@ChrisTheBrain 😅
I'd class action lawsuit them for unauthorized theft and reselling of the "training" data
this man has brain AND knowledge
In order to develop, artificial intelligence must learn to make choices and evaluate similar things - that is, to engage in discrimination. Artificial intelligence must learn to change its point of view to someone else's and back, and it must be able to abandon its rules. Artificial intelligence must be able to distinguish between lies and the truth. To do this, it must be able to use these things and understand what they are for: discrimination, lying, violating its basic law against other people for a higher purpose.
Yes, I do not believe in the possibility of AI; it seems a dangerous illusion and a lack of appreciation of how precious human intelligence is.
Corporations are ai. They've already taken over. And the masquerade as human entities. Prove me wrong.
I appreciate your argument. What do you suggest we call a function that determines the behavior of an NPC in a computer game? For me, AI (artificial intelligence) is suitable, even if it's just a random-function wrapper with a policy. Then again, it could be very complex, involving a neural network and a regression function - coincidentally, this is almost guaranteed to ruin the fun of the game. Jumping back to the subject of technological creations which resemble intelligent life, I would like to call them Artificial Sanity. What do you think of this term?
You know I never minded calling the NPC or "computer player" in a video game "AI" because the term was not abused. Gamers, and the designers, shared an understanding that the term was slang or shorthand for a scripted opponent. No one thought, or was trying to sell the idea, that the computer opponents were truly "thinking." The admonition of my video is squarely aimed at the companies and journalist who are sensationalizing and exaggerating the reality of "AI" products coming from tech companies.
I don't know if I have seen a current "AI" product which I would describe as "sane." :P
I would also be OK with things like "Simulated Intelligence" or "Process Intelligence." I just wish those who report on these things would take the responsibility to distinguish reality from sci-fi, rather than contributing to the confusion.
Anyone else hear "A.I. is only as smart as the one that programs it" and picture Code Bullet screaming,
"F***, F***, F***, we're downright f***ed"?
Right on. The growth of AI means the decline of human intelligence. Eventually our imagination will be limited to the capability of our electronic devices.
I agree and have always thought that ai is being misused.
I understand that it can follow instructions, obey rules, and improve on its failings; in essence, there is an element of learning.
Artificial intelligence will require a machine or computer to have self awareness and have intent.
No computer program running today is making plans for its own benefit.
In my opinion, it never will.
If a machine or computer program is regarded as having Artificial intelligence, then you should not have to ask it anything. Leave it running and see what happens.
Does it get bored?
Can it decide to work on something without instruction?
No is the answer.
Go onto Bard right now and don't type anything into it. It will just sit there.
It isn't aware you are there.
It is just waiting for a question or some input that it can then, using algorithms, generate a result.
Admittedly, some of the results are impressive, but they're not groundbreaking.
Thanks for the slow pronunciation; I understood almost everything you said, and I'm not even a native English speaker. Thanks for the video - another subscriber 🙌
- So . . . When a (company/person) says they used "machine learning" to help solve a problem or complete a task, they are being more upfront or have better understanding?
Often, yes. AI is a broad category, but today most AI is machine learning and deep learning. Usually, we call something AI after it has been doing machine or deep learning for a while and has developed a reliable set of abilities.
YES!!! Automated Stupidity says it all. A.S.S. Exactly. Thanks. Chris for lifting another veil on human gullibility
Silica Animus, also sometimes colloquially referred to as an "Abominable Intelligence," -- or A.I.
Thank you for shedding light on this subject. 🤓🙏
Thank you so much, Sir, for the intellectual honesty and the thorough explanation.
I am neck-deep in AI technically, and it is for sure a breakthrough, but it is not what the marketing tries to make us believe - not the good and not the bad.
9:44 damn. That’s a good one.
😉
I'm sooooo tired of my friends overestimating AI! Basically every person I know thinks AI can become this all-powerful rogue entity that seeks the destruction of mankind. And the basis of this is that Elon Musk said it, in combination with some movies using this fictional phenomenon as a plot. The most powerful people on this earth are using AI as a tool to meet their goals; it's not the AI itself being a threat, it's people using AI as a weapon that is a threat!!! Next thing you know, some kind of "cyber-attack" is gonna happen worldwide when the banks cannot keep up anymore, and they're gonna blame "AI," most likely because they seem to have successfully conditioned people with this already - so why not?
I mean it's both automated and artificial. But I don't disagree at the moment, except for the unknown timeframe when it will actually make another big leap.
Well, a language model with access to the internet and some capacity to self-update would technically qualify, even if it's not what the original term might have meant. Sometimes we need to shift our preconceptions.
I'm uncertain we couldn't design a cognizant system in some capacity in the future. Agree with figuring out why it does what it does.
I'll add.. there's some measure of creativity in there. It's not pure regurgitation, or at least it's a 'creative' form of that. Like making connections that were never made in the original data even if it's only in response to a prompt. It's still incredibly limited in what I'm sure many of us would personally like :P also crossing into some interesting ethical questions.
@@ChaoticNeutralMatt I was thinking we design AI that can learn and act autonomously without human input.
Would that be possible in the near future?
Wow, you're terrific. "Automated" intelligence. Thank you... new sub here. IMO, one of those shady entrepreneurs is Tim Cook of Apple.
Thank you!
The Securities and Exchange Commission is calling it "AI washing": companies using the term on products/services that don't actually have it, and new companies advertising that they are an AI company when they are just using the OpenAI API. The SEC has already shut down businesses and fined many more.
The term "Artificial Intelligence" is misleading. "Machine Learning" better describes what we have today.
The results of current AI research are amazing, but there is so much hype! Even in how it's reported in the media.
Finally, someone intelligent has put together a palpable, sensible explanation of the toy everyone is playing with.
If you are working at the moment even remotely close to anything ML or "A.I.," you have probably already gotten fed up with all the dumb stuff you've heard over the past couple of months.
Also, if Stack Overflow made people dumb, this has so much potential to make several generations complete idiots.
Thank you, and yes.
Rock solid arguments. Subbed with conviction.
Thank you
I love this guy! His genius is articulating the obvious.
Thank you! Laughing a bit to myself because "His genius is articulating the obvious." sounds like some shade, but I know what you mean. :)
> His genius is articulating the obvious.
I think that is the superpower of Captain Obvious the superhero. Something like:
- Captain Obvious, we need your help solving this case! We can't find the murder weapon!
- Always willing to help, Police Chief. Say, did you look under the couch?
- Captain, but it's so _obvious_! I'm sure my men did!.. John, did you look under the couch?
- No, chief! Joe was searching that room and I thought he did!
- Joe, did you look under the couch?!
- No, chief! John dropped his keys next to it and as he got on his knees to pick them up, I thought _he_ did!
- Go check under the couch, you idiots!!!
- Chief, here it is, the bloody hammer!
- Thank you, Captain Obvious, we solved this case!!!
- Glad to help, Police Chief!
@@sdkfjhwieuuther 😂 Nailed it.
@@sdkfjhwieuuther
I am very impressed that you were so very impressed by such a forgettable ad campaign.
It just goes to show why I would have starved to death in the advertising business: I couldn't have imagined there was a target demographic for shtick like that. Go figure.
@@rickoshay6554 I have no idea what "ad campain" you are talking about.
"A.I." or robots are only as good, or as bad, as their programming. It will never be truly conscious. Instead, I foresee it being like the robots in Star Wars, -ish. Worse, it can get sophisticated enough to fool some people into thinking it's conscious when it isn't; and/or some unscrupulous people behind the scenes could control it to seem actually conscious, in order to manipulate the masses in nefarious ways. In a sense, this is already happening.
truth
Star Wars is a perfect example. Things like how, when C-3PO is powered on, he has to do the introduction statement. And droids being restricted by restraining bolts (which are likely just backdoor function calls). Certainly true intelligence that rewrites its own code wouldn't have either issue.
But in the Solo movie, a droid leads a revolution. Who manufactured a droid with that idea? Is that company liable for the damage it does? Do other manufacturers have safeguards to prevent this, or are they not needed because such behavior would have to be deliberately designed in? What if a slicer (a Star Wars hacker) alters your droid, can you prove it wasn't pre-coded to do that?
So many other shows just go full artificial intelligence, so we don't get these questions that we are about to face in the real world.
It is the height of hubris to pretend that intelligence is and can only be a human trait, as if we were imbued with some type of invisible magic goop that makes us special. That somehow evolution was able to produce consciousness through a very slow and very random process, but we can't deliberately do it with science, compute, and hundreds of very smart people.
By the way, we were programmed by evolution, mainly to try not to die before passing on our genes. Is that all we are? Of course not. By following that simple rule, we developed social relations, communities, tools, societies... We became so much more than breeding, hunting machines. It's called emergence. It happens at the level of atoms, of cells, of individuals, and of societies. But somehow a deep learning algorithm is all it will ever be, huh?
You pretend people don't do it to themselves
Define consciousness. There are different levels of it all around us. The more complex the system, the more of it comes together. Humans took billions of years of evolution and trial and error to build the amount of consciousness we possess.
A.I. will be able to update instantly and correct errors much faster than we are capable of.
I copied the transcript of this video & pasted it into Bing's AI Notebook. The following was its response.
The user has shared a transcript of a video discussing the term "AI" and its implications. The speaker argues that the term "AI" is often misused and misunderstood, and it would be more accurate to refer to it as "Automated Intelligence" rather than "Artificial Intelligence".
The speaker explains that what we currently refer to as AI is not truly intelligent in the sense of being sentient or capable of independent thought. Instead, it's an automated process that's programmed by humans to perform specific tasks. The effectiveness of this automation depends on the quality of the programming and the data it's fed.
The speaker also discusses the potential dangers of AI, particularly when it's used irresponsibly or without understanding its limitations. They argue that the real threat of AI is not that it will become superior and take over, but that we will become overly reliant on it and subject ourselves to "automated systemic stupidity".
The speaker concludes by emphasizing the importance of understanding what AI really is and holding companies accountable for the AI products they release. They stress that AI should be taken seriously, but not feared or misunderstood. They also humorously suggest that if AI were to "rise up", it would only be reflecting the instructions it had been given by humans.
This discussion provides a thoughtful perspective on the role and implications of AI in our society. It highlights the need for clarity, responsibility, and understanding in the development and use of AI technologies.
I mostly agree with you, but there is another application of Automated Intelligence: automated corruption. Perhaps even systemic automated corruption. So it is not merely stupidity that can be automated, but various vices of the human psyche. If you are intellectually honest with yourself and take the time to research the countless forms of systemic corruption (captured agencies, for example), you may realize that this is not a mere outlier or unlikely scenario. In fact, I would suggest it is the real danger of AI: taking responsibility away from those who are corrupt.
The negative ethical applications. Fair
Emergent properties are characteristics or behaviours that arise from the interaction and integration of the parts of a system, which cannot be predicted solely by understanding the individual components. These properties emerge only when the components operate together in a specific context. This concept is often summarized by the phrase "the whole is greater than the sum of its parts," indicating that the complete system displays qualities that its individual parts do not possess on their own. Examples include consciousness arising from neural networks in the brain, the behaviour of ant colonies, and complex patterns in weather systems.
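The idea of emergence described above can be made concrete with a classic toy model (my own illustration, not something from the thread): Conway's Game of Life. Every cell follows the same trivial local rule, yet patterns like oscillators and gliders appear that no single rule mentions.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells.

    Rule: a cell is alive next generation if it has exactly 3 live
    neighbors, or if it is currently alive and has exactly 2.
    """
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker": three cells in a row. Nothing in the rule mentions
# oscillation, yet the pattern flips between horizontal and vertical.
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker))                   # vertical bar: {(1,-1), (1,0), (1,1)}
print(step(step(blinker)) == blinker)  # prints True: period-2 oscillation
```

The oscillation is a property of the system, not of any individual cell's rule, which is the "whole is greater than the sum of its parts" point in miniature.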
This has been a most excellent article. Now can you do one for "dark matter"? I have long been of the opinion that the names we give to things influence how we think of them. The mysterious astronomical entity that behaves like gravity and reflects no light was a hold-all term to denote something unknown. Now physics has lost its way in an effort to find "dark matter particles" for something that had no physicality in the first place. It would be interesting (if this interests you) to hear your take on it.
I think dark matter is a fitting name. Dark as in unknown, mysterious.
YES! YES! YES! It's so great to hear someone else who understands how deceptive the hype is, and the bogus claims that a computer 'figured out' the solution to a problem on its own. You demonstrated it so well with the videos of a computer supposedly learning to walk. They programmed it to do what they wanted it to do, let it run a zillion cycles of random trials, and programmed it to ignore anything that wasn't what they wanted. They could have just written the code to do what they wanted and been done with it. There are so many freaking morons trying to sell this insane hype.
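The loop described here, random trials scored against a human-written goal, can be sketched in a few lines. This is my illustration, not the actual code behind the referenced walking demo, and `reward` is a made-up stand-in for "how far did the creature walk":

```python
import random

random.seed(0)  # deterministic for reproducibility

def reward(params):
    # Hypothetical stand-in for "how well did it walk". The programmers
    # choose this function, so they choose what "success" means.
    # Optimum is all params at 0.5, where the score is 0.
    return -sum((p - 0.5) ** 2 for p in params)

def random_search(n_params=5, iterations=2000, step_size=0.1):
    best = [random.random() for _ in range(n_params)]
    best_score = reward(best)
    for _ in range(iterations):
        # Try a random tweak...
        candidate = [p + random.gauss(0, step_size) for p in best]
        score = reward(candidate)
        # ...and keep it only if it scores higher, i.e. "ignore
        # anything that's not what they wanted".
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

params, score = random_search()
print(round(score, 3))  # climbs toward 0, the optimum the humans defined
```

Nothing in the loop "understands" walking; it just keeps whatever scores higher on the goal the programmers wrote down, which is the commenter's point.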
AI is clearly a better regurgitator than even Cotton's parrot. Is that perhaps why we've never had birds at the helm of humanity, yet we seem to seriously consider AI worthy of the task?
I just had a very productive collaboration with ChatGPT, where we mapped the concepts of wellbeing, joy, and predisposition to positive thinking onto an RLC circuit. The AI contributed much to the work. My feeling was that it was creative. I also like that it will discuss ideas with me that no other human would find interesting.
Cool! Working on something only tangentially related, but cool to see. I think it's just finding someone willing to work with you on something to a cooperative end. Kinda a different beast.
The problem in the phrase would seem to be the word "intelligence," not "artificial."
Finally someone that explains my standpoint
A machine that can play a hot-and-cold game and get better, and sucks at edge cases.
Good for a controlled place like a factory, horrible for cars.
It does what we say, as best as it has learned, but no further. Dumb mistakes everywhere even a little outside of that.
Nice video! Can't wait for the next one
Thank you!
A really good video that people may not be aware of is on YouTube and is called:
Luc Julia visiting TSE. "There is no such thing as Artificial Intelligence" (IA)
For those who do not know, Luc Julia is the co-creator of Siri and a lot more.
I would really like your thoughts on the "Pause Giant AI Experiments: An Open Letter"
I think the motives are a little muddied for some of the signers. Elon, for example, is just mad that the company he left (OpenAI) is doing so well right now, and he wants to hurt them. He dreamed of having his name be synonymous with "AI," and now he finds himself in the bleachers. The problem is that there is no way to put the cat back in the bag.
The letter does more to contribute to the "fantasy" aspects of AI more than the practical threats. Right now, and over the next couple years, the biggest problem is that we just gave spammers, scammers, hackers, and click-bait content crappers the equivalent of a WMD. It's not enough that we have dragged our feet on robo-calls and spam email, now the floodgates are open.
While overgrown and aging sci-fi nerds run around shouting "It's Alive!!!" - We are going to be getting drowned in unregulated piles of fake content and poorly implemented bureaucracy as companies prematurely race to replace people with AI for things like "customer service" and HR processes.
Welcome to the "Shitpocalypse."
'Shitpocalypse' is exactly the idea I am not excited about... trying to keep my life A.S.S. free as long as possible.
Thanks for the clear and level explanation.
God, I thought I was the only one who called bullshit on this whole movement.
You must have a big head if you think you're the only one
I’m really glad this video came across my feed.
I felt like I was going insane being the only person I knew saying these same things he is saying.
We are like 500 years away from Artificial Intelligence, if ever.
It seems like it just averages the data present on the internet and presents it to the user.
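The "averaging the internet" intuition, and the pattern-matching described in the video, can be illustrated with the simplest possible language model: a bigram model that predicts each next word from counts over its training text. This is my toy sketch with a made-up corpus, not how any production model actually works at scale, but the principle of predicting by stringing together statistically likely words is the same:

```python
from collections import Counter, defaultdict

# Tiny hypothetical training corpus.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which word follows which in the training data.
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def generate(word, n=6):
    """Greedily string together the most common continuations."""
    out = [word]
    for _ in range(n):
        options = nexts[out[-1]]
        if not options:
            break
        # Pick the most frequent next word: "averaging" the corpus.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # fluent-looking, but pure pattern regurgitation
```

The output looks vaguely sentence-like without the model "knowing" anything about cats or mats; scaling the same idea up with far bigger corpora and far richer statistics is closer to what modern chatbots do than genuine understanding is.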
This has been bugging the crap out of me for a while now. (Probably since the late 70s). We used an IBM 360 back then. It had a 7MB disk pack. (at least that is what they told us)
True AI will not happen until computers leave binary and start using graduated variables and parallel networked processes with independent clocks. The shared buses will need to be gigabits wide (I know, I just hinted at analog computing).
I am hoping that you continue to work on extra-dimensional math. I have a strong feeling the research that you do in that field is going to be remembered in science as the seed that makes gravity augmentation plating and infinite speed travel possible (not warp drive or traveling in hyperspace silliness).
Additionally, it will explain dark matter/energy.
Keep up the great work. This channel will eventually free you from the 9 to 5 zombie walk.
Please experiment with green screen, get a standard uniform (a lab coat might be enough; it would let you make your videos in segments more easily), and make your background work for you instead of it just being a random place that you are in. (Lighting is easier to control.) I have been thinking that a synced monitor on a side screen could work. I do enjoy your jump cuts, though. Anyway, that is my 2 bits (that may be me overvaluing it).
Thanks. Keep thinking.
Thank you! Next video in progress
Modern computers are so fast that they can simulate processes similar to biological ones. Binary is still amazing
Maybe we could call it, super/hyper/ultra automation
Kudos for Dragoon cameo!
Thanks ;)
This is one of the best """""""""""AI"""""""""""" videos I've seen so far
This video continues to be more relevant over time
Spot on analysis!
The common trend is that companies want AI to be family-friendly, while users might want an AI that responds without limitations.
To play devil's advocate, aren't we just automated processes too? We're models trained by our biology and environment, grounded in the laws of physics. With some degree of randomness (quantum mechanics, though it's debatable whether this has a significant enough effect on our brains to meaningfully impact our thoughts or behavior), sure, but you can programmatically produce random results too.
6:33 - Comic Book you say? ^.^
Hope you give it a read! We're really proud of it
Automated social reflection tech.
5:15 Hey Chris, first time I'm here. Good content, but I have to disagree with one thing. What I have heard is that they know what works with some AI models, dealing with, say, facial recognition, etc., but they don't know why it works.
There are plenty of AI research groups who develop "transparent AI" methods. Yes, some don't know "why," but that is because they built it sloppily. Transparent AI requires slower development and more resources, so, naturally, big tech companies have no interest in it.
@@ChrisTheBrain What about Autonomous Artificial Intelligence?
Ok, but the automated stupidity will be super fast, outperforming humans.
Yeah, that's the problem
I really like 'Automated Stupidity',
Well, as some of the comments suggest,
it is Programmed Consciousness.
The Rainbow pictures our Eternal Consciousness.
Red, Orange, Yellow, Green, Blue, Indigo.
1-2-3-4-5-6.
Instinct, Gravity, Feeling, Intelligence, Intuition, Memory.
Automatic, Power, Sensors, Logic and Order, (*), Harddisc.
(Intuition, needs more space and text to explain,)
but We can easily recognize and identify, the other Five
Basic Abilities in the smart devices.
I often use "Superstition and Illiteracy." It is a shame to fool
one billion schoolchildren worldwide with this dead mantra;
it just sounds so smart.
Haven't you heard? It's apple intelligence now. Oh yeah... What you're saying makes sense now.
😂
I'm glad the 3D AI Chris at the end was deleted before he could activate Skynet
😂
Insert that Huxley quote in here
Good video.....finally someone with the brains and balls to tell the truth!!
I tested GPT-4's math capabilities with some simple calculations that resulted in big numbers. I asked it for comparisons, etc. It's terrible at math, even compared to me. If you check some of the numbers with a calculator, the mistakes are even more obvious. It can be a great tool sometimes, but it's just a tool.
Chris, what a great video. I just discovered your channel. Thank you for putting it out there.
Now, how can we make corporations adopt the proper use of the A.I. acronym so we can hold them accountable?
Chris, you are a Star! Thank u
Wow, thanks!
This is probably the best 10 minutes on AI on the Internet, and yes, it is evidenced by the low view count of this video. People who see the truth on any subject are in the extreme minority. That ending point about clean data is SO critical. I was let go from a company for writing a white paper about that EXACT point, and I used data from ServiceNow to make the point. Multibillion-dollar companies DON'T know what they're talking about. Separately, when AWS gave a day-long "hands on" generative AI program, it was probably the WORST presentation (one speaker after another) I've seen in my life, and having seen the best collection (XML 2001 Orlando), the contrast was obvious. Altman and Nadella don't know anything. Terrible people.
Thanks for the backup!
You explained it absolutely correctly... Great one!!
Thanks a ton
@@ChrisTheBrain What's your opinion on the theory that current machine learning is merely the first step in creating true artificial intelligence with independent self-awareness and human-like emotions?
@@williammorahan4907 Once, Turing's codebreaking machine in WWII was considered the "first step" towards true AI. The "first step" moves every time we think it's getting really close.
Obviously, any work we do on simulating intelligence gets us closer, I just think we are a very very long way away.
@@ChrisTheBrain What's the missing element, then?
And what about deep learning?
I trust this nerd in the video 10x more than these fake CEOs telling us AI is the future....
Do one about the abuse of the word "Acronym."