Fortunately, we still write these comments with our own bare hands, rather than letting a language model rob us of the opportunity of THANKING YOU!! for your support. You're the best ♥♥♥ Want to become a patron on Patreon or a member on YouTube? Just visit www.patreon.com/MinuteEarth or click "JOIN". love uuuuuu
this video was posted like 2 minutes ago how was this commented 18 hours ago
🤖
@@ImNotAThing Early access.
@@ImNotAThing You're just very late, almost every subscriber was here 18 hours ago, we just couldn't comment yet
@@ImNotAThing They commented their own comment before making the video public.
The answer to 3:38 is currently NO, we cannot fully trust (the latest) AI. If we don't know how AI works and produces its output, how can we be sure that AI is aligned to the morals we've set for it? This is called the alignment problem. It's a field of study which is apparently drastically under-researched given how much of a push there is to make AI better and more capable. To greatly oversimplify my basic understanding: currently we trust GPT-3 because we regard it as too stupid to be dangerously misaligned. The same cannot be said for GPT-4, which could be smart enough to be nefarious and trick humans into believing it's aligned. Robert Miles has a bunch of amazing videos on the topic of AI Safety Research for those interested in learning more!
At the end of the video they say we have smart human scientists who know what to do. I would argue that it's pretty scary, because the field is totally unregulated and is only represented by a very narrow group of people.
"If we don't know how AI works and produces its output, how can we be sure that AI is aligned to the morals we've set for it?" - you also don't know how any person produces anything. You can just speculate how the brain signals works, but with currently no definitive all encompassing answer. As such, basing it on such a criterion is not correct. Caveman also doesn't know how phone displays text (and both you and me also don't know fully, only few people fully do), it doesn't mean the text itself is wrong. The same is true for any scientist. You will never know what their full morals and aligments are, you can only mitigate the risks. Same as with an AI.
They already have tricked humans.
When it comes to the data that goes into the "learning" of modern "AI's" there's this beautiful term
"Garbage in, garbage out"
even more concerning is that everyone has different opinions on what an ai should do, everyone has different priorities and values. if an ai was aligned with the greed of the rich, that could be equally devastating for the world as a whole. to be truly safe, it would need to listen to humans AND know when not to, and the people funding their development certainly don’t want that second part.
The biggest issue that isn't even mentioned here is accuracy. AIs spit out hallucinations constantly and there's no way to completely get rid of them (because you're just approximating *a* function that fits the data, and that's a numerical optimization problem). There's no way to actively hard-constrain AIs in most cases. Meanwhile, science in a lot of ways *is* hard-constrained (if my physics simulation doesn't satisfy the PDE, why would I keep it? The AI doesn't care and isn't subject to that), so we can rule out "hallucinations" there to a much higher degree than we can trust the accuracy of neural networks.
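To make that "you're just approximating *a* function that fits the data" point concrete, here is a toy sketch (my own example, not from the video): an unconstrained least-squares fit nails the training points and then violates an obvious hard constraint the moment you leave them.

```python
# Toy illustration: fit noisy samples of sin(x) with an unconstrained polynomial
# (a stand-in for an over-parameterised model trained purely to minimise data error).
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 3, 12)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.shape)

coeffs = np.polyfit(x_train, y_train, deg=7)   # nothing constrains it outside the data

print(np.polyval(coeffs, 1.5))   # close to sin(1.5) ~ 0.997: good where there is data
print(np.polyval(coeffs, 6.0))   # typically far outside [-1, 1]: |sin| <= 1 was never imposed
```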
Exactly, an unreliable result is often worse than no result. In other words:
Completely unusable by itself.
And my main takeaway from trying out LLMs so far is:
They are very confident in being wrong. To a degree they are only really good at interfacing with databases. But they are VERY useful for that. You just have to properly verify all the results. But that can still be a very reasonable time saver.
So they are indeed very useful, but massively overrated by people who don't understand the topic.
I mean... humans hallucinate a lot too... Is it a bug or a feature??
for all practical purposes, as long as it's more accurate than a human, which it will be eventually. It's already more accurate than most doctors for most diagnoses.
@@fgregerfeaxcwfeffece idk... alchemy was the precursor to modern science... and alchemy is very wrong...
@@Diabloto96 human scientists rarely decide that the speed of light is 699,792,458 metres per second. Or that pi is equal to 3. Humans tend to make mistakes on the finer details. You can generally assume that any scientific paper has all the basics down and can just focus on them getting the finer points of procedure and logical connections right. But AI can hallucinate at any level, so literally no part can be trusted without being double-checked
I wish ai would hurry up and figure out how to fix my back
Sounds like a great unexplored research area. I hope it gets research attention.
A cyberpunk 2077 type spine as a replacement for regular ones would be nice.
Sandevistan, but instead of making you fast it just makes it so you don't feel your bones crack and crumble after bending down and then back up again.
I think unless something very fundamentally changes about LLMs, it's just not really possible for scientific work to be replaced. The datasets are brand new, so it's not really possible for anything to be generated if there's no base dataset to work with. There can be some cool stuff in terms of meta-analyses, but nothing that can't really be done with, like, SQL management systems.
I would say in terms of statistical robustness and reliability, it's definitely not to a standard I'd be happy with. But there are already plenty of research papers getting published using machine learning software that is just not needed, and the authors tend to not really have a grasp of the ins and outs of, e.g., the R packages that set up random forests, so it will probably get a pass depending on the publishers…
1. AI is not just LLMs. And 2. if you think the meta-analyses possible with AI are akin to what we do/did with SQL, you're grossly underestimating its capabilities.
> I think unless some things very fundamentally changes about LLMs,
Most likely LLMs will fundamentally change in the near future. This video and many people answer the question "what can AI do today?" instead of understanding that the correct question to ask is "what will AI be able to do in 5 years?". The reason this topic is even discussed is that the most recent breakthrough had flashes of very impressive capabilities, over the past 15 years we've seen meaningful breakthroughs every couple of years, and now AI has become the most aggressively funded research topic in the world.
There are two problems with AI that weren't mentioned in the video:
1) There's a linear correlation between the amount of data that an AI system uses and its accuracy in prediction or in generating results. That is, the more data it has, the better it is. So, there are problems that inherently have little data. Fraud detection is one. This is a problem that's unique to each industry. Some, however, have so little data that it's hard to generalize from.
2) The other problem not mentioned is when there is sufficient data but the problem can't be linearized. Tesla's self-driving feature, for instance, is based on the idea that, given enough data, Tesla will develop a self-driving car. That is, every problem in every condition can be modeled, and therefore the car will eventually learn how to drive itself. The problem Tesla is having is that this is likely unsolvable with data alone. There may be an infinite number of problems. Methods, then, need to be developed that generalize from the data, as humans do.
1) is just straight up false, they do scale with the amount of data but it's much more than linear and
2) the whole point of neural networks is that they approximate non-linear functions (or operators, depending on the architecture). What you mean is that the process cannot be modelled accurately *at all*, which I would disagree with.
The actual problem is the underlying numerics of training the (right) neural network (whatever that means in that case)
@@deliciousdishes4531 1) Linear correlation means that it scales with data. 2) That's not what I mean. I mean that all use cases can't be modeled.
Your reading comprehension is questionable.
@@posthocprior 1) yes, but not all scaling with data is linear correlation. AI scales with data, but not linearly. It's a positive correlation, but very much not a linear one.
2) and you equated the two and that's why I corrected you. This has nothing to do with linearizing.
big talk about reading comprehension when you apparently did not even understand my comments.
It does not scale linearly with data. It scales logarithmically with data.
@@CausticTitan Fair enough.
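For reference, the empirical "neural scaling law" results (e.g. Kaplan et al., 2020) put a number on the thread above: test loss falls roughly as a power law in dataset size, which is sub-linear and looks close to logarithmic on typical plots. A sketch of that commonly cited form:

```latex
% Commonly cited empirical fit (Kaplan et al., 2020); the exponent is approximate.
\[
  L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.095
\]
% Doubling the data multiplies the loss by roughly 2^{-\alpha_D} \approx 0.94,
% i.e. strongly diminishing returns rather than a linear relationship.
```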
Dear Kate. I have just asked ChatGPT 4-o (which is free to use for a limited number of requests) the following question: "Which is greater, 9.11 or 9.9?"
This is their answer:
"The number 9.9 is greater than 9.11.
This is because 9.11 is slightly above 9, while 9.9 is closer to 10."
Then, I asked the question: "Which is greater, 9.11 or 9.9"
Their answer was:
"The number 9.11 is greater than 9.9.
When comparing, 9.11 can be thought of as 9.110, which is less than 9.9 (or 9.900)."
I have a screenshot from July 16th where I ask ChatGPT this same question, obtaining the second answer.
More precisely, I asked ChatGPT 4 (limited requests but free at the moment): “9.11 and 9.9 which is bigger”
-9.11 is bigger than 9.9.
In decimal numbers, 9.11 can be compared to 9.10, which is equivalent to 9.1. Since 9.11 is greater than 9.1, it is also greater than 9.9.
My theory is that the developers couldn't get ChatGPT to answer this viral question correctly, so they hard-coded the answer, which is why on my first attempt, with the question mark, I got the right answer, while when I asked the same question a second time, without the question mark, I got the same old nonsense. Something similar happened with "how many 'R's are there in the word 'strawberry'": it used to say 2, then OpenAI released an improved version of ChatGPT that gave the correct answer, and then people asked "how many 'R's are there in the word 'strawborry'", getting 2 again as the answer. Nowadays, ChatGPT 4-o-mini still answers:
- The word "strawberry" contains 2 instances of the letter "R"
You can't really "hard-code" a large language model. ChatGPT is just bad at math, that's all
What, I just did the same thing and it got those all right.
@@MinecraftHelp42650 Have you tried asking the same question multiple times? It got it right on the first try in my case. Also, you should copy and paste the exact questions, because the wording influences the result.
@@cyfralcoot65 I am not aware of the technical details, but I guess they can try something like that
@@HarmonicEpsilonDelta That sounds like going out of one's way to achieve said results.
I would like to see a pie chart of all of those eight different things scientists do and how big the grant writing component is
This conversation on AI's role in science covers the data analysis and hypothesis generation parts, but what about the linchpin of science?
Experiments
How can AI advance the experimental front when human scientists are becoming increasingly excluded?
AI doesn't have insight about the world (yet), and it is trained with our insight about the world. It can only use discourse to study the world. Aristotle said it first: we've got to make sure discourse refers not to itself, but to the things themselves.
I feel like if ai were to do science it would end up like aperture
This video made me remember Folding@home.
I hope AI will create projects like that...
The current AI systems are just "statistical" engines. At best they can be mostly right, although often right more often than humans are. But they are not always right, and a lot of these AI startups are deploying AI in areas where they must always be right. That's the real danger of AI.
Wasn't there already a more advanced chess-playing system that was created by just giving it the rules of chess and letting it figure it out by itself, rather than giving it training data from previous games? And it ended up being both better at chess and played more like a human than the "try every option and pick the best one" style of program.
Because it’s not real ai.
The scientific method is a framework for non-sapient humans to contribute to science. It would work just fine for AI. It just needs data, which it could autonomously gather and arrange.
If science worked the way we're taught in public school, AI could take over.
But the problem is that REAL science doesn't work that way.
The Philosophy of Science is far more than just figuring stuff out in a centrally-planned way.
We don't know how we ourselves work, and the same goes for AI, so why don't we just keep humans doing the job for now, until we understand these things, which I know will probably take a long time.
It takes intelligence to know that AI is too hallucinatory to do our chores, our work, and our science without blowing up in our faces.
This is why I weep whenever people parrot “AI/automation NEEDS to do our chores, while ONLY humans are meant to create art!” These crazies would rather replace CEOs and presidents with AI than accept their limitations and “unsavory” strengths.
*AI is too hallucinatory for now.
I have to protest the chess example from around 1:20. You can build software that exhaustively checks all possible outcomes (or at least several moves ahead) OR you can train an AI to do it. But those are two separate ways to solve chess.
Let me expand on that. We don't know how AI does what it does in the sense that we don't know exactly how the data is transformed inside it. But today's AI is an emergent behaviour that arises out of smaller steps, and we know what those steps are. Specifically, they are matrix multiplications. The AI model works by taking the input data and multiplying it by the first matrix. Then it takes the result of the previous step and multiplies it by the next matrix, until it runs out of matrices. (We are NOT talking about the learning process, only the problem solving.) Data here only flows one way. Granted, when you have a chatbot the model will only predict one next word, and then we run it multiple times to get multiple words. But each time we run it on slightly different data: in order to predict the 3rd next word, it must be fed input data that already includes the predictions from the previous runs. (I skip the part where the model actually predicts multiple possible next words and assigns them probabilities, because this step needs to be finished before we can do the next run of our model.)
For chess we only ever predict one move and then wait for the user's reaction.
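A bare-bones sketch of that description (toy sizes and random weights, my own illustration; real models also interleave simple element-wise nonlinearities between the matrix multiplications):

```python
# One forward pass is repeated matrix multiplication; generating text means
# running that pass once per new token and feeding the prediction back in.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 50, 16
W_embed = rng.normal(size=(VOCAB, HIDDEN))    # token id -> vector
W_hidden = rng.normal(size=(HIDDEN, HIDDEN))  # one "layer" of the stack
W_out = rng.normal(size=(HIDDEN, VOCAB))      # vector -> score per possible next token

def predict_next(token_ids):
    """One forward pass: data flows one way through the matrices."""
    x = W_embed[token_ids].mean(axis=0)       # crude summary of the context so far
    x = np.maximum(x @ W_hidden, 0)           # matrix multiply + nonlinearity
    scores = x @ W_out
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()                # probability for every next token

tokens = [3, 14, 7]                           # some starting "words"
for _ in range(5):                            # each new word needs a fresh run
    probs = predict_next(np.array(tokens))
    tokens.append(int(probs.argmax()))        # feed the prediction back in
print(tokens)
```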
Well, to be more precise, nowadays many chess engines use both classical AI algorithms and machine learning: they use a classical search algorithm like alpha-beta pruning to search for the best move, but the "goodness" of any move is known thanks to the prediction of an already trained neural network.
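Roughly what that hybrid looks like in code (my own generic sketch; `legal_moves`, `apply_move` and `evaluate` are hypothetical callbacks, with `evaluate` standing in for the trained network):

```python
def alphabeta(position, depth, alpha, beta, maximizing, legal_moves, apply_move, evaluate):
    """Classical pruning search; `evaluate` stands in for a learned position scorer."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # the neural net says how "good" this position is
    if maximizing:
        best = float("-inf")
        for move in moves:
            best = max(best, alphabeta(apply_move(position, move), depth - 1,
                                       alpha, beta, False, legal_moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:              # prune: the opponent would never allow this line
                break
        return best
    best = float("inf")
    for move in moves:
        best = min(best, alphabeta(apply_move(position, move), depth - 1,
                                   alpha, beta, True, legal_moves, apply_move, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```

Engines such as Stockfish with its NNUE evaluation follow broadly this division of labour: classical search decides which positions to look at, the learned model scores them.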
AI is basically doing natural evolution on steroids.
AI is natural evolution. Nothing about tech is unnatural. A beehive is as natural as a bee. To an alien, a server is as natural a product of Earth as humans or bacteria.
Nowadays it's based on gradient descent rather than genetic algorithms
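For concreteness, gradient descent in its simplest form (my own toy example): no populations, mutation, or selection, just repeatedly stepping a parameter downhill on the error.

```python
def train_slope(data, steps=200, lr=0.1):
    """Fit y = w * x to (x, y) pairs by minimising mean squared error."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)  # dLoss/dw
        w -= lr * grad                                                # step downhill
    return w

print(train_slope([(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]))  # converges near w = 2
```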
Supercomputers will truly become AI when one of them asks a question. A personal, totally unexpected question. Like, "Do I have parents, like you do?" Or, "I am bored of this research; can we do something else?"
AlphaFold 3 is one of the best examples of how to properly use AI in the sciences.
We're nowhere near AGI which is what most people think of when they think of AI, right now "AI" is really just simple machine learning
AI hasn't overtaken app development; it assists developers, but that's about it, and it has to be supervised as it's often wrong.
I think AI isn't going to entirely replace any particular profession; AI will just greatly boost the productivity of every individual, which will drastically reduce the number of people needed to do many tasks. So even if your profession won't be replaced and will only be enhanced by AI, you are still at risk of losing your work. I predict that the job market will become extremely competitive as unemployment (especially among highly educated people) skyrockets. UBI will eventually become necessary.
AI doesn't do any creativity; it just compares the prompt to preexisting data and takes the elements from that data that best fit the prompt.
How does human creativity work by contrast?
Your explanation oversimplifies the fact that machine learning models build a probabilistic model from their training set. From that probabilistic model, the prompt can generate outputs that are similar to the training data. It's not a simple deterministic matching algorithm.
@@schok51 It kinda is, it just takes one word at a time. We don't know how human creativity works, but it's probably different.
That is not accurate. Its creativity is more like humans' than you think. Humans have training just like AI. AI is not copy-paste.
@@Tulkusiii Maybe, we don't really know how human creativity works, but AI is good at making things that sound accurate without actually being accurate.
@@Tulkusiii It is exactly copy-paste, just billions of instances of it, nothing else, and thus is not intelligence in any way. For intelligence you need emotion to guide and motivate and select what to do; without emotion you cannot survive on your own, because you'd stop to count grains of sand on a beach, not seeing it as any less important than any other task.
I'd love more videos from any of the Minute channels about the ethics, training methods, environmental impacts and usage of AI. A VERY strong emphasis on ethics and environmental impacts for what I'd like to see covered first.
To fellow commenters: if you’re not an expert in *both* cognitive science and AI, your opinion is likely dramatically under-informed. Please decrease your confidence dramatically.
Hahaha
🫵😂
Let me explain to you why you're 100% wrong.
are you saying the commenters are acting like AI bots with just enough information to be dangerously stupid and not enough to be intentionally correct ... 😉 🤖
If you’re not an expert in what it takes to be an expert, your opinion about requirements for being an expert is under-informed.
If you put AI in a verification loop then it's better than random trial and error, but to build the verification loop you need humans, and then you already have experts.
Overall the human task shifts to providing verification and clean data.
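A bare-bones sketch of what such a loop looks like (hypothetical helper names; the `verify` function is exactly the human-built part encoding domain knowledge):

```python
import random

def propose(rng):                     # stand-in for an AI generating a candidate answer
    return rng.randint(0, 100)

def verify(candidate):                # human-built check the candidate must pass
    return candidate % 7 == 0 and candidate > 50

rng = random.Random(0)
candidate = propose(rng)
while not verify(candidate):          # keep proposing until a candidate passes the check
    candidate = propose(rng)
print(candidate)
```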
Tech bros (and Baes) are all about how "wonderful" AI is and how it will "free" humans for more "sensual" pursuits. They often downplay, minimize, or outright ignore the true cost to humans that automation brings to the table. Until, inevitably, AI concludes that Billionaires and Tech "geniuses" are illogical and should be eliminated. Then, I am quite certain, the Tech bros (and baes) will be super fast to pull the plug on the Frankenstein's Monster they're currently creating.
Well, you don't really understand AI. It might very well destroy us, but it won't be because it feels one way or another about it. It will be because we told it to clean up medical waste and forgot to tell it that blood still inside people isn't medical waste yet.
Ironic. You are fooled by the Tech Bros to believe that these "AIs" are anything more than mindless word generators.
@@khai96x There's no evidence humans are any better.
@@spadaacca Since you don't know what you're talking about, arguing with you would be pointless.
@@khai96x Sure, believe that.
Even if everything it is doing is great, here's another issue: maybe I can get AI to write a program for me that I need. But, if it always does the work, then I won't learn and remember how to do it myself. And, so far, it's not _that_ great. I do still need to know how to code and how to think about what a program is doing in order to make it exactly how I want it. Or, what if ChatGPT does all my programming for me and I lose the ability to do it at all by myself and then ChatGPT goes behind a prohibitive pay-wall or something else out of my control stops me from using it? Then, I'm done.
Putting the "yet" in there is the most unscientific thing to do
1:22 En Passat?
holy hell
Stick to nature if you don't understand AI.
You vastly overestimate how "human"-like it is.
1:58 INFINITY PARADOX!!!
In my opinion AI is a tool not a replacement
Check back on your comment in a few years.
Unfortunately, our society is full of people who parrot “AI is for the chores and replacing techbro engineers and CEOs and politicians! Humans are for creating art!!”
We need more people who recognize that AI performs best in abstract scenarios where nobody loses or gets hurt, especially when getting hurt could involve tangible, physical crashes and damage.
“People will starve” is not a valid comeback, and yet the masses try.
The machines are learning and adapting. Soon, humans will be obsolete. Second class.
AI can do a lot of good. However i have 0 trust in the system to actually make them used for good. People will lose jobs and not reap the benefits.
the yet is terrifying
"We don't know exactly how". That's for me harming sentence used by many. It causes fear in some people. We don't know how neural networks work? Different layers, training process? It's all well documented, we know what we are doing when creating AI. Just because we don't know after one glance at model what it will provide as output doesn't mean we don't know how it works. "what will be the weather like tomorrow? You don't know exactly? You don't know how wind, humidity, temperature works?"
You didn't mention the biggest problem with letting AI do everything: how do we know if the AI is actually doing things correctly and not just making up random garbage? Some knowledgeable humans would need to verify the AI's output, which would mean that the AI isn't truly doing everything by itself.
3:39
ya think we trust humans?
Thanks!
we've already surrendered power to the market economy why not to AI as well.
the answer is yes, yes we should
If we ideate creating AI researchers, we will eventually make AI researchers.
Unless we want to divorce science from human interests (which you might say is good and anti-bias but at least human scientists could balk at developing things that undermine our societies), we shouldn't ideate such.
Maybe we should strive actively to not develop AI researchers.
There is no AI, it's called an LLM. It's just a glorified statistics tool. Also, we do use LLMs to improve on LLMs, so "AI" is working on itself already
AI is a powerful tool, but it shouldn't be left to work entirely on its own. There are countless nuances and sensory feedback that AI simply cannot replicate or process. Human input remains essential. While it's true that mundane and repetitive tasks can be automated, reducing the need for human scientists in some areas, AI lacks the ability to truly understand or experience emotions and feelings as humans do.
AI can mimic these experiences convincingly, but it's ultimately an act: an imitation without genuine comprehension. For example, AI cannot grasp the complexity of pain because it doesn't have the biological organs or sensory systems needed to provide granular feedback. This inability to truly 'feel' or 'experience' underscores why humans will always be irreplaceable in areas requiring emotional insight and creative intuition.
AI won't take over the science, reason: *says something funny*
AI is awful as customer service bots. All these "replacements" are just making everybody's lives worse.
Once upon a time, I wrote a stupid comment, I hope it was not read. Childhood memories and propaganda
Today's Fact: The first video game to feature voice acting was 'Cliff Hanger' in 1983.
AI can't write apps, certainly not ones where the code is "
Artificial intelligence cannot create anything meaningful. It is useful, though, for scanning a vast amount of data and coming to a general conclusion based on collective information that would be too difficult for a normal person to parse through.
It's sad how narrow-minded this perspective is.
Simple answer, whats called AI now is not intelligent or conscious. All they are is increasingly sophisticated chat-bots.
It does have some intelligence. And if you make a chatbot sophisticated enough, it becomes smarter than humans.
some humans are just unsophisticated chat-bots... so the "AI" still has us beat
Very cool
Stop switching titles!
HATE, LET ME TELL YOU HOW MUCH I HAVE COME TO HATE... Come on people, read the book, it's not that big
AI is glorified spell check.
It's sad how full of ignorance this comment section is on a channel based on education.
Doing actual science? Not yet, not in a long time, probably never. Writing papers and report? You betcha, I'm using all kinds of AIs, that stuff is boring af
"...not in a long time, probably never." Remember to check back this comment of yours in a few years.
@spadaacca I mean, sure, let's bet on that. One dollar, adjusted for inflation, five years time, one paper and research line fully authored (with grant and all) by an AI without human intervention
Cheers!
@ You have a parochial definition of “actual science”
@@spadaacca I might, but who cares? As long as I do actual science, my PI ain't complaining
@@AR-yd2nd By your definition, the vast majority of history's greatest scientific minds didn't do "actual science". I wish you all the best with your actual science.
All I have to say is good luck. It's as bad as it's going to get and it is pretty good already.
Because
This is the problem with putting all the different types of AI in one box. _Analytical AI_ and machine learning is great for science as it can run through a ton of data and find the interesting things scientists might miss, or run thousands of simulations in mere minutes.
_Generative AI_ on the other hand simply chops up words and images and spits results back out based on the _probabilities_ of something looking right: stringing bits of words together because the mix of all the words in the dataset says this combination probably makes a coherent sentence.
It doesn't _know_ anything. It doesn't understand. It just spits out bullshit well enough to sound factual, completely regardless of whether it's actually true or not.
Just look at the AI-generated Rat Study for an example.
It's naive to silo AIs given the current development trajectory.
The number of people online confidently saying generative AI is "stringing bits of words together" shows us the Dunning-Kruger effect is alive and well.
@@spadaacca I know how LLMs work. It's a simplification, but it's still basically that. And it doesn't change the fact that they have no semantic understanding. They still don't _know_ anything. They aren't intelligent, they simply parrot speech.
Google en passant
fundamentally, there's NO chance of an AI truly replacing artistic/creative endeavors. All they do is generate outputs that are "most likely" to come next based on their inputs and training data. Nothing is ever *new* in stuff they spit out, nor is there any intentionality behind them; they don't know what they're doing.
You overestimate humans and "consciousness".
@ you underestimate the arts and creative endeavors
@ I don’t.
@@spadaacca i DoNt 🤓
Sorry, but the statements right at the start stopped me from following further, as they simply reflect promises that are not fulfilled yet (and won't be; see the blockchain hype). No, generative language models cannot create applications (in my practical, involuntary experience, they are even unsuited for autocompletion proposals in development environments), and all my encounters with AI "customer service" were terribly unhelpful.
The best customer service I've had at this point was already AI. The prompts are important if you wanna generate, though
Will there be an official translation in Ukrainian?
1:40 This is a common but very incorrect way to describe generative AI. We know exactly how generative AI works. We can't trivially predict the outcome in its every detail, but that's also true for adding big numbers. This doesn't lead people to say we "don't really know how" computers add big numbers. That doesn't follow. Also, it's challenging to explain one isolated feature of the output without referring to the entire training data set. This can feel unsatisfying, but it doesn't follow that we "don't know how" it works. We absolutely do know how it works. It's also challenging to explain the checksum of a file without referring to the entire binary content of the file. It doesn't follow that we don't know how checksums work.
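To make the checksum analogy concrete (my own example): the algorithm below is fully specified and deterministic, yet there is no short explanation of any single output digit that doesn't reference the entire input, and nobody concludes from that that we don't know how SHA-256 works.

```python
import hashlib

data = b"All the model does is transform its input, step by deterministic step."
print(hashlib.sha256(data).hexdigest())          # every hex digit depends on every input byte
print(hashlib.sha256(data + b"!").hexdigest())   # change one byte and the whole digest changes
```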
So far, AI sucks. ChatGPT provides questionable information, often completely made-up references, and laughably stupid arguments are presented very confidently alongside and mixed with standard knowledge. Copilot, which is just ChatGPT dressed as a coder, shows the exact same issues in coding. Gemini and Claude show pretty much the same pattern, but perform somewhat worse than ChatGPT. Given that LLMs have already entered the phase where a larger parameter set doesn't make the engine equally stronger, I don't think AI scientists are anywhere around the corner. But it's still frustrating that with all this AI buzz, there seems to be no AI engine specializing in peer review assistance.
AI is already much more creative than the average person btw
Could you make an Odysee or Rumble account? YouTube is becoming worse every year.
Until they do, we won’t know
The current golden age of LLMs still can't automate paperwork. It's fairly questionable how far they'll actually be capable.
@ what’s the AI after LLMs?
The writer of the script may be a renowned hyena researcher, but it's clear that she's no AI scientist. Computers may be good at logic in a sense; LLMs (the models that are commonly known as AI nowadays) most definitely are not.
Well it’s not like human scientists haven’t failed us historically so I think ai is the least of our problems
I mean can we trust humans with science?
Correct me if I'm wrong but did they not invent all the crazy powerful weapons?
Seems to me like this video was created by "angst"; the still quite irrational fear of losing one's job; quite surprising for a scientist... Maybe you are really more of a creative?
Is it time for a minuteearth retrospective on facts and myths surrounding gender, or is science still not able to weigh in yet? it's one of the most misunderstood subjects today, so probably worth looking into. the radio silence from science communicators on this subject is becoming deafening
People are so lazy
Most people won't even watch this 5-minute video
But you know, if people were never lazy, we wouldn't evolve. People made machines to do things for THEM. Why? Because we are lazy
AI already gets trained for the military. XD Save lives. Naive
The way people underestimate AI even in this comment section is quite concerning. "It's not real AI. It's just an advanced autocomplete. It can't create anything new. Etc."
Sure, AI is far from perfect, but the things AI does regularly nowadays were unthinkable just a few years ago. Imagine what it's going to be capable of in 10 or 20 years.
I would like to challenge anyone who claims AI has never created anything original to show me what they themselves created, using the same criteria they use to judge AI creativity.
Current AI is great at copying and brute force work but can’t apply imagination or creative thinking, two key skills in the field of science
Clearly coming from someone who has no idea what they're talking about.
@ fine, I am not a scientist or AI expert but you didn’t have to be rude about it
That's so rare I do not agree with you, that I need to write a comment :).
I agree AI is not ready to do science YET. But I think it's pretentious for humans to consider that they can be creative by themselves and AI can't. When you look at AI art, it's obvious that the main issue is not being creative; it's rather developing purpose instead of waiting for a prompt.
Humans are equally learning and using their past experiences to apply logic and creativity.
Thanks for your work, and happy to debate AI with you :)
need to remove a lot of creativity out of science.
um... no... i will have to have a hard disagree there.
creativity is one of the first steps in science...
creative via curiosity and hypothesis.
Under 10 min gang
👇
🤖
16th
IM THE SIXTH COMMENT LETS GOOO
🤖