It can def anticipate. Just set some parameters around characteristics of anything humans would usually anticipate in any given environment...go from there. Pretty easy honestly
A bit of a yawn. I would like her to describe why LLMs don't have this rather than simply stating that it is the case. For example, what is moral reasoning? Is it possible that an LLM can simulate moral reasoning that is as good as a person's moral reasoning? And does that even matter? The psychologist Jonathan Haidt did a lot of work that showed that moral reasoning is post-hoc rationalisation and less important than our moral intuition.
@MachineLearningStreetTalk I've come around to agreeing with your stance on LLMs' lack of reasoning and understanding. Mostly thanks to professor Subbarao Kambhampati. Would be great to have him as a guest to add some scientific and technical backing to these arguments.
Such a weird strawman, in which the lacking capabilities of AI have already been pre-defined to accentuate those seemingly 'uniquely' human capabilities.
"Yet" being the keyword. Once quantum computing surpasses the neuron and synapse connections of the human brain, who knows what the possibilities would be then.
Yes, it is limited in interpreting the nuances in human speech right now. All LLMs currently use text inputs, even those that have speech recognition. But that will change & it will become more intelligent. It will have the ability to look for subtle visual & auditory cues to anticipate the user's needs.
Sentences that start with the words "AI cannot..." tend to have a very short shelf-life these days.
I’m sick of hearing from these kinds of people
A Cat has NO language but it understands reality. WHAT ARE YOU ON ABOUT?
AI can't interpret reality ... next sentence ... nuances of human emotion.
I as a human cannot fly, that's because I don't have a beak.
Not a fan of Maria's blanket statements. Bold claims require bold evidence.
Be careful, I think it was the philosopher Justin Bieber who famously said Never Say Never
That's dumb as fuck... Everything is math and can be calculated, from emotion to sarcasm. Give it a decade and AI will be much more efficient...
Dog or cat
Is she referring to CURRENT AI or a potential future ai?
Super Agree😂
...yeah, and humans are full of biases too. Maybe that's why AI "can't interpret reality": because reality is full of nuances which humans judge poorly.
GPT 4 Omni
Even if you were right, it won’t be long now, people are just afraid of it