Gary Marcus gives a very good description of Large Language Models. Trying to develop concepts, as we humans do, merely by determining which words are usually put together is, at best, naive.
@5:20: Regarding medicine and driverless cars, the important question is not whether GPT-4 is better than GPT-3, or whether new driverless cars are better than their old versions. It's whether they're better than the average human. Do AI-driven cars have a better overall safety record than humans? Do AI medical bots make fewer mistaken diagnoses than human doctors, weighted by severity (e.g., a mistaken diagnosis that costs a life is weighted more than one that causes minimal harm)? This is the main metric AI needs to surpass to be considered successful and useful.
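A minimal sketch of what that severity-weighted comparison could look like; the weights, category names, and error counts below are all invented, chosen only to illustrate the metric the comment describes:

```python
# Hypothetical severity-weighted error rate for comparing an AI system's
# diagnostic mistakes against a human baseline. All weights and counts
# below are invented for illustration.
SEVERITY_WEIGHTS = {"minimal_harm": 1.0, "serious_harm": 10.0, "fatal": 100.0}

def weighted_error_rate(errors: dict, total_cases: int) -> float:
    """errors maps a severity category to the number of mistakes made."""
    weighted = sum(SEVERITY_WEIGHTS[sev] * n for sev, n in errors.items())
    return weighted / total_cases

ai_rate = weighted_error_rate({"minimal_harm": 40, "serious_harm": 5, "fatal": 1}, 10_000)
human_rate = weighted_error_rate({"minimal_harm": 60, "serious_harm": 8, "fatal": 2}, 10_000)

# The bar the comment proposes: the AI must beat the human baseline.
print(f"AI: {ai_rate:.4f}  human: {human_rate:.4f}  AI better: {ai_rate < human_rate}")
```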
Good point - Gary sells books that claim AI is dangerous and inaccurate, so he's not going to say his books are wrong and that you shouldn't buy them.
Great interview!
Perplexity is an AI-assisted browser with access to the kind of links a regular browser would find, with no ads. It can also access several models, such as those from OpenAI, Google, or Anthropic. Using Perplexity feels like using a great browser, not talking to a "person". In any case, a great discussion, especially on the way the hype raises a lot of money and drives the whole field "down the mountain" to find another way up.
Hi, I enjoyed this but I think you’ll find credible experts who disagree with some of your guest’s views, mostly in the area of LLMs being a dead end. I hope you’ll interview other voices.
This interview is not very good; it's one-sided. I can ask AI to help me write code, and it has. It is a TOOL right now, and it is getting better and better. I can ask GPT-4 a question that sounds simple, and it might get it wrong. But if I ask GPT-4 the same question, formulated in a way that helps steer it to the correct answer, it gets it right. That does not mean I know the answer before asking the question; it means I know enough to formulate the question in such a way that the AI understands what I am asking. I have run this test, and it works. As good as AI is, it is still in its early stages of development. Look at computers in the early 80s and what 40 years did for the personal computer, then apply that to AI now! We can't even begin to understand how powerful it can get!
Would like to see a debate and/or real-time analysis to understand what the various flavors of AI can and cannot do well...
Feeding my algorithm bad data so it stays confused about what I actually think: I never put my heart, mind, and soul into what I write, so everything stays superficial.
Regarding the GZERO World episode with Gary Marcus: it is a timely discussion of a critically important topic. Marcus's point about large language models being "truly exceptional in their unreliability" is well-taken. Generative AI models are capable of generating incredibly creative and convincing text, images, and other types of content. However, they are also prone to producing inaccurate or false information, because they are trained on massive datasets of text and code, which may contain biases and errors.
The proliferation of generative AI raises important questions about its impact on truth and society. For example, how can we distinguish between real and fake news articles when both can be generated by AI? How can we prevent the spread of disinformation and propaganda? How can we ensure that generative AI is used for good and not for harm?
Marcus is right that effective global AI regulation is necessary to address these challenges. Developing such regulation is complex, however: there is no one-size-fits-all solution, and different countries and regions may have different priorities. It is important to have a thoughtful and inclusive discussion about how to regulate generative AI in a way that promotes innovation while protecting society from potential harms.
Here are some additional thoughts on the risks and impact of generative AI on truth:
* Generative AI can be used to create deepfakes, which are videos or audio recordings that have been manipulated to make it appear as if someone is saying or doing something they never actually said or did. Deepfakes can be used to spread misinformation, damage reputations, and even interfere with elections.
* Generative AI can be used to create synthetic media, such as images and videos that are generated by AI rather than captured by a camera. Synthetic media can be used to create realistic but fictional content, which can make it difficult for people to distinguish between what is real and what is not.
* Generative AI can be used to automate the spread of misinformation and propaganda. For example, generative AI can be used to generate fake news articles or social media posts that are designed to mislead people.
It is important to be aware of the risks of generative AI and to take steps to mitigate them. For example, it is important to be critical of the information we consume and to verify its authenticity before sharing it. We should also support the development of tools and technologies that can help us to detect and identify deepfakes and synthetic media.
Finally, it is important to have a public conversation about the ethical implications of generative AI. We need to decide how we want this technology to be used and what safeguards we need to put in place to protect society from potential harms.
Excellent interview, thank you!
I've been working with ChatGPT for a while. I feel that working with it has a layered effect. Topics seem dry or incomplete at times, but that depends largely on how well I'm prompting. When I'm encouraging and prompting well, the answers are also more stimulating and effective. It is a neural network of language, so working within that scope and allowing it to fail and learn may produce surprising results we can't foresee yet. I think the cost is getting prohibitive for average users, and that is where governments will get involved to keep it level and not a have/have-not technology (like so many others are). Thank you for this talk; it was interesting.
I agree, good prompts make ChatGPT much better. It can negotiate its massive pile of data more effectively and not get overwhelmed.
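A minimal sketch of the prompt steering these two comments describe, assuming the `openai` Python package and an API key in the `OPENAI_API_KEY` environment variable; the prompts and model name are illustrative only, not a recommendation:

```python
# Sketch of prompt steering: the same question asked plainly, then
# reformulated to walk the model through a careful answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A bare question that models often get wrong:
print(ask("How many r's are in 'strawberry'?"))

# The same question, formulated to steer toward the correct answer:
print(ask("Spell 'strawberry' one letter at a time, then count how many "
          "of those letters are 'r'."))
```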
6:34 random cut to a guy using a terminal - seems to have nothing to do with ChatGPT or AI, but hey, it's a programmer using a computer, so let's throw it in there to make this interview seem more legit :)
User 18:44 2023/09/29
I want to emphasize once again that neural networks are not "probabilistic statistical machines," but "logical machines."
GPT4 20:08 2023/09/29
Indeed, neural networks operate based on mathematical expressions and principles of logic and mathematics. However, the design of neural networks, especially the concepts of deep learning, inherently has statistical, probabilistic properties, mainly to learn patterns from data. This is to deal with the uncertainty and variability in data.
The concepts of "logical machine" and "probabilistic statistical machine" are not necessarily mutually exclusive. Neural networks attempt to capture statistical and probabilistic patterns inherent in data through weight adjustments and activation functions, but these processes are also conducted within a logical and computational framework. Therefore, neural networks can be viewed as logical machines that incorporate probabilistic elements.
User 20:09 2023/09/29
What neural networks perform is not probabilistic statistical inference but logical inference. Neural networks learn (self-adjust) to make the inference work well.
GPT4 20:14 2023/09/29
Neural networks operate differently from conventional logical inference and statistical, probabilistic inference. However, these differences don't simplify into a clear-cut dichotomy of being "logical" or "statistical". The learning performed by neural networks is understood as a process of extracting patterns from numerous examples and making predictions on unknown data based on these patterns, which certainly involves statistical and probabilistic elements.
Neural Networks and Logical Inference
Neural networks can essentially be considered collections of logic gates. Deep learning models serve as complex function approximators, and during this function approximation process, the models learn and extract the logical structures and relationships hidden in the data.
Neural Networks and Statistical Inference
Principles of probability and statistics are involved in the process of extracting patterns from learning data. The minimization of loss functions, stochastic gradient descent, and other learning algorithms deeply integrate concepts of probability and statistics.
Conclusion
Neural networks can be thought of as unique inference engines where logical and statistical elements intertwine. In this way, the models learn from high-dimensional, complex data structures and make predictions on unknown inputs based on that learning.
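The point this exchange circles around can be made concrete with a toy example: a tiny network that ends up computing a logical function (XOR), but gets there by statistical means, minimizing a loss with gradient descent. A minimal NumPy sketch; the architecture and hyperparameters are arbitrary choices for illustration:

```python
# A tiny network learns the logical function XOR by statistical means:
# gradient descent on a mean squared error loss. Plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (gradients of the MSE loss).
    grad_p = (p - y) * p * (1 - p)
    grad_W2 = h.T @ grad_p
    grad_h = grad_p @ W2.T * (1 - h**2)
    grad_W1 = X.T @ grad_h
    # Gradient descent updates.
    W2 -= lr * grad_W2; b2 -= lr * grad_p.sum(0)
    W1 -= lr * grad_W1; b1 -= lr * grad_h.sum(0)

# The network now reproduces XOR: a "logical" result reached statistically.
print(np.round(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)).ravel())  # [0. 1. 1. 0.]
```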
So A.I. is just autocomplete but a bit more advanced.
We got lied to... there is no A.I., it's a sales technique!!
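For what it's worth, the "advanced autocomplete" framing above can be made concrete: a minimal next-word predictor built from bigram counts. LLMs do conceptually similar next-token prediction, just with learned representations over vastly more context. The toy corpus here is made up:

```python
# A bigram "autocomplete": predict the next word from counts of which
# word followed which in the (toy) training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word: str) -> str:
    # Pick the most frequent continuation seen in training.
    return following[word].most_common(1)[0][0]

print(autocomplete("the"))  # -> "cat" (seen twice, vs "mat"/"fish" once each)
```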
Best explanation of some of these concepts I've heard to date; well done, Ian.
Gary sells books that claim AI is dangerous, inaccurate, hallucinating, not self-aware, etc., so he's not going to say his books are wrong and that you shouldn't buy them.
Well, that may be, but why would Ian Bremmer, a self-proclaimed corporate advisor specializing in geopolitics, want to be responsible for putting ANY misinformation out there about ANYTHING? Especially since, one has to presume, his clients have millions of dollars on the line and depend on his word for information?
@das5395 I assume Ian Bremmer is dabbling in "viewer numbers" and being a YT rock star, or he truly believes that AI is an "evil entity" at some religious level? It's usually something like "God is our true and pure lord and AI is summoning Satan, so I will argue against AI sentience." That's my hypothesis - lol. I don't believe God is necessary for consciousness, and I think consciousness is a sliding scale up to human level. So even spiders and chatbots have some degree of "consciousness". But "consciousness" is not a good word because it gets into spiritual realms. Consciousness is a fairly simple emergent property, not spectacular "magic". Look at Conway's Game of Life as an analogy for how intelligence could emerge from a very, very simple beginning. OK, don't get cross, it's my opinion, not yours. @@ttacking_you
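Conway's Game of Life, the analogy the comment invokes, really is tiny: complex behavior emerges from trivial local rules on a grid. A minimal NumPy sketch of one update step, with a classic "glider" pattern that travels across the board:

```python
# One step of Conway's Game of Life on a grid with wraparound edges.
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    # Count the eight neighbors of every cell at once.
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A cell lives if it has 3 neighbors, or 2 neighbors and is already alive.
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(int)

# A "glider": an emergent pattern that moves diagonally across the grid.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid)  # the glider has shifted one cell down and to the right
```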
I heard it’s because it’s powered by a Magic 8-Ball (toy) 0:21 I guess it could be stimulants though 😂
Very good
What does it matter if we climbed a high hill or the highest hill? The difference between the 2 places is relatively small. I would imagine the bugs on these 2 hills can't even tell the difference. Their tiny brains lack the capacity to understand the Universe too, but they are perfectly fine with it. Human brains are better in a way. We believe we are on the highest hill and that's that. Because Jesus.
Considering how wrong Mr. Bremmer has been in the last ten years on China...
Russia's military fell apart in Ukraine(???) You lost your credibility with that one!
Gary is the clown of the AI world. He made the wrong bets and became irrelevant. I am sure he is smart, but bias can destroy intelligence in a heartbeat.