Insightful talk. Evans is good at presenting a full range of possibilities and acknowledging that how it'll turn out isn't known yet.
Brilliant, Benedict, especially the last six minutes. Three thumbs up from me.
Always loved Excel! Now I know better.
Shame that the audio has a lot of echo; it would be great to have the direct feed from his mic to make it easier to listen to.
7:59 For those who missed it, "Space Karen" refers to Elon Musk XD
What was the SLUSH title song this year at the beginning of the video?
😂😂😂 hilarious burn at 24:10
Great talk, but from 23:15 he makes some incorrect claims and extrapolations. LLMs are not all equal: GPT etc. are Transformers, which Midjourney distinctly is NOT. This is very important, and he completely ignores that distinction.
Using a Midjourney image to try to argue that LLMs don't have real 'understanding' is the equivalent of using an image of a bicycle to prove that Formula One cars are prone to falling over.
Midjourney is NOT a transformer model; it's a GAN model, using two neural networks that compete (a discriminator and a generator) to create an image that is indistinguishable from a real one. His use of this example to explain why LLMs don't really 'understand' things is an irrational comparison.
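[For readers unfamiliar with the GAN setup this comment describes, here is a minimal 1-D sketch of adversarial training: a generator and a discriminator updated against each other. This is a toy illustration only — the data distribution, learning rate, and parameterization are all invented for the example; real GANs train deep networks on images.]

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy 1-D GAN: "real" data ~ N(4, 1); the generator g(z) = a*z + b
# maps standard-normal noise to fake samples.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.5, 0.0   # discriminator parameters (logistic regression)
lr = 0.05

for step in range(2000):
    z = random.gauss(0, 1)
    x_real = random.gauss(4, 1)
    x_fake = a * z + b

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and c
    gw = -(1 - d_real) * x_real + d_fake * x_fake
    gc = -(1 - d_real) + d_fake
    w -= lr * gw
    c -= lr * gc

    # Generator step: push D(fake) -> 1 (non-saturating loss -log D(fake)).
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_x = -(1 - d_fake) * w  # d(-log D(x)) / dx at the fake sample
    a -= lr * grad_x * z
    b -= lr * grad_x

# b should drift toward the real data's mean, i.e. the generator learns
# to produce samples the discriminator can't tell from real ones.
```

The competition is the whole mechanism: neither network ever sees a loss that says "make a realistic sample" directly; realism emerges only from trying to fool the other network.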
GPT etc. are Transformer-based LLMs, and they are in fact VERY different from GANs.
They also do in fact have capabilities for 'understanding' that are confounding us at the moment. They can, for example, as Max Tegmark and Wes Gurnee have shown, form 'mental models' of the world, the parts in it, and how they are related.
So it is absolutely possible that an LLM can understand what a steering wheel is and its relation to the car.
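[The Gurnee & Tegmark result mentioned above came from training linear "probes" on a model's internal activations to see whether a concept is linearly readable from them. Here is a toy sketch of that probing idea — the synthetic "activations" stand in for a real transformer's hidden states, and every dimension and constant is invented for illustration.]

```python
import math
import random

random.seed(0)

DIM = 8
# hidden "concept" axis: the direction along which the synthetic
# activations encode a binary feature
direction = [random.gauss(0, 1) for _ in range(DIM)]

def fake_activation(label):
    # stand-in for a hidden state: Gaussian noise, shifted along the
    # concept direction when label == 1
    h = [random.gauss(0, 1) for _ in range(DIM)]
    if label:
        h = [hi + 2 * di for hi, di in zip(h, direction)]
    return h

labels = [random.randint(0, 1) for _ in range(400)]
data = [(fake_activation(y), y) for y in labels]

# Linear probe = logistic regression on the activations, trained with SGD.
w = [0.0] * DIM
bias = 0.0
lr = 0.1
for _ in range(30):
    for h, y in data:
        z = sum(wi * hi for wi, hi in zip(w, h)) + bias
        z = max(-30.0, min(30.0, z))  # clamp to avoid math.exp overflow
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y
        w = [wi - lr * g * hi for wi, hi in zip(w, h)]
        bias -= lr * g

acc = sum(
    (sum(wi * hi for wi, hi in zip(w, h)) + bias > 0) == bool(y)
    for h, y in data
) / len(data)
```

If a simple linear readout like this recovers the feature with high accuracy, the representation is there in the activations — which is the sense in which such work argues the model has an internal 'map' of the concept.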
Midjourney is a diffusion model (or several), not a GAN.
MJ is a diffusion model, and it contains self-attention blocks for image generation as well as for language representation. The "does it understand?" question applies to both MJ and Transformers.
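[For context on what "diffusion model" means here: the core is a fixed forward process that gradually adds Gaussian noise to an image, plus a learned network — often containing the self-attention blocks mentioned above — that reverses it. A minimal numeric sketch of the forward (noising) process, with an invented linear beta schedule and a single scalar standing in for an image:]

```python
import math
import random

random.seed(1)

T = 100
# invented linear noise schedule: beta_t rises from 1e-4 to 0.02
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_bar_t = product of (1 - beta_s) for s <= t; it shrinks toward 0,
# meaning the signal is progressively drowned in noise
alpha_bar = []
prod = 1.0
for beta in betas:
    prod *= 1.0 - beta
    alpha_bar.append(prod)

# closed-form forward process: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps
x0 = 2.0                 # a single "pixel" value standing in for an image
eps = random.gauss(0, 1)
x_T = math.sqrt(alpha_bar[-1]) * x0 + math.sqrt(1 - alpha_bar[-1]) * eps

# the trained network's job is to predict eps; if it predicted it exactly,
# the clean sample would be recovered exactly:
x0_hat = (x_T - math.sqrt(1 - alpha_bar[-1]) * eps) / math.sqrt(alpha_bar[-1])
```

This is why "GAN vs. diffusion" matters less for the understanding debate than it sounds: in both families the interesting question is what the learned network internally represents, not the sampling mechanism around it.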
He said that some argue AI understands and some argue that it doesn't, but the conclusion is that no one knows for sure.
You are among the first group.
LLMs don’t have understanding, and it’s silly to say that they do. Maybe try creating a small one from scratch and you’ll quickly see the tricks.
Good summary, but I learned nothing :/
Not an expert in dog intelligence I’m guessing