Separating fact from fiction in a world of AI fairytales - Jodie Burchell - NDC London 2024
- Published 2 Nov 2024
- This talk was recorded at NDC London in London, England.
If you've been remotely tuned in to the developments in generative AI over the past year, you've likely been inundated with news, ranging from claims that these models will replace numerous white-collar jobs to declarations of sentience and an impending AI apocalypse. At this stage, the hype surrounding AI has far surpassed the actual useful information available.
In this presentation, we'll cut through the noise and delve deep into the current applications, risks, and limitations of these generative AI models. We will start with the early research endeavours aimed at creating an "artificial brain" and trace the path that has led us to today's sophisticated models. Along the way, we will address the misconception that these models are intelligent systems, shed light on what would actually be required to develop true artificial general intelligence, and see how far we seem to be from this goal. Moreover, we will highlight how an excessive focus on topics like the sentience of these systems has overshadowed the genuine issues associated with these models. By shifting our attention towards their real limitations, we will see how we can better maximise the potential of these exciting models.
Great talk. All claims are substantiated by evidence, which is rare amid all the internet chatter about ML bots called "AI". This talk is science, not opinion. I learned a lot.
Excellent! Very well reasoned and informative.
27:31 ChatGPT 3.5 hasn't improved. Here's the latest attempt:
[...]
As you can see, the modified equation *11 * 4 + 13 * 8 = 148* does not result in the desired right-hand side of 106. Therefore, *it is not possible* to modify exactly one integer on the left-hand side of the equation to obtain a right-hand side of 106.
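For what it's worth, the impossibility claim in this reply can be checked mechanically. A minimal brute-force sketch, assuming the puzzle quoted to the model was: change exactly one integer on the left-hand side of 11 * 4 + 13 * 8 so that it equals 106 (the exact prompt is elided above, so this framing is an assumption):

```python
# Brute-force check: can changing exactly one integer in
# 11 * 4 + 13 * 8 make the total equal 106?
# Assumes replacements must be integers (searched over a wide range).
original = [11, 4, 13, 8]

def total(a, b, c, d):
    return a * b + c * d

solutions = []
for pos in range(4):                  # which of the four integers to modify
    for value in range(-1000, 1001):  # candidate replacement values
        trial = original.copy()
        trial[pos] = value
        if value != original[pos] and total(*trial) == 106:
            solutions.append((pos, value))

print(solutions)  # -> [] : no single-integer change yields 106
```

Each position would need a non-integer replacement (e.g. changing 11 requires 4x + 104 = 106, i.e. x = 0.5), so under the integer assumption the "not possible" conclusion is correct, whatever one makes of the model's intermediate arithmetic.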
The coding-assistance material has since been superseded, unfortunately; the rest of the talk is more or less correct. Snyk and others have concluded that while initial use looked promising, the results are worse than first thought. I would explore whether this has to do with the "ELIZA effect", in addition to the interesting idea that it is next to impossible to get an LLM (or indeed, an ANN in general) to have the facility to *remove* something. (The latter has been conjectured by others; the former is my idea - steal it if you want.)
But, in our industry, we do love to pour heaps and heaps of hype hype hype on absolutely everything we come up with.
Psychology is a pseudo-science. Intelligence is a self-applied, self-defined term that is not verifiable.
Well, good job convincing noobs that GPTs are neither the End of the World nor the Second Coming.
I wasn't at all satisfied with most examples being so cherry-picked.
I would never trust a vendor demo, never mind screenshots, etc. Even there, they are riddled with problems - especially the mereotopological matters the speaker mentions. (Mereology is the study of parts and wholes.)
Well done at sharing your emotions.