Great talk. All claims are substantiated by evidence, which is rare amid all the internet talk about ML bots called "AI". This talk is science, not opinion. I learned a lot.
Excellent! Very well reasoned and informative.
27:31 ChatGPT 3.5 hasn't improved. Here's the latest attempt:
[...]
As you can see, the modified equation *11 * 4 + 13 * 8 = 148* does not result in the desired right-hand side of 106. Therefore, *it is not possible* to modify exactly one integer on the left-hand side of the equation to obtain a right-hand side of 106.
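Claims like the one quoted above are trivially checkable by exhaustive search, which is exactly what the model fails to do. A minimal sketch (mine, not the speaker's or the model's), assuming the puzzle has the form a*b + c*d = target and that "modify exactly one integer" means replacing one of the four operands; since the original equation is elided above, the operands (11, 4, 13, 8) are taken from the quoted reply purely as an illustration:

```python
def single_change_solutions(operands, target, search=range(-100, 101)):
    """Yield (position, new_value) pairs where replacing exactly one
    operand makes a*b + c*d equal the target."""
    for pos in range(4):
        for v in search:
            if v == operands[pos]:
                continue  # must actually modify the integer
            trial = list(operands)
            trial[pos] = v
            if trial[0] * trial[1] + trial[2] * trial[3] == target:
                yield pos, v

# Operands from ChatGPT's reply quoted above: 11*4 + 13*8, target 106.
# An empty list means no single-integer change reaches 106 in the range.
print(list(single_change_solutions((11, 4, 13, 8), 106)))
```

A few lines of brute force settle the question definitively, which is the point: the model asserts impossibility instead of checking.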
The coding-assistance material has since been superseded, unfortunately; the rest of the talk is more or less correct. Snyk and others have concluded that while initial use looked promising, the results are worse than they first appeared. I would explore whether this has to do with the "ELIZA effect", in addition to the interesting idea that it is next to impossible to get an LLM (or indeed, an ANN in general) to have the facility to, say, *remove* something. (The latter has been conjectured by others; the first is my idea - steal it if you want.)
But, in our industry, we do love to pour heaps and heaps of hype hype hype on absolutely everything we come up with.
Psychology is a pseudoscience. "Intelligence" is a self-applied, self-defined term that is not verifiable.
Well, good job convincing noobs that GPTs are neither the End of the World nor The Second Coming.
I wasn't at all satisfied with how cherry-picked most of the examples were.
I would never trust a vendor demo, never mind screenshots, etc. Even then, the demos are riddled with problems - especially the mereotopological matters the speaker mentions. (Mereology is the study of parts and wholes.)
Well done at sharing your emotions.