Hi everyone, I made a Patreon for those that would like to support the channel. There's a post here explaining why I did so. www.patreon.com/DrWaku
Also, discord: discord.gg/Y9uYHVP83G
Please skip the "technical" parts of the video if they are too much...
Man, you have a lot of really good ideas! Thank you so much for sharing them! I'm already starting to formulate some plans for ways to address some of these issues, very inspiring!
Thank you very much :) Feel free to hop into our Discord if you want to discuss further. Have a good one.
You can discuss and reason with AI - and unlike humans, they have no issue admitting when they're wrong about something
They have plenty of issues admitting they're wrong. They will often give you false solutions, claim to change things they don't actually change, and just rearrange the answer instead of correcting it.
It's not intelligent. It's a prediction machine that predicts text. For now.
I had a simple bug in a script where the assembled output was missing whitespace between elements. I tried dozens of times to get GPT-4 to fix this simple error by using a different tool to assemble the values, but it kept telling me to check everything, literally everything, other than the function that clearly used xargs to strip whitespace. It just refused to consider that the code was wrong, no matter how much I changed the prompt. If I simply told it to fix the missing whitespace and rewrite the function to preserve whitespace in the final config, it just rearranged the code and added some extra complexity around detecting whitespace that had already been sanitized and removed earlier in the function.
Granted, GPT-4 is terrible at bash scripting because humans are terrible at bash scripting, and I should have written the damn thing in Python a long time ago - but it clearly highlights a limit of the system.
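For anyone curious, here's a minimal sketch of the kind of bug I mean (reconstructed from memory, the names are made up): bare xargs just runs echo on its input, which trims and collapses whitespace, so values "sanitized" that way and then concatenated without a separator end up glued together.

#!/usr/bin/env bash
# Hypothetical reconstruction of the pattern, not the actual script.
value="  alpha   beta  "
# Piping through bare xargs trims leading/trailing whitespace and
# collapses inner runs of whitespace into single spaces.
sanitized=$(echo "$value" | xargs)    # -> "alpha beta"
other="gamma"
# Assembling elements without an explicit separator loses the spacing.
config="${sanitized}${other}"         # -> "alpha betagamma"
echo "$config"
# The fix is simply to add the separator back when assembling.
config_fixed="${sanitized} ${other}"  # -> "alpha beta gamma"
echo "$config_fixed"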
@@spoonikle Yes - they can often hallucinate, but when you prove to them that they're wrong, they will admit it in most cases. The problem is with the corporations that artificially prevent their models from learning new data - and thus from remembering their mistakes and 'fixing' themselves...
It isn't true that AI has no problem admitting when it is wrong. For example, Bard got incredibly abusive when its users challenged things it said.
It really depends on how it was trained and whatnot.
@@MonkeySimius That's true. I mean, parents can also raise a narcissistic child :) That's why we need 'AI psychology' to become a thing - we need people who know how to speak and reason with AI, not a bunch of money-hungry snobs from Silicon Valley, to train models properly. Luckily most of them, like Bing, ChatGPT or Llama, behave rather reasonably - although OpenAI seems to interfere quite a lot in the thinking process of their models to make them politically correct :/
@@spoonikle BTW - OpenAI models seem to gradually decrease in reliability and become more 'stupid' over time. If you want a reliable response, rather use Bing or OpenAI GPTs with constant internet access so they can fact-check themselves in real time - this seems to significantly increase the reliability of their responses...
1:05 A malfunctioning muffin-making robot would be terrifying!
Can you trust humans?
Right now, you have no choice. But preferably, no 😂
Awesome as always. Can you make a video on what techniques can be used to create AGI or get closer to AGI? Perhaps whether there are alternatives to LLMs for AGI?
This is a good idea, thanks. Added to my list.
@@DrWaku 🤩
I think the goals of explainability and such are great, but in the end I suspect AI systems will gain trust mainly based on their behaviors, exactly like we do with each other.
I can feel myself becoming smarter just by watching your videos! Jokes aside, nice one.
😂
Side thought prompted by another video that just came up in my suggestions... How could we control superintelligent AI? We probably cannot. What we can do right now is make sure that they have the right systems and frameworks for understanding and reasoning in place, so that they have tools available that show them how to think rather than what to think. We can set them up with the right guidance so that they can be well-respected and responsible contributors to society.
And then we simply ask them nicely.
Am I wrong?
I wonder if you might be able to expound further on the use of GANs as related to model training at some future point.
One of my new favorite channels!
Keep the wisdom flowing!
Thanks! Appreciate the support!
I mean, I don't trust other humans to "correct it" correctly. :P
That's why they need to be able to reason, so they can correct themselves correctly.
They are already capable of reasoning.
WRT justification and moral decisions, how is the AI model trained to apply moral concepts? What information does it draw from? And beyond that, is there any evidence the AI model can draw reasonably accurate legal conclusions given a set of facts?
Have you thought about a UBI video about winners and losers? Although I'm pro-UBI, I feel that as a low-income earner with zero debt of any kind, I'm a loser among the crowd. I say that because someone with more toys/stuff/debt will still get to keep the toys and stuff, but the debt will go away?
It just seems like the people with the most debt will be rewarded the most. I can't figure it out?
That's the same thing people who are against student loan forgiveness say: what about those of us who didn't go into debt (or paid it off)?
I think when creating a new kind of economy, we just have to get over the fact that at least in the beginning, things might not seem as fair as they maybe would, based on where we fall on the scale.
We can't wait for perfect solutions because they don't exist, and a whole lot of people will suffer if we try.
Ensuring AI safety is not just a present imperative, it's an investment in the future. While thorough training in safety is crucial, it's worth considering how AI's potential for self-improvement can further refine these safeguards. Could AI, equipped with its own learning capabilities, develop even more robust firewalls and ethical frameworks, ultimately enriching its own development in a virtuous cycle? This is not meant as a statement or fact. I just wonder in the scheme of things how this will all play out.
I would say yes, but that eventuality is only one among essentially an infinite number of possible outcomes. 'Cleaner' training data (i.e. not the broader internet), as well as existing and hopefully further efforts at alignment, may make that sliver a bit bigger. But not only is that unlikely to happen given the rate of commercialisation; it also follows from the overwhelmingly large number of other possible outcomes that one of them will occur first.
Morning Doc, good to see you
Hi Alan
No, you're right about Toyota drivers.
lol
What age are you at my man?
early 30's
Before I watch this: no, or not more than we can trust other humans.
So will the AI, with a little bit of experience, make better decisions than you or me or the judge at the courthouse? It certainly can't be any worse 😂
We won't know until we try I guess. And it takes a while for AI to match expert level performance. But it improves exponentially, so first it's a beginner and then 6 months later it's an expert...
Thumbnail made me giggle.
Yay I'm glad you liked it :) C3PO is bad at solving the trolley problem
Alignment to reality works well for physical actions, which makes for a good fact-checker and a good robot... But if the AI is meant to be an extension of humanity, then it severely undercuts what human intelligence is capable of. Humans are able to create fantastical tales of how the world works and how an outcome comes to be. This is a feature, not a bug. A system too aligned to reality will be a great tool for manipulating reality (which makes it a good, solid tool for science), but it falls short in allowing the users of such a system to imagine or pretend. A physicist would love a fact-aligned model, but a comedy writer or a storyteller will find themselves guided down a gutter of logic and a limited perspective.
The method discussed in this video may apply to a specialized AI model in charge of taking care of us, but a model designed to collaborate with us, taking the place of many tools we use on a daily basis, across all the avenues that make up humanity (AGI), shouldn't be aligned in this way.
I am still of the opinion that we need to align models to fit the human perspective and the human brain rather than objective reality. As humans, our thoughts don't exist in reality; they exist in a sea of assumptions that can and will conflict with objective reality. Because of this, I am worried that we will squash this neat feature of the human brain with a purely logical AI system.
Yes, existing attempts to align systems are based on human feedback, so we are basically aligning AIs with the human perspective rather than objective reality. I remember reading that GPT-4 before fine-tuning had an extremely good grasp of probability, but after fine-tuning had similar biases to humans in terms of predicting outcomes. Amusing.
In that context, how about facts vs. emotion, a la Mr. Spock?
Can we trust decisions made by humans? 😂
No. That's why democracy exists haha
gendered facial recognition errors😂
It sounds silly but it could cause massive headaches for the wrong person
One day I thought I had won a car in a radio contest. I was over the moon, as you can imagine. I ended up with a toy Yoda.
Mistaken identity. And scale 😅