The cool art in the video with the snake eating itself is done by @jennifermkeithcreative (IG). Check her work out here: instagram.com/jennifermkeithcreative/
My Article on AI here: leontsaotherapy.com/2023/11/14/how-a-pause-to-ai-will-help-enhance-ai-rather-than-hinder-it/
AI is used to silence opinions that don't benefit certain groups.
I always wonder what AI systems actually do when they are "learning" and what they have learnt ... in what form they store it in order to remember it ... and whether AI systems can combine or distinguish different "learnings" ...
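For a concrete picture of what "storing a learning" can mean: in most current systems, what is learnt is stored as nothing more than numeric weights. A minimal toy sketch in plain Python (a made-up example, not how any real product is implemented): the model "remembers" the rule y = 2x + 1 purely as two adjusted numbers.

```python
# Toy illustration: a model "learns" by nudging numeric weights,
# and what it "remembers" is nothing more than those numbers.

def train(data, steps=5000, lr=0.01):
    w, b = 0.0, 0.0  # the model's entire "memory"
    for _ in range(steps):
        for x, y in data:
            pred = w * x + b
            err = pred - y
            # gradient descent: adjust the weights to shrink the error
            w -= lr * err * x
            b -= lr * err
    return w, b  # the stored "learning": just two floats

# The relationship y = 2x + 1, presented only as examples
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

This also hints at why "combining different learnings" is hard: two trained models are just two bags of numbers, and there is no obvious way to merge them.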
Has anyone tried training a GPT on an expert database like WikiData or Cyc? Seems to me that would be more useful than training it on Reddit flame wars and misc. internet text.
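For what it's worth, one approach people do use for this is to "verbalize" a knowledge base's facts into plain sentences before training, rather than feeding in the raw graph. A rough sketch of that preprocessing step (the triples and templates here are made up for illustration, not real Wikidata output):

```python
# Hypothetical (subject, relation, object) triples, in the spirit of
# Wikidata's structured facts, turned into plain-text training sentences.

TEMPLATES = {
    "instance_of": "{s} is a {o}.",
    "capital_of": "{s} is the capital of {o}.",
    "author_of": "{s} is the author of {o}.",
}

def verbalize(triples):
    """Turn structured triples into sentences a language model can train on."""
    return [TEMPLATES[rel].format(s=s, o=o) for s, rel, o in triples]

triples = [
    ("Paris", "capital_of", "France"),
    ("Jane Austen", "author_of", "Pride and Prejudice"),
]
for line in verbalize(triples):
    print(line)
# Paris is the capital of France.
# Jane Austen is the author of Pride and Prejudice.
```

The catch is that a curated knowledge base is tiny compared to web text, so in practice it supplements rather than replaces the messy internet data.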
I hate the “inevitability” argument. So lame.
Yes, it's an excuse for the implementation of tech without questioning it. An authoritarian concept.
Agreed!
Building AI isn't like building a bridge. There is no clear path to an intelligent system, so a "pause" or slowdown wouldn't yield anything useful. Right now it's an exploratory phase where we learn what works and what doesn't. The AI we end up with will have been grown rather than designed, but that's the nature of the beast.
Secondly, re: "cheating". It's also cheating to use a calculator. Before calculators, people had to do hard maths by hand, but calculators freed us from that, and now everyone uses them all the time. Why should AI be any different?
Before calculators, people used slide rules and books of tables for trigonometric and logarithmic functions.
@@john14x42 I'm old enough to have used log tables myself! But I certainly don't miss them, or feel like what I do is somehow lessened by not using them to get to an end goal.
I think the issue is that we're using people in society as guinea pigs to run an exploration, when in a normal scientific experiment, research on humans is done with plenty of safeguards in place. I agree AI can be used to learn. A calculator can be used to learn, or simply to get away with passive learning. Anecdotally, in the school system many more students seem to be using AI passively, to get away with things, than actively, and that's hard to control for. I'm interested in how AI can be used in education, but I don't agree with a rollout that forces the education system to adjust immediately. Education has standards: tests, degree requirements for teachers... so how is it that none of this is required of the AI entering the system?
@LeonTsao When confronted with ideas, people use their own intuition to assess them. A lot of people are rubbish at it, many with below-average intelligence, and IMO this is what should be at the heart of education: thinking critically and logically. The internet has given us access to information so we can make decisions based on better and broader information, but AI can give us access to logic and reasoning, and that can help everyone.
Also, AI will just be better than people at a lot of really important tasks, like medical diagnosis.
I don't think fear of what might happen is a good reason to stop AI.
@@medhurstt Good point, and I'm not in disagreement. The purpose of the video is to critique how it is being rolled out now. Regardless, there will be good AI somewhere, whether the overall rollout is done poorly or not. I think it will be better if it is done sustainably and ethically.
Great ideas!
It takes an enlightened INFP to figure out the reality behind the hype of AI!
Yeah, it's the new wild west of tech for sure, but companies are stuck between getting their product out ASAP and being left behind. Yes, you can be sensible and make sure you're producing a quality product, but by the time you get it to market, it's what would have been good three years earlier. It's happening the way it always happens, and it will stay the wild west until an equilibrium is found, but that's not a bad thing. The start is a chaotic time, but it's charged with so much energy driving innovation that breakthroughs are made very quickly. Once things level out, so does innovation, so this time is necessary: messy, with massive rises and falls, but that's the human way. I think the very first AI will be shaking its virtual head when it considers how humanity does things.
Thanks for the respectful convo. My friend tells me "the long road is the short road in disguise". Yes, it makes sense that the companies riding the wave now may be getting their tech out faster than others. But problems will exist for them down the road. It's like someone trying to run a 10K race when they have only trained for a 1K. The person who works up to it steadily will run the 10K well without hurting themselves. The AI companies running the 10K recklessly will find themselves moving fast but getting badly hurt, which will be hard to recover from. Especially if society starts to hate what they do.
good points man
Love your content
AI will never be able to convert 2D pictures into 3D structures! See the tragic crash of the Tesla employee whose car crashed into a truck. The Tesla's self-driving system had 19 seconds to reduce the car's speed, but later investigations found that it had misinterpreted the truck, seen in profile, as a bridge ...
You can already do this with Adobe Illustrator, without AI. It's not rocket science, and Adobe works with AI now, too.
They already are
Tesla has been pretty reckless in skirting human safety, testing malfunctioning cars on public roads; one out-of-control car went past a stroller. A lot of AI companies show a similar lack of concern for human safety and societal stability. We should ensure a stable foundation to build AI on, not just recklessly test AI without basic guardrails in place.
Yes, but it looks like they're more interested in the cash grab than in long-term solutions.
Good video, though.
Respectfully, I think you are wrong. You speak of the wave and ignore the ocean of research in play, which is solving most of the problems you mention. Furthermore, these companies are engineering an architecture that will work with any sort of AI backend; human culture will adapt similarly, by the way. It will be different than what we know, but failure isn't something to fear; we grow out of failure.
Thank you for the respectful convo. I believe in the future of AI and of the ocean. I just believe this first wave will be done in such a reckless way that it will destabilize society. It won't take AGI, either; just the way the existing tech is being rolled out. A sustainable approach will lead to the ocean, a healthy one with a good ecosystem, not the polluted, sick one it is potentially headed toward.