Thank you especially for your commentary about moving on to new approaches and tools even if you invested a big chunk of your life in the old ones. Anyone with a long career in technology should be encouraged by that. Hopefully some of the lessons learned from old ways will translate though!
This concept really resonates with me. I just came across this new study which aligns with the non-language-first concept. Look up: "Toddlers learn to reason logically before they learn to speak, according to a study by UPF"
Regarding grounding and morality: Could a grounded experience be necessary for an intrinsic understanding of morality? Or is the "suffering" of negative reward in RLHF sufficient? As I understand it, the foundation of morality is desiring the welfare of others, which requires an understanding of your own welfare. Edit: I understand that we're very far from teaching AI & robots morality directly ... but small children often seem pretty far from it as well, although I think we have some supportive instincts and built-in brain structures.
Simulators are like sending children to school. You give the model a head start using simplified problems before they have to start learning from the 'real world'.
I love Prof. Malik so much. He is a friggin legend.
Incredible interview.
Has the podcast stopped?
Fine motor control is acquired later in children, true, but you have to consider that the nerves that enable it also mature later.
Interesting to hear about the debate when AlexNet came out. Wonder which side Yann was on in that debate...
🙏
Happy parenting, hypocrite Dmitry.