"teach him taste of fish and make him hungry"🤣
Love that extra layer to that analogy.
😂
I enjoyed every bit of this talk.
Thank you very much for this golden video, sir. It was definitely worth spending 35 minutes on it.
Thanks for the insightful talk!
I love the clarity at 18:50 of seeing the LLM going through training, with so many skills implicitly demanded by the next token prediction task.
such an incredibly information-dense talk, thank you!
One of the best talks on UA-cam right now!
I like your concept of scaling:
1) identify the modeling assumption or inductive bias that bottlenecks further scaling
2) replace it with a more scalable one.
Example: letting the model learn its own representations is a more scalable approach.
very insightful, thanks.
Thank you for posting such a great lecture for free!
Somehow Hyung Won Chung's talks are always very abstract and purely at a meta level. He doesn't talk about specific LLM techniques or anything like that, but goes all in on the fundamental intuition behind scaling 👍
probably because he would get into trouble otherwise haha
This talk is gold
Hyung Won, you're really amazing. I've subscribed and will always be cheering you on!! Please keep posting lots of videos.
You're amazing, Hyung Won!! 😊
"No amount of bananas can incentivize monkeys to do mathematical reasoning" lol
Thank you for the great lecture. What you said about constantly unlearning what we've learned really opened my eyes to a new way of seeing the world. I had never given much thought to the idea that intuitions and ideas built on flawed axioms need to be dismantled.
Great point: The Bitter Lesson article is the single most important piece of writing in the field of AI. 😳
Which article? The one by Rich Sutton? Thank you.
Thank you for sharing this~
thank you for sharing!
referenced talk:
John Schulman - Reinforcement Learning from Human Feedback: Progress and Challenges
ua-cam.com/video/hhiLw5Q_UFg/v-deo.html
Awesome! I imagine the heavy workload must be tough on you. Please take care of your health too.
yep - we should follow this principle in architecture too!
my goat
cool, thx u for sharing
It's surprising to learn that you majored in mechanical engineering for your PhD. How did you make such a big move?
++
"MIT EI seminar" reads like "With egg seminar" when you are German, super weird title.... Is it about breakfast? 😂
No Q&A?
Very shallow perspective.
Propagandistic