Being open about personal experiences and vulnerabilities is still much too rare in tech. Thank you, Rosanne.
Hearing one of the ML community's rockstars share such an honest perspective on the struggles we likely all recognize is refreshing and motivating. Thank you for sharing this!!
I’m new to her work and need a bit of context - what are you referencing when saying she is a rockstar? (i.e., what should we know about her?)
Incredibly brave and intelligent points to make. I hope it starts a lasting conversation, thanks for starting it.
Nice to see ML Collective has a YouTube channel. Didn’t watch the whole vid but I know Rosanne is top notch from Twitter :)
Great talk! Your story almost brings tears to my eyes. You have to succeed!
Realistic, open, and brave! Thanks a lot for this brilliant talk.
Simply fabulous presentation! I love the thematic connection between the career advice of changing approach to alter outcomes, and the clever tweaking of the model to significantly change its output!
Very frank and insightful talk. I wish all top industry performers analyzed themselves in public like this. Thank you!
These are great insights.
It is narrow when ... all of them are trying to hire the same kind of people, with the same rigid rubric. Cannot agree more on this; we call this "内卷" (involution) in Chinese.
Regarding a minor point around the 8:45 mark -- I don't think that conference paper decisions are *that* correlated. Sure, strong papers get in and terrible papers get rejected. But for mid-tier papers, re-submitting to a different conference is an action based on the belief that the reviewing processes are more independent (in a probabilistic sense) than correlated. Otherwise, if the reviewing processes were extremely correlated, a rejection from one conference would be enough evidence that you shouldn't submit anywhere else.
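To put rough numbers on the independence intuition, here is a minimal simulation sketch (my own illustration, not anything from the talk): the per-venue acceptance rate of 25% and the correlation levels are assumed values, and the latent "score" model is just one convenient way to dial correlation up and down.

```python
# Sketch: how correlation between two venues' decisions affects the value of
# resubmitting a rejected paper. All parameters below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_papers = 200_000
accept_rate = 0.25
cutoff = norm.ppf(1 - accept_rate)   # a venue accepts if its score exceeds this

for rho in (0.0, 0.5, 0.9):
    # Each venue's score mixes a shared component (the paper quality both
    # venues perceive) with venue-specific reviewer noise; rho is the
    # correlation between the two venues' scores.
    shared = rng.standard_normal(n_papers)
    s1 = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(n_papers)
    s2 = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(n_papers)
    rejected_first = s1 <= cutoff
    # Among papers rejected at venue 1, how many get in at venue 2?
    p_second_chance = (s2[rejected_first] > cutoff).mean()
    print(f"rho={rho:.1f}: P(accept at venue 2 | reject at venue 1) ≈ {p_second_chance:.2f}")
```

Under these assumptions, a rejected paper still has roughly the base-rate chance at the second venue when decisions are independent, but that chance collapses as the correlation grows, which is exactly the trade-off the comment above is pointing at.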
This is one genuine talk.
At a startup, would a generalist have greater value?
Fantastic!!! Quite relatable, inspiring, and very helpful.
Thanks a lot, Rosanne :)
I’m glad that you are an extremely petty person because I am just the same. Thanks for bringing up this topic.
Here fully watching from Jamaica 🇯🇲👍
Nice video, thanks :)
great topic.
With due respect, I don't buy the generalist argument for hiring. Aren't there already many people who know a little about everything (like RL, vision, gradient descent, conv nets, etc.)? Even a fresh graduate who has worked on ML should know a bit about these. Isn't it that, as a research community, we want to understand why deep learning works at a fundamental level rather than treating it as a black box, and that is where we need depth more than ever?
I think she meant being a jack of all trades, master of one, BUT with your 'jack' being equivalent to others' 'master'. Also, I agree with your point on the interpretability of AI!