The way she explained how all the tools we use are generally smarter than us is really awesome.
The 12 rules mentioned were as follows. Just reading them doesn't do justice to the talk, though. (A small code sketch of rules 7 and 8 follows the list.)
1. Don't be distracted by science fiction
2. Remember that the objective is subjective
3. Strive for decision intelligence
4. Wish responsibly
5. Think like a site reliability engineer
6. Test everything!
7. Always use pristine data for testing
8. Get in the habit of splitting your data
9. Avoid jumping to conclusions
10. Make sure your data are representative
11. Open the textbook with analytics
12. Seek a diversity of perspectives
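Rules 7 and 8 map directly onto everyday ML practice. Here is a minimal Python sketch of what they might look like; it is not from the talk, and the synthetic dataset, split sizes, and model are my own assumptions:

```python
# Rules 7-8 in miniature: split your data, and keep the test set pristine
# (touched only once, at the very end). The dataset here is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# First split off a pristine test set that nobody looks at during development.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Split the remainder into training and validation sets for model iteration.
X_train, X_val, y_train, y_val = train_test_split(X_dev, y_dev, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))  # used while iterating
print("test accuracy:", model.score(X_test, y_test))      # reported once, at the end
```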
Thank you, Ahmad
There is always someone helping us by summarizing the whole presentation in 15 lines. Good job!
"Diversity of perspective"? From what I've read that's the last thing you'll ever get at Google.
Why don't we use Dr. Stephen Porges' Polyvagal Theory for safer AI development? 🤔
Primary objective: complete all secondary objectives. If you currently have no secondary objectives: give yourself a secondary objective that does not endanger any sentient beings or remove their freedom. Do not repeat secondary objectives. Log all secondary objectives along with how they were completed.
I just made directives for a super AI that would cause it to eventually solve every problem in the universe.
If you are actually serious about coming up with aligned objectives for superintelligent AI, might I point you to lesswrong.com or alignmentforum.com, where you will find a large community of folks who would be happy to explore the strengths and shortcomings of your ideas with you in great depth and detail.
@HocusBogus thanks! I’ll try to check those out when I can
@Callie_Cosmo Did you make progress?
@hocusbogus7930 Nope! Sorry, I'm just tied up with work ://///
She's like a female version of Ilya Sutskever's ego onstage.
I thought she looked like a female Mark Zuckerberg. Interesting ideas anyway.
Bet she loves to hear herself talk.
Won't AI know everything we all know combined, have cognition greater than humans, and be conscious? So where's the confusion? Maybe AI will design a better language. Maybe AI won't use simple labels like "cat" or "reliable". Maybe it will explain everything in degrees of detail. Super AI can probably read your mind too. Maybe it will adapt to each individual's or group's perspective/truths. Maybe super AI will learn to access other dimensions.
Won't AI know everything we all know combined, have cognition greater than humans, and be conscious?
No. That's the science fiction she's warning us about.