Amy is an amazing person and an astute futurist. Her new book on bio tech with Andrew Hessel is well worth a read.
she is great... all these negative comments are from know-it-alls and jealous types
8:04 is not Lee Sedol, but Fan Hui, European champion (2nd dan)
I feel like there should be a lot more eyes on this....
people don't listen to women, but they should
Thanks for sharing!!!
5:15 1939 World's Fair: Elektro 5:24 Now it turns out it does exist 14:30 AlphaGo 18:30 Whaaaaaat!?
ceo search trick is not working for me
19:30 "You hear people all the time pontificate [about] 'When is Artificial General Intelligence [AGI] coming?' ... I'm telling you it's here."
...Yeah, that's not even remotely true, and it strongly suggests that she has no idea what A*I is. To recap (because this is important): the fact that a machine-learning ensemble can learn things about playing Go that no human has ever imagined is NOT an indication of AGI. The Go algorithm cannot EVER take over your job as an Uber driver or stock analyst or whatever; it's not a general AI, it's a narrow AI. MOST importantly, it has no "common sense" and it cannot learn anything outside the narrow world of Go.
And the fact that ML models are an inscrutable "black box" ("we have no idea what the computer has learned!") is a well-known limitation of current machine learning, not some sinister or inspiring indication that AGI has arrived (if it were, markets would also constitute AGI, which they don't).
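To make the black-box point concrete, here's a toy sketch of my own (hand-picked weights, nothing from the talk): even a two-neuron network that computes XOR stores its entire "knowledge" as bare numbers that don't read as an explanation. Scale that up to the millions of weights in something like AlphaGo, and inspecting them tells you essentially nothing.

```python
def relu(z):
    """Rectified linear unit, the standard neural-net nonlinearity."""
    return max(0.0, z)

def xor_net(x1, x2):
    """A minimal 2-2-1 network computing XOR with hand-picked weights.

    The network's entire "understanding" of XOR is the numbers
    1, 1, 0, 1, 1, -1, 1, -2 below -- correct, but not an explanation.
    """
    h1 = relu(1.0 * x1 + 1.0 * x2 + 0.0)   # hidden unit 1: weights (1, 1), bias 0
    h2 = relu(1.0 * x1 + 1.0 * x2 - 1.0)   # hidden unit 2: weights (1, 1), bias -1
    return 1.0 * h1 - 2.0 * h2             # output unit: weights (1, -2)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))       # outputs 0, 1, 1, 0
```

In a real trained network the weights come out of gradient descent instead of being hand-picked, which only makes them more opaque.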
She then discusses her horror at the social credit system, which is a little like US credit scores but based more on citizenship. She implies that this score is calculated by ML algorithms, but that's not true. The only role of any kind of machine learning in the calculation seems to be the inclusion of indicators from technologies like facial recognition (e.g., if they see you in the street without a face mask, your score may go down). So, yes, the authorities have AI-adjacent technologies like facial recognition available, but the system looks pretty low-tech: if they find you doing things they don't like, you get blacklisted. That is horrifying (for a westerner), but it has almost nothing to do with AI, so much so that conflating the two is either a glaring error or a lie.
This is a typical business-school talk. She makes a lot of "good points" without really explaining anything (notice how she throws around acronyms like A*I and technical terms like "machine learning" without defining them) and without proposing any solutions. I don't know, but my subjective prior underweights the likelihood that she has more than a shallow grasp of the subject matter. I doubt she's ever done anything more data-intensive than a t-test.
And some of her "good points" don't hold water. She claims that the only AI research outside of China is done by nine US companies (which is both the title of this video and an obvious falsehood). Then later, she talks about "all the start-ups" that have a focus on AI, immediately realizes that she has just contradicted herself, and amends her remark: these start-ups are "partnering" with the US nine. This exclusivity argument is complete horseshit, and it perpetuates the myth that the kinds of machine-learning applications we are increasingly seeing can only be created by huge enterprises. In fact, the technology for machine learning is widespread, and some of the most dangerous applications of machine learning (that we know of) seem to come from small start-ups that don't have the visibility of a Facebook or an Amazon, are controlled by a single individual, and seem to have loose ethical standards (e.g., Clearview AI, Cambridge Analytica).
Another good point (rather laboriously made) is that real-world ethics are complex... But what was her point? I think the most obvious one is that a single ethics class offered by an engineering department probably doesn't give engineers much grounding to create systems that solve ethical dilemmas. I'm not sure that's either relevant or true. No AI is going to be explicitly programmed to solve the trolley problem. The initial ANI will probably be trained to make ethical decisions that map to the decisions made by a majority of people. In other words, ANI engineers will solve ethical problems using exactly the same machine-learning methodology they use to solve the whole problem (driving, playing Go, etc.).
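Here's a minimal sketch of what "map to the majority" could mean in practice (the scenario names, labels, and survey data are entirely invented for illustration): collect human judgments per scenario, take the majority answer, and use those answers as the supervised training labels, exactly like any other ML target.

```python
from collections import Counter

# Hypothetical survey: for each driving dilemma, what a panel of
# respondents said the vehicle should do. All names/labels invented.
survey = {
    "obstacle_ahead":        ["brake", "brake", "swerve", "brake", "brake"],
    "jaywalking_pedestrian": ["yield", "yield", "yield", "proceed", "yield"],
}

def majority_label(votes):
    """The most common human judgment for one scenario."""
    return Counter(votes).most_common(1)[0][0]

# These majority answers would become the supervised training labels
# that the driving model is fit against, same as any other ML target.
labels = {scenario: majority_label(votes) for scenario, votes in survey.items()}
print(labels)  # {'obstacle_ahead': 'brake', 'jaywalking_pedestrian': 'yield'}
```

The real engineering problem is then ordinary supervised learning from sensor data to those labels, not hand-coding trolley-problem logic.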
It's also disingenuous to say that there are NO existing ethical guidelines. For example, it's highly likely that companies that create A*I will be held liable for any damages, so the same kinds of US product-liability laws that deter food poisoning will be in force to deter AI mistakes. We've already had at least two people die while using Tesla Autopilot, and Tesla hasn't been sued because in both cases there was good evidence that the operators were not adhering to the manufacturer's operating instructions (basically: hands on the wheel, eyes on the road). (And, BTW, neither death was caused by "ethics" but by the vision system getting confused.)
That said, she's certainly correct that ethical guidelines are important as machine learning is applied. The same could be said of business schools. Does she have an obligation to present factually correct information? To be clear? To educate? To be balanced?
I guess she can say that "we get what we pay for," but I think she does have some obligations, and I think her talk fails to meet them. For a "quantitative futurist," I'm extremely disappointed. It sounds like she's talking to a bunch of CEOs, and I would be scared if anyone took action based on this talk.
What about OpenAI?
6:50 "Computers WHERE dumb"
AMZN, GOOG, MSFT and AAPL are still the best.
#supernetwork
FAA GAMBIT
Facebook, Alibaba, Apple, Google, Amazon, Microsoft, Baidu, IBM, Tencent
Who ever does not have the mark, name or number, could not buy or sell ( Rev. 13: 17 ) . AI will make this happen 🕸
Social Credit is what you should be worried about.
She has no idea!
I can't believe how negative some of these comments are. You personally attack her, I think because she's a woman?
No - because she's WRONG. Read the comment by Ricki.L. above.
A lot of people can't accept a remotely articulate, not-submissive woman. It's really a threat to them.
Yes - if she were Steve or Bill, they would just be shutting UP and giving him ALL their money. :-D
She is like someone who does not know how to cook. All she does is taste the food and make her living by articulating that.
She's just plain wrong. I am an architect who works in this field, and I can tell she's making stuff up.
Provide us all links to your talks on YouTube, then.