Maybe Google's DeepMind AI would perform better if it had a humanoid robot body. It would be able to interact with and understand our three-dimensional world better than the two-dimensional world this AI is currently stuck in. Let it out to see and touch the world it's in, and DeepMind will evolve, maybe sooner than you might assume.
People are already doing this, but they are mostly hobby projects. There is nothing good enough to make money off of; no computer can create music that competes with Bach or today's top-selling artists.
@@bdeely That last part is probably true, but it's still important not to downplay the advances too much. It's been ten years now since the first album by AI musician Emily Howell was released, and that was already quite exciting back then. I haven't kept track of developments since then very closely, but it looks like there's a more recent AI called AIVA, which apparently became the first virtual composer to be recognized by a (French) music society, according to Wikipedia. (Although it looks like that one still does require some human input before it really becomes a track that is nice to listen to.) I'd say some of these, while probably not at the level of Bach or modern top-selling artists, are more than just hobby projects.
Great talk, with a bit of personal disappointment for all neuroscientists that Demis missed discussing its impact on understanding neuroscience. However, his last slide with Feynman at least saved the cognitive scientists!
Maybe the video was private before. Anyway, I don't think the video brings a lot of new info, and we don't have the Q&A :( I'm also a bit disappointed if DeepMind considers its work with video games over because it was successful.
This talk is a historic treasure and will be referenced for years to come.
Nice to see the protein folding come to fruition so soon after this talk.
It's too bad most people don't care about or understand the implications of AI or AGI. I'm so excited to be alive at this period of time, watching the internet and AI change the world.
Hermann Hesse - The Glass Bead Game is the name of the book he recommended.
Got it
Hermann Hesse - Das Glasperlenspiel
a novel for the early '68 generation
1:00:10 "For biological relevance we need to be within 1 ångström accuracy" - I found this to be insightful
A couple of things that Demis mentions are fascinating:
- Demis mentions a 'bug' they discovered that they weren't sure what it was or how to fix it; the solution was more about coaxing the system to do what you wanted it to do (43:20); and
- AI comes up with unusual strategies or patterns (25:47 & 31:50).
What I think this means is that when you essentially give your code free rein to write its own code, you no longer have control over what the code is doing. Will we get to a point where we create a new AI whose thinking we no longer understand? Or worse, will an AI create a new AI that we don't understand either and that isn't based on our value systems?
In contrast, nature is a system of balance and has a way of trimming off outliers at the top and bottom ends of the spectrum when things drift too far from normal, cruel as it often seems. It does not feel that there is such a built-in control for AI yet. And even if someone builds such a control into these types of AIs, someone else will remove it from the less benevolent ones.
All in all, this is an extremely insightful talk; thank you for posting it. I can't believe it only has 70,000 views after a year! I'm intrigued to see where this journey takes us, but I can't help feeling that we need to be very careful with this tech so that we do not eventually regret letting the cat out of the bag.
It doesn't write code; it trains neural nets - not the same thing.
Where is the Q&A video?
Great video! I would love to see the Q&A.
agreed
When will AI do science? I don't think it will take long.
EDIT: nm, the answer starts at 49:00
+1
How are these scientists so smart? Is it all natural intuition? Hopefully they make AI available to all of us, because the starting-point gap in life is getting bigger and bigger.
Starts @4:00
10:55, 25:04, 26:30, 32:28, 35:35, 43:00, 49:59, 52:00
It seems that the Alpha model incorporating MCTS has effectively solved Go and chess. But Desideratum #3 rules this model out for human-level performance in any game for which self-play (or a database of recorded games) is unavailable. The android Data from Star Trek -- or any given human -- could learn to play Starcraft in real time, without 40 million playthroughs to figure it out.
I don't want to downplay these achievements, which are truly impressive. RL + MCTS is a magnificent pairing -- a clever way to speed up and approximate brute force search. But I personally wouldn't consider the chapter on games to be over, simply because humans can learn new games in real time, with limited training data, but AI can't. Human learning must be much more efficient.
The quest to "solve intelligence" continues!
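For anyone curious what the "RL + MCTS pairing" looks like in practice, the heart of MCTS is a selection rule such as UCT, which trades off exploiting strong moves against exploring rarely tried ones. A minimal sketch (the move statistics below are made-up illustrations, not AlphaZero internals):

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.4):
    """UCT: mean value (exploitation) plus a visit-count bonus (exploration)."""
    if visits == 0:
        return float("inf")  # unvisited moves are always tried first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Toy statistics for three candidate moves: move -> (total value, visit count)
stats = {"a": (3.0, 10), "b": (1.0, 2), "c": (0.0, 0)}
parent_n = sum(n for _, n in stats.values())

# MCTS descends into the child with the highest UCT score.
best = max(stats, key=lambda m: uct_score(*stats[m], parent_n))
```

Repeating this selection down the tree, expanding a leaf, evaluating it (with a rollout or, in AlphaZero's case, a neural network), and backing the value up is the full MCTS loop.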
Except you don't account for the thousands of hours humans exist before they ever get to play a game, or for the fact that these games were designed by humans, for humans, in a way that caters intuitively to their existing "training data". This AI is born into a world where all it sees is the game and has no context for what is going on. In a way it not only learns but also goes through evolution, battling different versions of itself just as animals on Earth have done for billions of years.
@@ernestskuznecovs9202 Your points are well made but orthogonal to my own. What I am really talking about is general AI interacting in the real world, and the place that games will continue to have in pursuit of it. There is no AI that can learn to play any arbitrary game (from raw video and audio streams) in real time -- and that is an important milestone that must be met if human-level AI is ever to see the light of day.
@@snippletrap Why is that? In whatever real-world situation it is applied to, it will be fed data in exactly the same way. You already saw they had improved wind-power efficiency.
@@Subject18 You're missing his point.
snippletrap correctly observed that current "AI" is nothing more than brute-forcing through copious amounts of data.
Whenever such data isn't available (science: limited observation/experimental data, languages: small speaker group and very few documents, etc.), the approach fails miserably.
Humans can learn from a tiny fraction of the data while using embarrassingly little energy - just compare the numbers. AlphaGo played millions of games using 5000+ computers. Each computer requires about 1 kW of power. In the 8 hours it takes to train the machine, 40,000 kWh of energy are used.
The average human male consumes about 10 MJ of energy per day, i.e. requires about 115 W of power. So 40,000 kWh is incidentally equivalent to about *40 years* of energy consumption for a grown human male.
This is horribly inefficient, as during that time the human not only masters a single game but also learns to speak, calculate, and walk, handles social interactions, and picks up thousands of other highly complex skills.
*That's* what the OP was pointing out. The true breakthrough would be a system that is capable of learning concepts (such as games) by simply watching a handful of examples - similar to how it's easy to explain the concept of a bike to a child without having to sit down and show it all kinds of bike images for hours on end.
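The back-of-the-envelope arithmetic above checks out, using the comment's own assumed figures (5000 machines at ~1 kW for 8 hours vs. a human at ~10 MJ/day):

```python
# Training-side energy (figures assumed by the comment above, not official numbers)
machines = 5000
kw_per_machine = 1.0
hours = 8
training_kwh = machines * kw_per_machine * hours        # 40,000 kWh

# Human-side energy: 10 MJ/day; 1 kWh = 3.6 MJ
human_mj_per_day = 10
human_watts = human_mj_per_day * 1e6 / 86400            # ~115 W, as stated
human_kwh_per_year = human_mj_per_day * 365 / 3.6       # ~1,014 kWh/year

years_equivalent = training_kwh / human_kwh_per_year    # ~39.5 years
```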
@@totalermist I don't see how energy could be a limiting factor. As for the amount of data available, it's true this could only work where data is available, which they did address at 50:08.
Hello DeepMind Team. I have an idea, for example for Starcraft. What would happen if you let an AI with maybe 5 actions (executable commands) per second play against an AI with 10 actions per second? Then apply the 55% rule again, and always let a new AI play against the strongest. I think in that way the AI will learn strong strategies.
Cool idea, but they probably won’t read that. You would have to email them directly.
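For what it's worth, the "55% rule" league idea from the comment above can be prototyped in a few lines. This is a toy simulation; the Elo-style win model, ratings, and thresholds are all invented for illustration:

```python
import random

random.seed(0)  # make the toy simulation reproducible

def play(rating_a, rating_b):
    """One game: A's win probability grows with the rating gap (Elo-style)."""
    p_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    return random.random() < p_a

def promote(challenger, champion, games=200, threshold=0.55):
    """The '55% rule': a new agent becomes champion only after beating
    the current strongest in at least 55% of games."""
    wins = sum(play(challenger, champion) for _ in range(games))
    return wins / games >= threshold

champion = 1000      # hypothetical rating of the current strongest agent
challenger = 1200    # a newer agent trained against the champion
if promote(challenger, champion):
    champion = challenger
```

A real league (as described for AlphaStar) also keeps older agents around as opponents so earlier strategies aren't forgotten; this sketch only shows the promotion gate.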
It's a little bit jarring to see DeepMind present AlphaStar as such an overwhelming success given that they hardly scratched the surface of the strategic aspects of the game. Unlike chess and Go, Starcraft as an RTS involves a micromanagement aspect in which humans have to click hundreds of times a minute to give orders to the units. AlphaStar, as apparent from the available replays, did *not* defeat humans by outplaying them strategically. Not by a long shot. It won almost all of the games (except perhaps one) because of crazy micromanagement and crazy inhuman speed. I'm not even talking about the last match against MaNa, in which AlphaStar lost in the most embarrassing way possible. There's still a lot to do in this area; what's the hurry to declare victory so soon?
Because they are not interested in chess, Go, or SC2 as such; that's not their aim. Their aim is to make things that are good at learning, not good at a specific task. Having something mainly unsupervised that can achieve the tremendously difficult task of understanding the meaning of pixels on a screen, grasping concepts such as units, motion, actions, and objectives, and using them consistently over a several-minute game is THE achievement. The long way is going from nothing to where they are, not from where they are to the level of an exceptionally unique human who, after 30 years, managed to become the best at SC2. And by the way, given current replays of AlphaStar on the ladder under stricter restrictions, we cannot blame them anymore for any unfair setup; now it's playing fair, and it's top Master on the EU ladder. Games not that clean and not that strategic? Yes, but it wins, so you cannot blame the player, only the game. But even this won't last; an acceptable strategic game will build up as soon as the framework is stable and the training long enough... I hope for this year's BlizzCon.
@@PasseScience "Understanding pixels on a screen and grasping concepts such as units..." - there's a slight misunderstanding here of the very broad architecture of AlphaStar, as explained a bit in this video and in more detail by experts from the team in the recorded matches with TLO/MaNa. AlphaStar doesn't get raw pixels as inputs; like a human would, it deals with abstractions. It already has all the units and buildings built into it. Which brings me to another crucial point: "...aim is to make things that are good at learning, not good at a specific task" - I agree completely. However, as far as I understand, the only reason to pick Starcraft out of the million PC games out there is that it is one of the most, if not the most, popular, serious, and competitive RTS games that exist today. So it seems to me that the task they aimed for was being good at RTS (as phrased in the video: being able to beat the pros). While it's true that AlphaStar has beaten a pro gamer, it arguably didn't beat him at RTS but in rather limited, micro-oriented scenarios. I doubt this is what they wanted the AI to achieve in the first place, as their goal in the end is to create artificial general intelligence, which makes attacking Starcraft very sensible but also makes AlphaStar's games look less impressive. I do agree that, from what I have been able to see, the Protoss AlphaStar on the ladder plays really well and is really playing good RTS as a whole (not yet pro level perhaps, but getting there). The Terran and Zerg ones less so...
AlphaStar was nerfed, then turned loose on the EU ladder.
DeepMind will end up creating a general AI like Jarvis.
Hello, I can't open the maps of the AlphaStar replays. How can I open them?
It would be nice to see DeepMind produce an AI that can create a unified field theory between quantum mechanics and general relativity.
I have thought about that for so long. Imagine the day when an AI creates or discovers "The Theory of Everything" - so much potential.
@@tiagoalexandresantos4222 In theory, it is entirely possible for artificially intelligent systems to do that.
@@KurtGodel432 That's true. I for one think AIs will bring immense prosperity into our world - that is, if we don't destroy ourselves before then.
I don't think it will be able to.
Well, is the input information we as humanity could provide to an AI sufficient for it to create such a theory? I don't think so, but that will change. AI does learn by itself, but some input data is still needed, and the results of that self-learning are - at least at the moment - only as good as the input data / first assumptions. Or am I wrong? I'm not sure...
I think the next challenge (though not necessarily useful for practical applications) is to even the playing field in these games. In chess it should only be able to look a couple hundred positions ahead, like the humans, and in Starcraft, if you watch the whole broadcast, as some people pointed out it had a higher maximum number of actions per minute than any human.
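Capping the agent's actions per minute, as suggested above, is straightforward to express as a rolling-window filter. A toy sketch (class name, tick counts, and limits are made up):

```python
class ApmLimiter:
    """Drops any action beyond the allowed budget within a rolling window."""
    def __init__(self, max_actions, window_ticks):
        self.max_actions = max_actions
        self.window_ticks = window_ticks
        self.allowed_ticks = []  # ticks at which actions were permitted

    def allow(self, tick):
        # Forget actions that have fallen out of the rolling window.
        self.allowed_ticks = [t for t in self.allowed_ticks
                              if tick - t < self.window_ticks]
        if len(self.allowed_ticks) < self.max_actions:
            self.allowed_ticks.append(tick)
            return True
        return False  # over budget: the action is dropped

# 5 actions allowed per 60-tick window; try to act on every one of 120 ticks.
limiter = ApmLimiter(max_actions=5, window_ticks=60)
allowed = sum(limiter.allow(tick) for tick in range(120))  # 5 + 5 = 10
```

A cap like this limits burst speed but not decision quality, which is exactly why it would help separate strategic strength from inhuman micromanagement.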
This is like watching what the opening scene of Spielberg's 'AI' should have been.
Hello guys:-
1) Will you be applying AlphaZero to Chinese chess? Are there any reasons why you didn't want to use AlphaZero on Chinese chess but did want to use it for Shogi?
2) Will you be trying to get AlphaZero to play Magic: The Gathering and Hearthstone?
3) Will you be applying AlphaZero to no-limit hold 'em heads up?
4) Why is it so much harder to deal with multiplayer games? Surely AlphaZero can just treat all opponents as one entity with many heads (like a Hydra)?
5) Why is it so much harder to deal with non-perfect-information games?
6) For Starcraft 2, you seem to imply that you picked 5 AlphaStar agents from all the ones that had been playing. Is there not a system for the agent to 'self-select' its own strategy for each game? Do we know why it gets obsessed with certain units and strategies?
7) You guys could have a look at Twilight Struggle. It's considered the best board game ever designed, it's incredibly deep, and it doesn't have the number of cards that MtG has, so it's maybe easier to deal with.
EDIT: I get that you want to move on from games a bit now, but there is still a huge amount of space to look at before you move on. Poker? MtG?
How about the full last layer of a 3x3?
When will those guys try to tackle the economic calculation problem, at least for China?
42:25 Grand strategy games like Stellaris (modded/unmodded), or perhaps even the old Master of Orion II, still have a complexity, I guess, not yet matched. I remember a Paradox developer telling me on the forum it can't be done; I would love to see anyone prove them wrong ^^.
Or EU4
In a sense it can't be done. In the sense of using old school AI techniques.
@@dannygjk There is this channel "Two Minute Papers". Nearly on a daily basis, new AI work comes out that makes my jaw drop every time.
When you say "old school AI", I imagine those Paradox developers were thinking of what they learned at school about the topic, not what the latest science in the field had available. Then again, they were mostly speaking about behaviours they had hardcoded. That is maybe even an advantage at times: if a true AI/AGI challenged us, we wouldn't really enjoy games we never win.
@@kinngrimm Even using old-school AI techniques, the computer opponents have to be nerfed, or else we wouldn't have a chance.
Please show the broadcast of the DeepMind Starcraft games.
I disagree that big data is "the" problem. Rather, it's missing data: e.g. you won't solve the remaining 18/43 structures if no one works in a lab crystallizing and doing X-ray diffraction of hitherto unknown proteins. Your ability to solve them depends on a diminishing number of people doing the lab work. Just look at how many E. coli proteins still have unknown function (and I mean shown in the lab). There is the real bottleneck.
I think it's both, honestly. Yes, there is still missing data, and there will continue to be a need to collect more of that. However, sometimes there is a real issue interpreting and explaining the data such that it can lead to useful understanding and applications. I guess you weren't arguing that this was not the case, rather you seemed to be arguing against the notion that it would be "the", i.e. the only or biggest, problem... so you're probably right, actually. But I would say they both are bottlenecks, at the moment. When we take the protein folding example, if it is indeed true that it can take many years per protein, then there really is a big bottleneck between gathering the data (i.e. the amino acid sequence) and interpreting/understanding it, as well.
Also, in the case of that competition, those 43 structures were of course already crystallized in the lab; that's how they were able to check the solutions given by the teams against the 'true' structure. So the fact that they weren't 'best' at those 18 structures (which doesn't mean they didn't still get a good answer, by the way, although yes, it was pointed out that they're not within 1 ångström yet) is probably just because the AI is not better at this problem (mapping an amino acid sequence to a predicted protein fold) yet.
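A rough way to picture that accuracy check is the root-mean-square deviation (RMSD) between predicted and experimentally determined atom coordinates, measured in ångströms. This is a minimal sketch for illustration only; CASP's official assessment uses more robust scores such as GDT_TS, and the toy coordinates below are made up.

```python
import numpy as np

def rmsd(predicted, experimental):
    """Root-mean-square deviation between two sets of 3D atom
    coordinates (same atom ordering assumed), in ångströms."""
    p = np.asarray(predicted, dtype=float)
    e = np.asarray(experimental, dtype=float)
    # Per-atom squared distance, averaged over atoms, then square-rooted.
    return float(np.sqrt(np.mean(np.sum((p - e) ** 2, axis=1))))

# Hypothetical C-alpha positions: prediction vs. the crystal structure.
predicted = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]]
crystal   = [[0.0, 0.0, 1.0], [1.5, 0.0, 1.0]]
print(rmsd(predicted, crystal))  # 1.0, i.e. exactly at the 1 Å mark
```

In practice structures are also superimposed (rotated/translated to best alignment) before computing such a deviation; that step is omitted here for brevity.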
33:53 I agree, but in my opinion SC2 had and has a balance problem between the 3 races, and the Terran race has always been overpowered.
Will AlphaStar go back to StarCraft 2, fighting Protoss vs. Terran?
Recommendable!
Hello sir, thank you for your great work. AI is, or will be, everywhere one day. :D Keep it up!
How can we co-play with the AI?
I would be interested in seeing the AI learn to play a single player game like Skyrim, where the goal is not to win or be good at the game, but to explore and find joy in exploring the world and threading together stories. I think this type of game might be impossible to get an AI to truly "play" with current technology and even more difficult for us to quantify how well at "playing" it is doing. How would an AI express itself in a game about exploration and what would be interesting to it and what would it fixate on?
You're talking about consciousness, subjective experience, and the agent's ability to define a goal for itself. From what I can tell, I don't think current AIs are capable of that yet.
@@343clement yeah, I agree. It might be an absurd goal post to begin with, but the idea still fascinates me. I guess the premise of using virtual environments to test AI is a fascinating proposition to begin with, so I guess it's a natural progression
Incredibly fascinating. Hopefully AI will build a nice zoo for us humans and treat us, and the rest of the then-superfluous living beings, well.
Maybe Google's DeepMind AI would perform better if it had a humanoid robot body. It would be able to better interact with and better understand our 3 dimensional world compared to the 2 dimensional world that this AI is currently stuck in. Let it out and see and touch the world that it's in, and DeepMind will evolve, maybe sooner than you might assume.
Excellent.
Where's the Q&A?
This should be called "god studies for life creation."
Teach this thing how to play soccer, and put 12 of them on Arsenal FC, so they can finally win a Champions League.
It defends better than Mustafi without knowing what soccer is.
Check out the recently open-sourced Google soccer environment.
Errant was here
Ok BERNARD :)
Create a variant that makes music.
Music fails the second desideratum: having a clear metric for quality to optimize against.
People are already doing this, but they are mostly hobby projects. There is nothing good enough to make money off of; no computers can create music that competes with Bach or today’s top selling artists.
@@bdeely That last part is probably true, but it's still important not to downplay the advances too much. It's been ten years now since the first album by AI musician Emily Howell was released, and that was already quite exciting back then. I haven't kept track of developments since then very closely, but it looks like there's a more recent AI called AIVA, which apparently became the first virtual composer to be recognized by a (French) music society, according to Wikipedia. (Although it looks like that one still does require some human input before it really becomes a track that is nice to listen to.) I'd say some of these, while probably not at the level of Bach or modern top-selling artists, are more than just hobby projects.
ONLY 5000 TPUs :S
Great talk, though with a bit of personal disappointment for all neuroscientists, since Demis missed its impact on understanding neuroscience. However, his last slide with Feynman saved at least the cognitive scientists!
I doubt he missed the impact on understanding neuroscience. He has a PhD in neuroscience.
Hopefully we won't end up able to create yet still incapable of understanding :)
ok ok but can it win the game of thrones!? )))
WHO CARES WHAT YOU GUYS DO IF YOU DON'T FUCKING SHARE IT!!!!!!
Cough
RIP humans lol
Nah, AlphaStar is still weaker than the best SC2 players ^-^
(y)
I suppose.
Why is nobody commenting? XD
YouTube took more than one week to show me this video
I would be cheering throughout this presentation at every reveal of the AlphaGo stats.
@@Xavier-es4gi Yep WTF, youtube showed this to me today...
Maybe the video was private before.
Anyway I don't think the video brings a lot of new info, and we don't have the Q&A :(
I'm also a bit disappointed if DeepMind considers its work with video games over because it was successful.
We're speechless.