This is my favorite video that Microsoft makes. So cool
Thank you so much! Appreciate your taking the time to comment and glad you liked it.
Mark Russinovich is a legend!
Oh, he's good alright.
Awesome to see this, especially the hardware, networking and data center breakdown and info.
Glad you enjoyed it!
That’s why I choose to buy their stock: they know what it means to actually work. It was a long way for me from the early 90s, when I, a hardcore Unix user, would only mention Windows alongside the words “must die”, to spending my free money on their stock and actually admitting what this company has been doing all this time. Thank you guys for keeping that spirit!
They do awesome work; VS Code is basically the best program I've ever used. It's just such a shame Windows 11 is garbage all over again. I just moved to Ubuntu at home and couldn't be happier with it.
Very very informative…sent it to my kid who is in college to see and keep seeing till they understand every word!!!
Man, Mark is God-status at Microsoft
I’m so glad people much smarter than I are working on this.
With Great Power comes Great Capabilities...
Microsoft 📲💻🖥🎮
Timeline: 9:00 What happens to the heat energy extracted during cooling? Does it get used to generate electricity to power other devices or supply energy to some of the cooling fans, or is it not used for anything?
It’s not reused. The heat is distributed across millions of litres of water and it can’t be concentrated back into a single spot. Sadly we can’t take 2 litres of 50°C water and turn it into 1 litre of 100°C water…
The water is heated, but not heated enough to be very useful for much beyond heating offices / nearby buildings.
I’m curious if someone will use the heat for some sort of low energy industrial process like drying cement.
@@jamieknight326 like keeping the tea, coffee, eggs etc. warm. 🤣
Underrated video, a lot of cool useful details!
Thank you! Happy that it's useful - and it keeps evolving quickly.
Ah, the sysinternals guy. I owe half my career to this guy. Thx.
Great info about the architecture! Thank you.
Thank you! Glad it helped on the architecture front.
5 times the Azure supercomputer deployed each month, that's insane!!! What does that mean for training next-gen frontier models? 30x the November 2023 system: does that mean you can train 30x longer, 30x bigger, or 30x faster, or what? Will this continue up to the end of the year, reaching almost 65x compute in one year?
Good questions. We have deployed 30x total, or on average 5 additional instances per month, of the November 2023 Top 500 submission with 14k networked GPUs, 1.1M cores and 561 petaflops. These will continue getting bigger, and more instances will be provisioned in the future. And now there are more options for GPUs and AI accelerators too, plus the Nvidia H200 and Blackwell architectures are coming soon with more speed, power and efficiency.
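As a rough back-of-the-envelope (my own arithmetic based on the figures quoted above, not an official total): 30 instances of that November 2023 system works out to hundreds of thousands of GPUs and well over an exaflop of aggregate compute.

```python
# Rough cumulative-scale estimate from the numbers quoted above (illustrative only):
# one Nov 2023 Top 500 instance ~= 14k GPUs, 1.1M cores, 561 petaflops; ~30 instances since.
instances = 30
gpus_per_instance = 14_000
cores_per_instance = 1_100_000
pflops_per_instance = 561

print(f"GPUs:    ~{instances * gpus_per_instance:,}")                      # ~420,000
print(f"Cores:   ~{instances * cores_per_instance:,}")                     # ~33,000,000
print(f"Compute: ~{instances * pflops_per_instance / 1000:.1f} exaflops")  # ~16.8
```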
Most fascinating part for me is the Multi-LORA.
It is. It's a little like differencing disks with the additional state/data.
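For anyone curious what that analogy means mechanically, here is a minimal sketch (my own illustration with made-up sizes, not Azure's implementation): the large base weights are shared, and each request applies only its own small low-rank adapter on top, so many fine-tunes can be served from one copy of the base model.

```python
import numpy as np

# Minimal Multi-LoRA sketch (illustrative only): one shared base weight matrix W,
# plus a small low-rank (A, B) pair per adapter. The per-adapter delta B @ A is
# tiny compared to W, much like a differencing disk on top of a shared base image.
d, r = 1024, 8                       # hidden size and LoRA rank (example values)
W = np.random.randn(d, d) * 0.02     # shared base weights

adapters = {
    "legal-tone": (np.random.randn(r, d) * 0.01, np.random.randn(d, r) * 0.01),
    "yoda-style": (np.random.randn(r, d) * 0.01, np.random.randn(d, r) * 0.01),
}

def forward(x, adapter_name):
    A, B = adapters[adapter_name]
    return W @ x + B @ (A @ x)       # base output plus the low-rank adapter correction

y = forward(np.random.randn(d), "yoda-style")
```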
Interesting architecture.
Great session, Thank you
Appreciate the compliment, thank you!
"it's funny you know all these AI 'weights'. they're just basically numbers in a comma separated value file and that's our digital God, a CSV file." ~Elon Musk. 12/2023
Great session, Mark is as always the best❤
Thanks so much! Appreciate your taking the time to comment.
5 times the Azure supercomputer deployed each month? Is that a typo?
It's not. We just announced 30x have been added since November 2023
What he isn’t saying is for how long this rate goes on
@@Hashtag-Hashtagcucu For ever, as long as there are people thinking that it's a great idea to chat with a machine or have a robot-dog.
Insanity is the new norm.
@@MSFTMechanics Stargate and quantum computing, hurry up
It’s amazing… an impressive budget for buying chips from NVIDIA. But is it worth it? Curious to see if AI will take off or not.
What would it take to shrink a 175B model to run on a mobile phone? What are the limitations? The language used in the model? Could compression be used, or a language be developed that doesn't take up much space?
The closest correlate of size is the parameter count: Phi-3-mini has 3.8bn parameters and is roughly a 2.2GB file, which runs locally on the phone as demonstrated by Mark in the video. There are things that the larger models will do better in terms of reasoning and built-in knowledge, as Mark said. One example that we actually hit while planning this show is that the slightly larger Phi-3 models could phrase the cookie recipe in the writing style of Yoda from Star Wars. Because mini didn't have the pop culture references in its training set, we made the tone sarcasm instead.
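A rough rule of thumb (my own estimate, assuming roughly 4-bit quantized weights for on-device use): file size is about parameter count times bits per weight divided by 8, plus some overhead, which is why 3.8B parameters lands around the 2.2GB mark and a 175B model does not fit on a phone.

```python
# Back-of-the-envelope model size estimate (assumptions: ~4-bit quantized weights
# plus ~15% overhead for embeddings and metadata; not an official figure).
def approx_size_gb(params_billion, bits_per_weight=4, overhead=1.15):
    bytes_total = params_billion * 1e9 * bits_per_weight / 8 * overhead
    return bytes_total / 1e9

print(f"Phi-3-mini (3.8B): ~{approx_size_gb(3.8):.1f} GB")  # roughly ~2.2 GB
print(f"175B model:        ~{approx_size_gb(175):.0f} GB")  # far too large for a phone
```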
@@MSFTMechanics funny I’m watching Star Wars episode 1 right now on Apple TV+😂😂😂😂
Sarcasm is something that is very rich in style and differs between languages; it would be interesting to see how this is done in, say, Italian or French.
So they can now run the same LLM on different GPUs (Nvidia vs Maia vs AMD)?
Great video. I have a maybe annoying question: how can we know that cloud AI services are selling us what they say they are? For example, context length could easily be fudged.
@@test-zg4hv Yeah, I'm asking how you test it. Is it kind of like an error-checking algorithm?
You can stipulate that in code or using the Azure AI Studio, and you can test it. We cover that to some extent in this episode ua-cam.com/video/3hZorLy6JiA/v-deo.html
What’s that again..? You’re adding the capacity of the third most powerful supercomputer every month! 😮
Did I understand correctly: "Today, 6 months later, we deploy the equivalent of 5 of those supercomputers every month"!?!?
That's right. 30+ instances have been built since November 2023
So in November 2023 there was a supercomputer of ~14k H100s. Every month since then you have done an equivalent deployment of 5 of those clusters? That is hundreds of thousands of GPUs. I wonder how many of these are being used to train the next generation of OpenAI's model. 100k? 200k?
This is awesome
Glad you liked it and thank you!
Thanks, quite impressive!
Thanks for watching and commenting!
On the subject of cooling and power requirements, I've been saying for ages that the "waste heat" is only waste if you don't use it. Most electricity generators work by using heat to drive turbines. Instead of using burning fuel or nuclear reactions to create heat, we should use the heat generated by compute as the source for generating electricity. Pump and compress the heat from the cooling fluid into a reservoir, which a second heat exchanger uses to vaporise a second working fluid, which drives the turbines turning generators that feed electricity back to the GPU clusters. Recycle the power endlessly.
The physics problem is around concentrating energy / heat into one spot.
While the total heat energy is in the MW range, it’s distributed across millions of litres of fluid (water / air) which is only lightly heated and can’t be concentrated into a single place. Thermodynamics doesn’t let you simply add the heat of two working fluids to reach a higher temperature: you can’t use 2 litres of 50°C water to create 1 litre of 100°C water.
In a nutshell, we can’t take the distributed heat and convert it into the high-pressure, high-volume steam needed to run an electricity turbine.
The heat may be useful for an industrial process like drying cement. But that ends up being uneconomical, as power from the grid is much cheaper than recovered heat.
I wish this process worked. It would be amazing, but the physics doesn’t work out. :(
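To put a rough number on it (illustrative temperatures, not measured figures): even a perfect heat engine running between ~50°C coolant and ~20°C ambient can only convert a small fraction of that heat back into electricity.

```python
# Carnot limit for recovering low-grade data-centre heat (assumed temperatures:
# ~50°C coolant as the hot side, ~20°C ambient air as the cold side).
t_hot = 50 + 273.15   # K
t_cold = 20 + 273.15  # K
carnot_efficiency = 1 - t_cold / t_hot
print(f"Max theoretical efficiency: {carnot_efficiency:.1%}")  # ~9%, real engines far less
```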
@@jamieknight326 People always say it can't be done. I'm not convinced. Low-grade heat is raised when compressed by a heat pump. Using a multi-stage setup where a chain of pumps uses the increased temperature from the previous pump as the base to concentrate further, I see no reason why a final reservoir of compressed heat shouldn't be hot enough to drive a turbine and generate electricity. You can generate electricity with a Stirling engine and a cup of tea. A data centre converting 100MW of electricity into 99.9MW of heat should be able to provide 99.9MW of heat to a heat engine.
Wouldn't it be possible to create a distributed computer system like SETI or that Protein folding project, and use this computing power to train AI systems? Those projects used peoples personal computers when they had idle time.
It's called a botnet, and yeah, you can do that. These are purpose-built AI chips though; nobody has those at home because they are not for sale yet.
Also, from the video, inferencing requires high-bandwidth memory, not so much compute power, and a distributed setup would suffer greatly from latency.
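A quick back-of-the-envelope on why home internet connections struggle here (assumed numbers, purely illustrative): syncing gradients for even a mid-sized 7B model over a residential uplink would take many minutes per training step, versus a fraction of a second on the fast interconnect inside these clusters.

```python
# Why SETI-style training is hard (illustrative assumptions: 7B parameters,
# fp16 gradients, a 100 Mbit/s home uplink vs ~400 Gbit/s cluster interconnect).
params = 7e9
gradient_bytes = params * 2                 # fp16 = 2 bytes/parameter -> ~14 GB per sync

home_uplink_bytes_s = 100e6 / 8             # 100 Mbit/s in bytes/second
interconnect_bytes_s = 400e9 / 8            # 400 Gbit/s in bytes/second

print(f"Home link:    ~{gradient_bytes / home_uplink_bytes_s / 60:.0f} minutes per gradient sync")
print(f"Interconnect: ~{gradient_bytes / interconnect_bytes_s:.2f} seconds per gradient sync")
```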
Thanks.
You're welcome
cool stuff!
Glad Microsoft is making sure there is co-existence between all hardware manufacturers, otherwise AI hardware will become chaos.
I prefer the much more reliable/resilient iOS. Just replacing my trusty Air with a 2TB M3 iPad Pro.
iPhone?
Yes, iPhone 15 Pro Max in this case.
that's inspiring
Glad you liked it. Thanks for taking the time to comment.
Nice
This dude AI?
Mark has been trained on at least 175 billion parameters, but he isn't AI 🙂
13:38 You used the exact same joke a year ago with Mark
Yes, that was intentional, because Multi-LoRA would allow Neo to have hundreds or thousands of skills added simultaneously, not just the one like last year.
How much CO2 does this cost? EXACTLY how bad is it now, and EXACTLY HOW will you power this by 2030?
Check out the Microsoft sustainability site for details: www.microsoft.com/en-us/corporate-responsibility/sustainability-journey
Solid organic joke.