You tasty, candy-coated, bite-sized chocolate treat SHILL!
Really? People are STILL going on about this 'shill' bullshit?? Soooooooo stupid. Here's a crazy idea: if you think Greg is a shill, DON'T WATCH HIS VIDEOS... and don't waste your time leaving comments. No one wants to hear your crap. You are obviously a 'shill' for a chocolate company anyway, trying to make their product sound awesome so we buy that instead of other candy...
Whicheva one has the X is better!
Obviously! What else could it be?
iPhone X, Xbox One X, 1950x, 7900x. Guess that makes enough sense...
The Aussies are here!
Hardware Unboxed so does that mean X299 is better than Z270...
OS X, Windows X (10), Xbox One X, Kaby Lake-X
Greg, your sound-dampening material is turning yellow. You might consider replacing or painting it.
Salazar Studios: answering the right questions since the beginning.
Vernon Tauro Agree and that's why I have notifications set on his channel. I'm actually finding myself turning off many notifications from other channels, but his channel is not one of them.. not by a mile.
One thing that can be said is that with HBM2 you can use 4 stacks and get 32 GB at 1 TB/s right now - something you can't do with G5X or maybe even G6.
Won't be feasible by the time it's implemented.
More explanations please! I don't quite get it.
Vega has a 2048-bit bus with a max capacity of 16 GB. HBM2 can do 4 stacks and reach a memory bandwidth of 1 TB/s, which is insanely high.
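For reference, the back-of-the-envelope version of that math (a sketch assuming roughly 2 Gbps per pin and 1024 pins per HBM2 stack - actual cards ship a bit below that):

```python
# Rough HBM2 bandwidth: stacks x pins per stack x per-pin rate, converted to GB/s.
def hbm2_bandwidth_gbs(stacks, pins_per_stack=1024, gbps_per_pin=2.0):
    """Aggregate bandwidth in GB/s for a given number of HBM2 stacks."""
    return stacks * pins_per_stack * gbps_per_pin / 8  # /8 converts Gbit/s to GB/s

print(hbm2_bandwidth_gbs(2))  # Vega-style 2 stacks (2048-bit bus) -> ~512 GB/s
print(hbm2_bandwidth_gbs(4))  # 4 stacks (4096-bit bus)            -> ~1024 GB/s, i.e. ~1 TB/s
```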
Also, the original comment is forgetting something: IF they could get it to work, wouldn't Nvidia already have it on their Tesla V100 card instead of just 16 GB of HBM2? Yes, they already maxed out the speed of PCIe Gen 3.0 x16, so they made a new connector to go faster.
To lucas rem: a consumer card is not going to use the full throughput of PCIe x16 beyond Gen 2 at this time. They might in the future, but right now they max out around PCIe x6-x8 Gen 2.0. For compute, Nvidia made a new connector because those cards can use the full throughput of the slot. I have not seen any OEM boards that support NVLink, but I have seen pre-built servers with NVLink.
I think you got what compute cards need wrong. Compute cards need high throughput, and they also need lots of data to crunch to be effective, which makes 16 GB HBM2 cards the max for right now - but the PCIe bus isn't enough anymore, as compute cards have maxed out what that bus can do. For everything else it hasn't been maxed out yet, but for GPU compute (and, depending on the workload, some CPU compute cards) they have maxed it out.
For consumer cards we are a long way from maxing it out, nor do we need that much throughput (which is different from bandwidth and speed). But smaller process nodes mean more GPUs are made at once and lower power draw, which can lead to faster graphics for consumers.
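To put rough numbers on the slot-throughput point (a back-of-the-envelope sketch using commonly quoted per-lane rates and the published V100 NVLink 2.0 figure of ~25 GB/s per link per direction - these aren't figures from the video):

```python
# Approximate one-direction bandwidth, in GB/s, for PCIe and NVLink (rounded figures).
PCIE_GBPS_PER_LANE = {2.0: 0.5, 3.0: 0.985}  # per lane, after encoding overhead

def pcie_bandwidth(gen, lanes):
    """Approximate one-direction PCIe bandwidth in GB/s."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

print(pcie_bandwidth(2.0, 8))   # ~4 GB/s    - roughly where many consumer GPU workloads sit
print(pcie_bandwidth(3.0, 16))  # ~15.8 GB/s - the ceiling compute cards are bumping into
print(6 * 25)                   # 150 GB/s   - Tesla V100: 6 NVLink 2.0 links at ~25 GB/s each
```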
I really enjoy and appreciate these types of videos that you make. One question though: could you slow down your talking speed a little bit? You present a TON of information but it's really hard to let it soak in fully so I can understand your point properly when you're talking so fast that you are already two talking points later down the line while we're still stuck on your intro. Just a suggestion/request from a long-time fan. Thanks and have a good day!
Christopher Samuels I suggest rewatching or slowing the playback down manually in YouTube's settings. I can't force myself to talk slower - I've tried and I just sound weird 😂
Try listening to Science Studio videos and the Ben Shapiro show at 2x speed (no offense Greg, I talk really fast myself and I know the pain when people ask me to slow down). It helps force you to listen much faster than people should be able to. Eventually you will be able to interpret data as fast as an AI :P
Big thumbs up for more content like this. I like learning about the design aspects rather than just seeing benchmarks.
Greg - thank you! I am watching your older videos and they INDEED are very good - I am learning as I watch your reviews and travel videos as well. Awesome work, my friend. Oh, and happy early 300k sub roll count!
Bomberspell I appreciate it! Glad they've been helpful.
These are the kinda videos I subbed for, nice work man
Thanks for watching.
so close to 300k subs. congrats man!
You make my favorite informative videos... Don't tell techquickie...
Before the video starts: "Subscribe to our channel and press the like button."
4lbatross Mc please subscribe... but the video hasn't even finished yet...
Techquickie hasn't made a good video in quite some time.
If it was mope.io:
Greg: Clickstan
Linus: KOA
HBM 2 Vs. GDDR5X as fast as possible
Jk
Informative, organized, and entertaining. While other Techtubers are painting PC components black you're producing quality content. Keep up the great work Greg!
video sponsored by Eminem
Your videos are getting better.
Notifications "on" for this awesome channel
This has nothing to do with the video, but I need to put this up. *Staple click*
Missing: Affordable GPU that doesn't cost $50 or more over its selling price. If found, you will be a Lord and Savior to us all. It looks rectangular and has two to three fans. I really miss him :(
A very nice job Greg!
You are close to 300K subs!
I love how Greg is talking about GPUs when everyone else is talking about the iPhone X.
ClarkOnYouTube HBM is a far more interesting subject than the newly released iPhone.
@ExalyThor 11 Months ago 😒
No apple shills please
What's up Greg? Enjoyed that h7 review!
I'm glad you did! Much appreciated.
You can't not enjoy a Science Studio video, period.
Hi Studio of Skience
Hi Content of Quality Skash
Can you try overclocking a Core 2 Quad Q6600 to 3 GHz with tape?
GIN did it myself
yourself?
So am I XD
LOL. I did it with my Core 2 Quad Q8400. From 2.66 GHz to 3.2 GHz XD
I am using a Q6600 right now, but since I didn't have any other boards or the money for a decent one, it's in an Intel board. I was surprisingly able to OC my RAM to 800 MHz.
Nice video Salisar Studio ;)
I for one hope HBM(2) catches on. The potential for future improvements seems like it could harbour mind-blowing performance.
Always great content! thanks.
Another good science rooted video where I learn something! Thank you!
Farrell McGovern Thanks for watching, Farrell!
Thank you for making this video. I also like and love that I learn a lot too. Since I'm a retired guy working on my gaming rig, I'm up till late hours of the night just watching a lot of tech videos. I know I can only push my old DELL so far before getting a whole new motherboard, video card, and memory as well. My old DELL, "R2D2," right now is a Studio XPS 435MT: i7 920 CPU, AMD Radeon RX 560 video card, and a Corsair H110i Extreme liquid cooler. As always, MAY THE FORCE BE WITH YOU ALWAYS.
Someone should hire this guy for a movie where he has to explain how to defuse a bomb quickly to someone over the phone
Thanks for teaching us Greg..
This is one of those "I need to rewatch this" videos.
...and I'm ok with that.
All aboard the internet time machine! 1 year later and HBM2 now has unnecessary memory bandwidth 4Head
Didn't know any of this, good info, ty bud. :)
Loop Control Thanks for watching!
I feel like I just suffered through a math class.
You are watching a video from a YouTube channel named "Science Studio".
Kevin Rowe -- I always liked my math classes... now, English lit classes, don't get me started 😉
This is basic math!!
Why would anybody thumb-down a video like this?
Yavor Kapitanov -- A question for the ages and we'll never know.
You forgot to mention the HBM1 capacity limitation (usually 4 GB), which was the main reason they went to HBM2 even though the bandwidth was less.
That was a quality video. Thank you.
tatg69 Thanks for watching!
Thank you so much for making this video.
Bro those shades had me dying man.
Where did you get that vandalism wiring guide? I can't read the other ones.
This is how I used to be, many moons ago - researching everything. Now I spend time with my kids. :D Glad I have others to do the research for me. Keep up the good work.
Just called tech support hoping to get my RAM kit to run as advertised at 3200 MHz. This video sums up my entire experience. What the fuck...? Can you say that again please? Slowly. lol Good one, Greg.
HBM also seems to be favored by Nvidia for compute and AI, and by AMD for the HBCC function.
It is no secret that HBM2 and 3 (which will be a doubling of HBM2) are good. They are objectively better than GDDR5X and GDDR6 respectively. The problem is, it simply isn't needed for gaming, at least not right now. GDDR6 will be enough for Volta, and maybe even Newton after it, in gaming. HBM2 and 3 will see use with AMD's high end and Nvidia's supreme 100 series for sure, though.
What frequency are you running those M&Ms at? I can't get past 5 jiggahertz even with 2 volts.
I love when Windows takes 2.5 hours to update and then you realize the last update re-enabled GPU driver updates, so it decided to update my GPU drivers to a weird version that came out almost a year ago. Thanks Windows, really helping people there. I wish there was a checkbox you could check that says "I know what I'm fucking doing, don't fuck with anything."
So, what still confuses me is... it's supposed that HBM was needed because it is far more power efficient for the Radeon cards. However, the R9 Fury, and consequently the RX Vega cards, are famously power hungry compared to Maxwell and Pascal using GDDR5.
My Vega 56's HBM2 runs at 945 MHz, so I calculated my memory bandwidth the moment you showed the formula, only to find you did the exact same calculation a second later in the video... lol
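For anyone who wants to redo that math, the formula works out like this (a sketch assuming Vega 56's 2048-bit bus and HBM2's double data rate, i.e. two transfers per clock):

```python
# Memory bandwidth = memory clock x transfers per clock x bus width (in bytes).
mem_clock_hz = 945e6      # the overclocked HBM2 clock mentioned above
transfers_per_clock = 2   # HBM2 is double data rate
bus_width_bits = 2048     # Vega 56/64: two HBM2 stacks x 1024 bits

bandwidth_gbs = mem_clock_hz * transfers_per_clock * (bus_width_bits / 8) / 1e9
print(f"{bandwidth_gbs:.0f} GB/s")  # -> ~484 GB/s
```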
The mat that you have on your desk - does it explain how to wire a computer? If so, where can I get one? I hope you made it through the storm OK.
HBM can be huge for portable electronics! Both reduced size and power draw :3
Though it seems like it's not really being taken advantage of in current full-size GPUs.
Thank you for the video.
Strange - my understanding until now was that GDDR5 was a QDR version of GDDR4, which was just a tweaked version of DDR3 tuned for graphics performance. The example usually given for this is that GPU-Z reads GDDR5 clocks at 1/4 the rated speed.
As I understand it now, though, this comes from the doubled prefetch, and similarly, the doubling in GDDR5X's speed is at least in part due to its 16n prefetch. But which is more important - the QDR, or the extra prefetch? Or is one a consequence of the other?
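A rough way to see how the pieces multiply together (a sketch with nominal relationships rather than spec-sheet detail: GDDR5 ends up moving 4 bits per pin per reported memory clock, GDDR5X moves 8):

```python
# Effective per-pin data rate = reported memory clock x bits moved per clock.
# The bits-per-clock factor is where the QDR-style signalling and the larger prefetch show up.
def effective_gbps(reported_clock_mhz, bits_per_clock):
    return reported_clock_mhz * bits_per_clock / 1000  # Gbps per pin

print(effective_gbps(1750, 4))  # GDDR5 reported at 1750 MHz   -> 7 Gbps effective
print(effective_gbps(1251, 8))  # GDDR5X reported at ~1251 MHz -> ~10 Gbps effective
```

Roughly speaking, the wider prefetch is what lets the memory core supply data fast enough, and the faster signalling is what actually moves it across the pins - they go hand in hand rather than one simply being a consequence of the other.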
Is that a rubber mat on your desk? If so, where can I get one?
This guy's appearance reminds me of that "Lamborghini in my garage" guy in the Hollywood Hills.
1:53 - says "via an imposer" and makes the red arrow appear the very same second, as if he wanted to say "see, I fucked up *THIS* word right there."
Very complex and interesting
I love how there's such a huge bag of M&M's sitting there XD XD
LOL, I commented this on Twitter haha, thanks Greg!
Science Studio I'm currently building a Ryzen 5 1600 build and I'm struggling to find the right RAM speed. Which DDR4 RAM speed is best for me? My mobo is an ASRock AB350M Pro4. Thank you.
Between 2666 and 3200. I highly recommend 3000.
I got lucky - I just turned on XMP (the AMD one) and the memory worked at 2993. I think you are right, but if you can find 3000 for 1-5 dollars more, I'd buy 3000.
2:06 Linus Virus Strikes Again!
I love your vids!
Thanks for watching!
When are the prices for graphics cards and RAM going to drop to a more affordable level? Really want to build a new PC soon. :( Great video as always, Greg.
I'm here for those M&Ms
Have a Bowel Movement vs God Damn Donkey Rectums 5...FIGHT!!
I have noticed that the RX Vega cards generally do better at 4K than their Nvidia counterparts. Would this be due to the HBM2 having much higher memory bandwidth than the 1070's and 1080's GDDR5(X), since memory bandwidth is more important at 4K?
I remember when this channel was based around technology overall, not only computer hardware. How the times have changed, oh wow.
What cheat sheet is on the table? It says "wiring vandalism" - that's what I do sometimes, so it might come in handy.
I'm actually interested in the future of DDR4. Quad-channel DDR4 (256 bits) @ 4 GHz should have as much bandwidth as HBM1 (2048 bits @ 500 MHz), but with much lower latency. It could be useful for systems with unified memory.
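The back-of-the-envelope numbers for that comparison (a sketch where "4 GHz" is read as 4000 MT/s effective and HBM1 is treated as 500 MHz with DDR signalling, i.e. 1 Gbps per pin):

```python
def bandwidth_gbs(bus_width_bits, transfer_rate_mtps):
    """Peak bandwidth in GB/s from bus width and effective transfer rate."""
    return (bus_width_bits / 8) * transfer_rate_mtps / 1000

print(bandwidth_gbs(256, 4000))   # quad-channel DDR4-4000        -> 128 GB/s
print(bandwidth_gbs(2048, 1000))  # 2048-bit HBM1 @ 500 MHz (DDR) -> 256 GB/s
print(bandwidth_gbs(4096, 1000))  # full 4-stack HBM1 (Fury X)    -> 512 GB/s
```

So the numbers only line up if the 4 GHz figure is taken as the actual clock (8000 MT/s effective), but the latency point still stands.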
Good vid m8y
Thanks for blocking me on twitter, appreciate the love and support.
Why were the M&M's one of your props? I was wondering if you were gonna use them as a learning tool: "melts in your hand, not in your GPU?"... lol
Please, does someone know what model his monitor is? I want to buy a monitor and this one looks pretty good, so I want to see its specs. Thank you!
I love how AMD jumps onto the hot coals so we can walk over them; it's very nice of them.
Seriously though, paying the first-mover's cost for a technology definitely sucks. And I'm glad AMD is doing it, because it can ultimately benefit, if not revolutionize, graphics processing.
This is not fair. The video should be called "HBM 2 VS GDDR5X" OR SOMETHING LIKE THAT.🤔🤔
Wait, you mean you don't just know all these numbers and terms off the top of your head? Unsub.
jake winchester can you remember all this?
@@eratous4477 he is being sarcastic
I don't doubt the wisdom of "more is better", but what sort of work benefits most from this increase in bandwidth?
Praise Gamers Nexus
More guest appearances by M&Ms, please.
This was pretty hard to follow when I just want to know the pros and cons of HBM vs. GDDR5... simplified... as though I were new to this...
That's the biggest pack of M&M's I've ever seen.
good job
Is that a macbook?
So, why is Vega so power hungry if HBM2 is more efficient with power?
You missed the part that the Tesla V100 also uses HBM2, but from the white paper on it, it seems like HBM2 is great for compute and not so much for gaming, because they can put as many memory interfaces on the chip as they need, not just up to two as in their other chip designs. HBM2 is also slower clock-wise but has a wider bus, which is great for huge, sequentially written chunks of data. GDDR is better with data that isn't written one piece right after the piece before it, which makes it better for games: games aren't built to load one set of data into graphics RAM and have it sit there for the entire game. Graphics cards would need 20-30 GB just to load the entirety of a game's graphics assets up front and never load again, and games just aren't made like that - they load what is needed into system and graphics RAM and then overwrite it once it's no longer needed.
HBM2 is faster for compute, but right now it is also slower for games - not really noticeably slower, but enough that it is not worth the cost.
The Titan V also has HBM2
Can you explain how the hell Vega has so much higher power consumption? Seems like clock speeds (Nvidia) won over bandwidth, performance-wise.
I think it would have been a lot higher if they had used GDDR5.
After these Coffee Lake leaks I was wondering, which would provide better performance; 6C/6T or 4C/8T? Also, would AMD vs Intel make a difference? Could be an interesting subject for a video with six cores now becoming mainstream.
You need to slow down when you speak, especially with this technical stuff. I had to play the video at 0.75 speed to catch what you were even saying. Otherwise it’s an excellent video with great information.
Thanks for the feedback!
@@GregSalazar hey cool, I have never had a content creator reply to me. Nice! I hope you didn't take what I said as rude, because that is NOT what I meant.
There's nothing between the two memory types when it comes to gaming performance, that's for sure.
Nice video - a little bit too fast, but I can just watch it again.
HBM is so expensive it will not go mainstream until the production cost comes down massively.
I finally kind of get it, thanks.
Hi people! I've done extensive research on PC parts and I was hoping for some feedback on building my first desktop PC. Here are the two lists I made so far.
ca.pcpartpicker.com/list/YV3BCy
ca.pcpartpicker.com/list/2bsPCy
(I will not be getting an AIO.)
Should I go with the Intel build or the AMD? (Currently my choice is strongly in favour of the AMD build, but it is still undecided.)
Should I make any changes to any parts?
Please do a video testing whether Samsung RAPID mode improves gaming performance when running GeForce recording/Instant Replay!!
I'm not sure what button to hit, I hate everything about life... What do I do?!?!
This was the first time I've seen a video that said no views.
And what about HBM2 vs GDDR6?
hbm2 = timeless
Volta will have GDDR6 because it will be more cost-effective than HBM2 memory.
So HBM is better than GDDR5?
I think HBM, in any of its generations, would be better suited to high-performance portable devices.
I'm still waiting on Nvidia's implementation of HBM on their gaming cards; really curious how it will fare against AMD's product stack.
Meledyne Volta will probably use GDDR6 instead.
If you don't mind being ask, what was your major in college? Just wondering how you know all this.
He majored in petroleum engineering, I think.
Hey man, I don't mean to come off as a first-world-problems complainer... but I do own a 4K monitor and TV, and you are literally one of my favorite YouTubers... Could you please release more videos in 4K? If not, could you explain why? You have the means to do so...
Brayden Turner Nope. I only release a few in 4K for the tech porn, but the vast majority prefer 1080/60.
Probably render times, editing performance, and storage. It is much harder to edit and render 4K than 1080/60, and it isn't really justifiable since most people use 1080p. The trade-offs are only worth it if you have a beast of a computer and a ton of storage.