The Talos workstation is the latest POWER9 attempt for security-oriented users, and its firmware is (or is supposed to be) open source as well. Phoronix has also been doing some comparisons between POWER9/Xeon/Threadripper. RISC-V is gaining territory too, as an open-source alternative to ARM (well, sometimes the firmware provided by the hardware vendor is not open source). MIPS is also widely used in some TV boxes and routers. But at least for closed-source (or call it serious) gaming, x86/x86_64 is still the only option. As for the "gaming router" question, I personally always choose any decent router that supports OpenWRT, or even a low-power x86-based soft router.
"SGI... what was it? Silicon Graphics, I think? Uhh... Silicon Graphics, Incorporated maybe was it?" O_O *sobs in a corner* #imnotold #kidsthesedays :D
Yes. I used to work on an SGI Indy. Quite a nice machine back in 1993. IRIX was a pain though. I think there was a lot more wrong with SGI than just their focus.
#AskGN With transistor sizes on silicon nearly hitting their limit (3-5 nm), when do you think we'll see different materials like graphene or something similar being used in future processors if we continue to shrink the transistors?
12:43 Sometimes when you overclock your VRAM, the memory timings (which you cannot see or control) will increase (loosen) a lot to compensate for the higher clock speeds; maybe this is why his VRAM clocks so high...
#AskGN Is it safe to use nail polish on PC parts? A motherboard in my case... I don't like the BIOS LED, and there's no option to turn it off, so I'm going to paint over it with nail polish. Almost all nail polishes are non-conductive as far as I know, but I even bought a multimeter for this and I'll test it before using it. Other than conductivity and causing shorts, is there any risk involved? Can it cause any damage over time?
Fun fact: the high-end 3dfx cards needed 8-layer PCBs. If 3dfx had sold off their PCB manufacturing operation, they might have survived long enough to release their Spectre cards (they were only a few months away from being ready for release).
Steve - just to correct you - Matrox is still alive and kicking. Nearly all server graphics (the ones grouped under BMC, iDRAC and the like) use ASPEED chips, which are essentially Matrox G200, G450 or G550 cores; they have a VGA output and stream video to the internal KVM-over-IP. I even happen to have a Matrox Millennium in PCIe format (it has PCIe x1 and dual DVI output). As for the others, VIA had its Unichrome series, which survived in one form or another as a legacy product. There are efforts to bring ARM's Mali GPU to a PCIe card, but that's just a proof of concept for now. Beyond that, we have a complete lack of anything more powerful than a modern Raspberry Pi :(
In terms of cooling, many cases have large empty areas in them, particularly on the side of the case opposite the motherboard. I have to imagine that air is entering the case and traveling through these empty areas to the back, where it exits without coming into proximity with any hot components. Would it benefit cooling at all if you were to fill these large empty areas with an inert substance like Styrofoam, to essentially force all (or at least more) of the air passing through the case into proximity with the components of your computer? I believe it would have the effect of speeding up airflow inside the case, since you are restricting the space that the air must pass through.
Video idea/request for @gamersnexus. The video is as simple as follows: where do you get the biggest temperature improvements when increasing radiator size, and when do they stop improving? Take a hot chip like the 7900X (plus VRM cooling) and a hot GPU like the 1080 Ti and overclock them (so that the hardware/heat produced isn't your limiting factor), take a single loop with a 120mm radiator and see what temps it reaches and how long they take to get there in a controlled test environment, then repeat with a 240mm, 280mm, 360mm, 480mm, 560mm radiator etc. and see the results. I see recommendations from around 2012 saying "a 120mm radiator per cooled component is good enough, so a 240mm is perfect for most systems", but this information is very old; we didn't have 10+ core consumer CPUs with insane VRMs like X299, or the crazy GPUs we have now.
If you also think this would make a great video, please help me get Steve to see this.
The Crusoe processor by Transmeta was x86-compatible; it used software to virtualize/translate the instructions. en.wikipedia.org/wiki/Transmeta_Crusoe That's late-90s tech for ya.
I used to play games on a Tandy 1000 AX, which had a TGA (Tandy Graphics Adapter). All my PC friends were jealous of my beautiful 16 color display, while they were stuck with 4. I was kinda jealous of my friend's Amiga graphics though.
IIRC, there's actually a third company with an x86 (and I think also an AMD64) license. It's a Chinese company named Zhaoxin, which got their license from VIA, and they announced some new x86 CPUs in December 2017 or January 2018. As far as I remember, they weren't particularly competitive and were manufactured on 28nm, but it's a step in the right direction.
@askGN: Hey Steve! Thanks for the great content this week! Hope this question has not been asked before. Question: Would eGPU boxes make more sense when paired with mid/low-end GPUs rather than high-end GPUs? Since the issue is losing performance due to bandwidth over the cable, wouldn't a mid-tier GPU (1060 / 480) lose LESS performance than a 1080 Ti and thus be efficient enough to game on? (I'm mainly looking at this from the point of view of buying laptops in the future with no onboard GPU but a powerful CPU, then just having the eGPU enclosure with a dedicated and upgradeable GPU inside.)
13:16 My MSI GTX 960 4GB version is at +550 in Afterburner, which is probably +1100 in actual speed, so from 7 GHz to 8.1 GHz... I dunno if his Afterburner registers the actual or the doubled rate... if he put in +1000 and Afterburner applied +1000 to the actual speed rather than the doubled speed, then yeah, maybe it's possible.
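To make the arithmetic in the comment above easier to follow, here is a minimal sketch. The stock clock is the GTX 960's published figure; the assumption that Afterburner's slider maps to the doubled (DDR) clock rather than the effective rate is taken from the comment itself, not verified.

```python
# Hedged sketch of the GDDR5 clock arithmetic discussed above.
# Assumption (from the comment, not verified): Afterburner's +550 offset is
# applied to the doubled (DDR) clock, so it adds 2x that to the effective rate.
base_command_clock_mhz = 1753                       # stock GTX 960 memory clock
base_effective_mts = base_command_clock_mhz * 4     # GDDR5 moves 4 bits/clock/pin -> ~7012 MT/s ("7 GHz")

afterburner_offset = 550
effective_gain_mts = afterburner_offset * 2         # assumed 2x mapping, per the comment

print(f"{base_effective_mts} MT/s -> {base_effective_mts + effective_gain_mts} MT/s")
# ~7012 -> ~8112, i.e. roughly the "7 GHz to 8.1 GHz" figure mentioned above
```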
Please do a review of the EVGA DG-87 case. I got one today and its cooling is awesome; the case stays at ambient temps even under load, and my EVGA 1070 Ti, running silent, doesn't go over 52C at 21.5C ambient.
VIA was here with x86 CPUs and did not die. They specialise in industrial CPUs these days, but 15-20 years ago a VIA CPU (soldered onto a VIA motherboard) was a viable option.
ARM was originally a desktop product. The Acorn Archimedes and RISC PC lines all used ARM CPUs, and they performed really well for their time. The Acorn versions of 3D games spanked the Amiga and Atari ST, both using 68k-series CPUs. The ARM CPU was also used in the 3DO, though I'm not sure that's exactly a glowing endorsement. #askGN Is there a reason that an ARM licensee (or even ARM themselves) couldn't potentially beef up ARM to make it into a competitive CPU architecture once more? With Windows 10 now supporting ARM, it seems like the time might be right.
For other 3rd parties back in the day: Digital Equipment Corporation, Centaur Technology (IDT), Transmeta and Fujitsu, to name a few off the top of my head, a lot of which were bought out and then phased out. Centaur and Cyrix were bought by VIA; the Cyrix III is actually not a Cyrix design at all but a Centaur one, and Centaur designed VIA's chips up until the Nano.
I am looking forward to disk I/O speeding up with PCIe 4.0 and 5.0. It almost justifies a Threadripper board with an 8-core CPU today just to get extra PCIe lanes for disk I/O.
Someone already mentioned the POWER9 workstations, which are awesome, but IBM is also working on creating POWER processors for embedded platforms, which should really help with bringing them to the mainstream.
I'm surprised Qualcomm hasn't been mentioned in the first topic. Though they are currently focused on mobile SoCs, they could very well expand to the desktop market. Snapdragon and Adreno already have a big market share.
I think you're thinking about SPACERS, not STATOR VANES, on the stator part of the question. The idea of stators, like on jet engines or on the back of the Noctua F12, is to turn the rotational energy in the air into static pressure. You see, the air coming out of the fan has a lot of rotational energy to it, so the stators stop the rotation and that energy is turned into static pressure.
#askgn An ATX motherboard has an x16 slot, underneath that an x8, and underneath that an x4 PCIe slot, right? If so, isn't the performance of the 2nd GPU restricted in SLI?
Hello Steve, hope you are doing well. I know you mostly cover mid-tower cases (the DBP 900 and the HAF X revisited are the only exceptions?), but do you plan on reviewing some full-tower cases as well? I made the mistake of buying the 800D 6 years ago, and now, since you educated me, I ripped the drive cage apart, put 2 140mm fans in the front and glued Silverstone air filters onto the front panel. Now I have much better temperatures, but that means all my 7 drives are in the air held by cables or sitting at the bottom of the case. I am thinking it's time for me to upgrade to a nice new 2018 case, but I really want a full tower. Which one would you suggest, except the HAF since it just looks SO bad... I read reviews around, but I don't really trust anyone except you on the matter. Even just your opinion from seeing a picture of a case is more valuable to me than some "reviews" out there. Thanks in advance.
I feel your pain. I have not seen a good case with that many drive cages in a while; I'm typing this on a comp in a case from the 90s era myself. You either end up with good airflow and a lack of drive cages, or enough drive slots and bad airflow. I constantly find myself reminding others that a NAS only moves the drive problem to another computer case (aka you still need a case for the drives with good airflow), and they do not make 40-terabyte solid state drives yet that I can afford, lol. I sometimes get the feeling that Phanteks copied the specs of my workstation for the revised Evolv X, with 10 rust-disk bays and over 4 SSDs. However, unless Steve says it's as good as the PM01 for cooling, it will never come close to the 90s-era case I have. There is just no market for workstations with good airflow these days; I guess it's "Gaming"-only stuff of late (all looks, with terrible ease of building and horrendous airflow). As soon as you need more than 3 rust disks, forget it. I'm actually thinking about literally taping a Silverstone CS381 to a Silverstone PM01 so I can have good cooling and 10+ drives, lol.
I was looking at that today, and it really seems like the only solution if there is really no decent full tower. I've always had full towers and it would feel like a downgrade for me (yes, I am weird). Having the space, the external size, the options is just too appealing to me. If there is no decent full tower, I will probably stick with my modded 800D. I overpaid for it, so I'd better use it; it's not like it will ever ...break.
If you are shopping for a mid tower, there are so many good options now (I am pretty sure Steve is responsible for that), but full towers? I like some of them, but I don't want to risk having to mod again. I don't mind spending extra for a full tower, since it's my preference, but I demand it provide me a nice clean build with good airflow, which is not guaranteed. I was hoping I could see some numbers and comparisons, and the only guy who does this is Steve. I really can't understand why there is not a market for full towers; it's not like you could have space for a mid tower but not 20cm of extra height for a full tower. Price is a very good argument, but full towers are not THAT much more expensive. Especially for enthusiasts tampering with their builds often, having that space matters every time. So for gaming, watercooling, workstations with lots of drives etc., full towers seem the better choice overall, and yet no one seems to prefer them.
I've been pointed to Caselabs and Lian Li a lot. Caselabs is great if you're doing custom water loops; if what you need is good air cooling with lots of drives, the cases that accommodate more than 3 drives are a bit lacking in layout. It's more like they made a huge box and tried to space out the least amount of stuff to fill the volume. And I'm not super thrilled by the airflow paths anyway; many have fans that make air completely avoid anything that needs airflow for cooling. Also, many of their better workstation cases have been discontinued, so not the best option, especially for the price, as pointed out. Lian Li, aside from the O11 Der8auer case, seriously lacks airflow. It's like they assumed the computer is only going to burn 40 watts like an old 90s-era 486, and the cases with 9 PCI slots for WTX (not to be confused with EATX) have completely sealed-off hard drive sections that have nothing for airflow. (NEXT! lol) Oh, and there is Supermicro. They have a really nice workstation case. It lacks the number of drive bays I would need; however, it does have 8 bays, and room for an EATX motherboard, with fantastic cooling. The catch is you've got to pay over a grand for the case, then gut the system it came with to put your stuff in it. Good case, terrible price value if all you want is the case, lol. www.supermicro.com/products/system/4U/7048/SYS-7048GR-TR.cfm I'm not in need right now; however, I am looking at what there is as possible options, and am not impressed with anything of late.
Question: I've heard a few tech tubers mention when delidding that you don't need to use sealant when reassembling. Is this true? I believe one of your vids says to use it, but then you've also mentioned better thermals without it. What are the pros and cons? And do you have any tips or tricks for putting it all back together when the CPU isn't stuck together? Love the content 👍
I put a Nepton 280L AIO in my build about 2 1/2 years ago. I have an overclocked, delidded, 6700K with liquid metal running 24/7, but only occasionally running at anything over idle on the CPU. Should I be concerned about AIO longevity? When should I be replacing it? Should I be looking into opening it up, flushing it and replacing the fluid? I'm not concerned about the fans, I replaced them long ago. What I am concerned about is all the critical stuff I can't see inside the loop. Thanks!
I'm curious to know if the difference in efficiency between power supplies can cause a significant difference in PC temperatures, at least in some situations, or is the difference in heat released negligible?
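For anyone who wants to put rough numbers on that question, here is a minimal sketch of the arithmetic; the load and efficiency figures are made up for illustration, and keep in mind that most of a PSU's own heat is exhausted straight out the back rather than into the case airflow.

```python
# Rough sketch: the heat generated inside the PSU itself is the wall draw
# minus the DC power it actually delivers to the components.
def psu_heat_watts(dc_load_w: float, efficiency: float) -> float:
    wall_draw_w = dc_load_w / efficiency    # power pulled from the outlet
    return wall_draw_w - dc_load_w          # the difference is dissipated as heat

# Illustrative efficiencies, roughly Bronze / Gold / Titanium territory
for eff in (0.82, 0.90, 0.94):
    print(f"{eff:.0%} efficiency, 400 W DC load -> {psu_heat_watts(400, eff):.0f} W of heat in the PSU")
# Roughly 88 W vs 44 W vs 26 W: measurable at the PSU, but it doesn't change
# how much heat the CPU/GPU themselves dump into the case.
```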
How difficult will it be for new players in either the GPU or CPU market when taking copyright into account? I.e., with all the tech already copyrighted by the players in the game, how difficult will it be to design "new" ways of making graphics cards and CPUs without being allowed to use current tech/components?
I have a question about Skylake Windows 7 support. It was supposed to end in July 2017 and was extended to July 2018. I built my mom a Windows 7 machine using an i5-6400 and she declined the free upgrade. I'm sure plenty of 6700K gamers might have questions too. Do we NEED to upgrade to Windows 10 ASAP, or is it just another security vulnerability to worry about with Intel? Do you think they will push microcode to disable compatibility, to make Skylake more in line with Kaby and Coffee Lake?
Agreed on memory. I have my 1080ti at +575. I can go over it, but going over reduces FPS, and going below this again reduces frames, so +575 has been my optimal number.
Oracle still makes CPUs, like the SPARC M7: 32 cores, 256 threads and up to 4.133 GHz, and they encrypt the data on the CPU as well. It's built for servers, of course, though.
ARM is definitely a major competitor to Intel, and with Microsoft massively pushing to natively support ARM on Windows, we could see some loss of x86. At least in desktop use cases like web browsing, spreadsheets and video playback, ARM runs on par with low-end x86 stuff, as long as the app being used has ARM support.
You forgot to mention another 3rd-party GPU company that was part of the 3D accelerator wars of the 1990s: S3 Graphics. It's now owned by HTC, which is only focusing on the mobile platform.
#AskGN We have been using x86-64 for a very long time: x86 from Intel, x86-64 from AMD (amd64). Why is there no new architecture or instruction set? I mean something like x86-128, a 128-bit processor for the desktop. Isn't it time for a new one that can make 5 GHz a base clock instead of an OC or turbo clock?
Elite Gamer Edition Modmat when? Speaking of other manufacturers, do you guys have any info on what VIA is doing with Zhaoxin in China? They're supposedly developing low-TDP x86 SoCs, and their ZX-F line (7 nm), planned for 2019, is supposed to be on par with Zen+. I was hoping to see more of these China-made CPUs. I can find a few articles on the new SKUs they were planning but no shipped products with them, and there's confusion over VIA's x86 license expiring by the end of 2018. Supposedly some Lenovo PCs use ZX-D processors, but I haven't found any.
Wow.. For GPUs we used to have PowerVR, Rendition, 3DFx, Matrox, SiS, S3, Tseng Labs.... Euhhh.. More I can't think of right now. Cyrix, NexGen and Transmeta on the CPU front.
To continue the question about "gaming routers", what about Killer networking? Aside from the software side of things, is there really an advantage to it?
I made a report about a year ago on the GPU industry. I was surprised reviewing my own research. What I think is that we will probably NEVER see another 3rd-party GPU manufacturer in the market. Let me give a few reasons, which were more than enough for me. 1. Very successful graphics companies get swallowed by Nvidia, such as 3dfx Interactive. 2. There is basically one "king" in this industry (Nvidia), and he doesn't care about a thing. 3. The barrier to entry in this industry is nuclear-strike-proof (customer reliance, trust, future service, brand image, fear of being swallowed by the "king", etc.). 4. Steve is right: it's gonna cost BILLIONS of dollars just to open a plant.
I use an 8700K in my workstation and it's still a heat pig even with water, and a 9700K in my big rig, and I'm happy it runs at 5.4 GHz at 1.295 V and in the 20s C.
On the 1000 MHz memory thing: just wanted to point out, it appears EVGA might have replaced some FTW1 1080 cards with the memory from the FTW2 when they were sent in for the heat issues. They released a BIOS update for FTW2 cards that brings the memory speed to 11 GHz (5500x2). I have a FTW2, and I'm then able to overclock mine to +400, bringing it to 5900 MHz, or 11.8 GHz. A friend of mine has a FTW1 that was sent in for a replacement with the heatsink pads and whatnot, and was able to overclock his memory to nearly the same as my FTW2, yet EVGA did not provide a BIOS update for FTW1 or SC1 cards, presumably because of memory differences. Completely stable in games, no artifacts, no issues in more than 6 months.
"Gaming" routers are often BS but it is beneficial if your router has a QOS setting that can handle your ISPs bandwidth. Quality QOS settings can avoid network ping spikes when someone else on the network starts downloading a huge file, backing up a phone, etc.
In response to the first #AskGN question: Qualcomm is a major mobile CPU manufacturer, along with several other products. A huge portion of Android phones, and now even a few Chromebooks, have Snapdragon processors. Though it is unlikely for them to come compete in the desktop area of the market, their server processors, along with their AI technology, are pretty incredible nonetheless.
What about RISC-V? It seems set to become a new big player in the PC space in a few years (ARM already is, but not for main rigs yet). There is a nice video by Linus (LTT); it's a nice introduction, just search for "RISC-V LTT".
Ask GN 90 is here! ua-cam.com/video/KzSIfxHppPY/v-deo.html
Patreon episode is also live: patreon.com/gamersnexus
Hi Steve
Just recently I saw a new video posted by Puget Systems about using Intel HD Graphics for hardware-accelerated encoding in Adobe Premiere CC.
They showed a notable difference in rendering quality. The test consisted of 2 render passes with the same settings; the one difference, of course, was using either software encoding or hardware-accelerated encoding with Intel graphics. Files generated with hardware encoding showed a significant file size reduction on the final render compared to software encoding, which seems strange. Upon further investigation, it turns out the hardware encode had much lower quality overall.
Although rendering speed was much faster on hardware, I think the degradation was very visible.
This seems to be in line with other tasks where hardware encoders are used, for example streaming with OBS.
For example, software OBS encoding seems to provide OK results at around a 3000 kbps bitrate, while with Quick Sync, NVENC or AMD's hardware encoder, 3000 kbps seems unacceptable to me.
I think this topic is worth a deeper dive. It could be interesting: different scenes might also pose different challenges for those encoders, such as city spaces vs. high-vegetation areas, and YouTube compression is especially bad with vegetation.
Source: Puget Systems - H.264 Hardware Acceleration in Adobe Media Encoder - Good or Bad?
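A minimal sketch of how one could reproduce this kind of software-vs-hardware comparison outside of Premiere, assuming an ffmpeg build with libx264 and the Quick Sync (h264_qsv) encoder available; the file name and bitrate are placeholders, and this is not the Puget Systems methodology, just a quick way to get file sizes plus an SSIM score for each encode.

```python
# Encode the same clip with a software and a hardware H.264 encoder at the
# same target bitrate, then score each result against the source with SSIM.
import subprocess

SOURCE = "source_clip.mov"   # placeholder test clip
BITRATE = "3000k"            # the streaming-like bitrate mentioned above

encoders = {
    "software_x264": ["-c:v", "libx264", "-preset", "medium"],
    "hardware_qsv":  ["-c:v", "h264_qsv"],          # Intel Quick Sync
}

for name, codec_args in encoders.items():
    out = f"{name}.mp4"
    # Same source, same bitrate target, different encoder.
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *codec_args, "-b:v", BITRATE, out], check=True)
    # ffmpeg's ssim filter prints an overall score at the end of the run.
    subprocess.run(["ffmpeg", "-i", out, "-i", SOURCE, "-lavfi", "ssim", "-f", "null", "-"], check=True)
```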
Via still sells Cyrix and S3 Graphics based products. They are the ones that absorbed those 2 companies.
I have seen almost everything worth watching on UA-cam. Thanks for delivering consistent content that's relevant, informative and entertaining.
Originally GN stood out to me because you narrate all your test results. Keep it up Steve, you are the Tech Jesus.
I think some of the confusion is from what I was doing, combining a "Transition Duct" with a set of Stator vanes, to both help turn rotation into exhaust velocity/pressure (The stator vanes) and allow the air to fill in behind the fan motor (Transition Duct) for a radiator. It is incredibly complex stuff with many easy to mix up terms, so not bad at all explaining the rest Steve.
I've only been aware that Blades are not Vanes for about a year and change now thanks to AgentJayz's jet engine vids, and I still mix them up at times. lol.
And yeah, I've seen no diff at all other than how much of the rad gets air; it really isn't changing the CFM of the fan enough to make a big diff (at most 1C, tho I don't trust the CLC's software that much, lol).
Great vid Steve and Crew. B)
Hey Steve, I got one of those 5.2 at 1.35 8700Ks as well! The only thing I would worry about is LLC spiking after a high load has ended. I wouldn't go over 1.35 for everyday use unless you got really high gains for it, and with 5.3 at 1.35 you are not going to get much (if anything) more. Of course, you can save some power by going 5.2 at 1.27-1.3 or so and not really see any real hit in perf. I run mine at 5.1, 1.26 just because of the power draw, and I don't really need any more.
Steve, hearing you talk about SGI made me go into my basement and dust off my Silicon Graphics Indy :-) The boot sound made me so happy I had to run some demos on it..
ARM is the third party for CPUs at the moment. I have hope for RISC-V but don't know if it will see mainstream consumer adoption any time soon.
username Great point! ARM probably scares Intel more than anyone else.
ARM is a bit different. They only license their architecture and don't build their own stuff.
ARM is geared for small, low-watt applications though, so as a PC equivalent it has a few limitations due to the low-watt design. As a low-watt CPU, ARM is very good at what it was designed to do (from spacecraft flight control computers to automotive and avionics systems), places where size, weight, and watts are a premium constraint.
Iftar Miftar Perhaps I should say Qualcomm (which works with Microsoft to have Windows running on their hardware) or Samsung (which is the largest silicon manufacturer).
Also, ARM is used in high-end networking gear and even high-end servers, so it's not just for the low end anymore. Plus there are the persistent rumors of Apple dropping Intel in an upcoming MacBook in favor of an ARM CPU.
3Dfx was the company, Voodoo was their GPU product line ;)
#AskGN
I’ve heard different things from different sources, but does heat degrade CPUs more than the pure voltage put into the CPU? Phrased differently: can one safely put higher voltages into the CPU as long as the temperature is low, or will the high voltage degrade the CPU regardless of low temperatures?
Great question for next time!
Gamers Nexus thanks!
It’s actually really bugging me, since I can easily keep my 6700k cool (50-60c) at 1.4vCore and above, but I have this nagging fear about raising the voltage above 1.4 🙈
I'm particularly curious about this for its potential relevance to Core Performance Boost on 2nd-gen Ryzen (and similar features/behavior building off this in future chips), since as the load temperature goes down (let's say from an absurd overkill custom loop with multiple 360/420 rads, a D5 pump and high-RPM fans), those chips spend way more time at their peak/boost voltages (1.5-1.6), which would be well into degradation territory if those voltages were 24/7. For those unaware, the spec AMD explicitly stated on 1st-gen Ryzen for 24/7 vcore before degradation can occur was 1.425.
As it happens... I'm running my 2700X under one of those aforementioned overkill loops and am pretty damned sure I already degraded it from CPB alone. Started getting bluescreens, tried different BIOS options to troubleshoot; the solution ended up being adding a dynamic vcore offset, which worked for a little while, then it came back... rinse and repeat a few times before I realized it was looking an awful lot like degradation (which I had previously never seen first-hand).
Buildzoid actually did a video about that.
He killed a CPU with voltage (to prove his point) while still remaining in the safe temperature region.
Here is the video: ua-cam.com/video/bXOu3hseXRg/v-deo.html
Melvin Klein, thanks, I’ll go watch that! But I would still love an answer from Steve too 🙂
Motorola was another CPU manufacturer in the past - the 68k series of CPUs - for example the 68000, 68010 / 020, etc.
John Paul Bacon Didn’t these chips go into the first Apple computers?
Motorola was a chip manufacturer for Apple all the way up until 2006, when they did the Intel switch. Their processor division was spun off as Freescale.
John Paul Bacon Motorola was making IBM PowerPC architecture-based CPUs though; Apple would rotate between proper IBM chips and Motorola-licensed ones.
Matthew McKellar they were a manufacturer for many customers of IBM PowerPC processors
Fee Nicks True, but it is worth pointing out that Motorola specialised in making lower-power embedded application CPUs, vs the IBM POWER server CPUs. The POWER5 was perfectly fine for server and workstation use, but it couldn't be scaled down to the low-power lineup that was the Motorola PowerPC lineup.
If I'm correct, the PowerPC trademark is under the ownership of NXP now that they bought Freescale. NXP still makes PowerPC parts, but they are built within the same TDP envelope as the G4 series of parts and tend to be used in telecom and routing equipment.
G5 was an Apple trademark, and PowerPC was Motorola's, so that would explain why they exclusively switched to using G5 in their branding after they dropped Motorola as a supplier.
Oracle is also making CPUs based on the SPARC V9 architecture in their SPARC T-Series servers. Those are used for mass database management, usually found in air traffic control or cloud services. The processor in your phone is most likely ARM architecture. There was also a small boom in Amiga PCs in the early 2000s that died off fairly quickly.
The only way I can recognise Raja Koduri is dropping the RX 480 in the interview with LTT.
Not to be pedantic, but VIA's GPUs are what was once S3 Graphics. And Voodoo was a card made by 3dfx; they were bought by Nvidia. Matrox still exists, but they've recently started using AMD GPUs in some of their cards.
ManWithBeard1990 Yes. They're mostly used just to get extra display inputs nowadays.
He also forgot PowerVR. They had some great GPU tech and made some decent PC GPUs in the 90s. They survived the great PC GPU manufacturer culling of the late 90s/early 2000s, during which time they designed the Sega Dreamcast's GPU. They made Apple iPhone GPUs until this year or last year (can't remember). Hopefully they can continue to survive and still have a meaningful presence somewhere.
I believe they were also used by Intel as the iGPU for some time. I think this is why some of the Atom CPUs can't run one of the big Windows 10 updates: they never got the iGPU drivers running on it, so a lot of low-powered systems suddenly couldn't update their OS. Microsoft changed the graphics stack slightly, which required newer drivers.
Yeah, the Intel GMA integrated graphics were based on PowerVR. I forgot about them since I didn't know they made add-in cards.
I worked for Cyrix in the 90s. Fun times.
VIA (Formerly Cyrix) is still trying too. They have some low power x86 chips up to quad core. They are made on a 40nm process. They still have the x86 license from the Cyrix days.
First of all I love your channel Steve, you go really in depth and know your stuff! Anyway my question for next time is, what hair products do you use to take care of your glorious hair?
On the subject of GPU memory overclocking: you can check GPU memory errors with HWiNFO. The error count doesn't have to be 0 if you're just gaming; encoding/transcoding with the GPU is usually more sensitive to these errors.
Speaking of former GPU manufacturers...Diamond used to make their own unique cards. I had a few Number Nine cards too. A bunch of manufacturers merged into S3.
Have you seen RISC-V? Do you think it might be relevant in the next few years? It's got 32-bit and 64-bit ISA support, with 128-bit coming. It's a processor architecture somewhat like ARM/aarch64/PPC, and it's open, with the idea of being modern and easy to use and implement.
#AskGN you're my favourite arms dealer! i have received my hand to hand combat today and just wanted to say that it looks awesome! (even if it doesn't spin like yours) keep up the good work tech Jesus (and co)
That first question is funny; we used to have a lot of options. I owned both Matrox and Voodoo cards. It brings back memories, if anything, and not always fond ones, like mounting heatsinks on the open AMD cores. Things sure got easier.
Performant is a real word, Steve. Love ya!
VIA also used to make their own series of low-cost/low-power x86 CPUs. A huge chunk of the early ITX niche was largely powered by VIA Eden series chips.
My first graphics card was a Matrox!!! It was just a huge upgrade at the time for me.
They were both Silicon Graphics, Inc. and SGI. They started as the former and changed their name to the latter not too long before they split up and went bankrupt. They also birthed MIPS, which was a RISC CPU architecture. While we are in the wayback machine, I remember the very first GPU I ever had with a fan on it: It was an NVidia Riva 128 on a card made by Canopus. It was advertised as being a workstation card, but kicked ass for gaming as well.
Speaking of high memory overclocking, I was able to get +725 stable, checked with things like OCCT and FurMark, and benchmarks did get higher scores.
AdoredTV has a very informative and in-depth 2 part video on the history of Nvidia (and its "competitors"). Cheers!
@Gamers Nexus 3200 CL14 is pretty affordable??? For 16GB of it, it's $209. You could get this for around $90 last year.
Steve, it's not true that SGI didn't think consumer GPUs would ever be a thing. In reality there was a hefty battle within the company over the issue (about half thought it was a great idea, the other half thought it was dumb), and it happened more than once, first with SGI's IMPACT graphics tech, later when the N64 project got going, and again when NVIDIA (by nabbing some staff) based its GF256 on SGI's IR tech, amid a rather behind the scenes deal to transfer staff and IP (not quite sure on the precise timing).
All sorts of things went on back then, eg. NVIDIA offered to port a Quadro to run in Octane (SGI sorely needed the better performance), they even wrote drivers, but SGI ignored it, though it was certainly mostly SGI that made the mistakes, but don't think there was nobody within SGI who could see a more sensible road (there were plenty; they're the ones who set up and also later moved to NVIDIA, but remember some SGI staff also moved later from NVIDIA to ATI, others to ArtX earlier, etc.); alas they were not the majority of influencing opinion, and there were also external pressures pushing back against the notion of entering the consumer market.
These issues affected their CPU development plans too, and the support infrastructure, eg. it was I, with the help of the admin of sgi.com, who managed to persuade SGI marketing to have the info for their R16K/1GHz (16MB L2) CPU actually added to the site PR material, but it took 18 months, and then it turned out there was also an R16K/900MHz available for Fuel which even the admin didn't know about. Prior to this, SGI had quad-1GHz boards available for Origin, Onyx and Tezro systems, but they were not advertising them, which is bizarre.
Way back in the early years, SUN asked SGI if they could license out RE2 gfx, but SGI refused. Probably a mistake. The deal with Nintendo was... complicated, perhaps a tad naive on SGI's part. SGI jumped into bed with MS several times, but got burned (the Fahrenheit project, again with dropped WinXP support for the VW320/540 which ruined the possibility of decent FW support). There were a lot of errors, perhaps the worst of which was believing Intel's bluff about IA64 and thus sacrificing their own Alien/Beast SNx and Cray CPU plans; that was a disaster, it meant (with IA64 so late) Origin launched with only 16 CPUs per rack, instead of the 128 it could have had if SGI had just decided to stick with MIPS and progress to a multi-core MIPS/Cray hybrid (despite this, Origin did very well, easily becoming the bandwidth server monster of its time; the most popular porn server back then was a deskside Origin2000, very fast and totally reliable). SGI did eventually release a 128-CPU rack (the 3900 series; I have a typical "brick" with 16x 700MHz CPUs), but it was just at best a final faster MIPS (the 1GHz), not a newer MIPS V with MDMX design, no cool vector stuff from Cray, etc. (the idea had been to create a CPU with vectors, each of which could support media extensions; the computational power of such a design would have been astonishing back then). A major opportunity lost. About a 3rd of SGI's CPU design people moved to Intel, which delayed their own MIPS schedule by many years, and eventually the combined Cray plans fell apart.
Why so many mistakes? Why ignore where the market was clearly heading with gfx, and indeed had already gone? Various reasons, but the main one is the way SGI sold its systems, namely via "official resellers". These companies had a lot of influence, and the sales reps were raking it in via commissions for major sales to big companies and organisations, back when they were riding high (govt, auto, defense, oil & gas, medical, film/effects, sciences, edu, NASA, etc.); I helped out with the mass IR gfx upgrade at Dreamworks in preparation for the Lost World production, that alone was many millions. Resellers, their sales reps, had absolutely no desire to deal with ordinary consumers, messing about with tiny orders for mere hundreds or even tens of $. During all this time, the entire ecosystem of the way SGIs were sold was based around very secretive pricing (an old joke on USENET: a PC user asked how much an SGI cost, an SGI employee replied, "If you don't know, you can't afford one." The arrogance of that response rings very hollow now). Resellers would often price things based on how much they either believed or knew the customer could afford, the perfect way to maximally exploit academic research & grant funding. I saw this process in action many times while in charge of various high-end SGIs for several years. Within SGI, there were likewise marketing people who loved this system as well. Remember, this kind of business model meant individuals got to meet with Important People and travel to Interesting Places. Some reseller reps became millionaires.
So, even if SGI had somehow decided to support the consumer market, license RE2 out, respin IMPACT into a $400 board, or beat NVIDIA to the IR derivation that became the GF256, none of it would have made the slightest difference to their eventual demise unless they also completely abandoned their reseller sales model and allowed people to order direct, with a proper ecommerce site such as Dell's or HP's. This was never going to happen though. Everything I heard from those involved both at the time and since has made it clear the company was stuck in a management and marketing rut, they were obsessed with big money clients. Great ideas were produced, but not properly supported and evolved over time (O2, VW series, etc.), wasting multiple opportunities to break into volume/consumer markets, sometimes made worse by quite dreadful marketing campaigns, probably the worst being the O2+ launch (that fiasco lost SGI the support of companies like ILM; people I knew there, like me, had been expecting something along the lines of a dual-core R9000 on a mbd with a much higher max RAM, fitted with Cobalt gfx from the VW series, ie. the same arch but 10X faster, which would have been great; what SGI released was a change in case colour). Thus, even if the world had seen the launch of an SGI consumer GPU, I don't think it would have done very well, not without a huge marketing shakeup. More likely, after an initial strong buzz, the relevant unit would have been sold off, and we'd have ended up with an NVIDIA anyway.
For a while SGI ruled the world of 3D, gfx and film, for good reasons, but like so many entities that expand very fast, the money flying about diluted the cause and corrupted the original worthy aims. Much was wasted on fancy parties & suchlike. Reminds me a bit of WorldCom and Enron. When the company started taking on conventional business and marketing people in the mid/late 90s, it really went downhill. The pricing secrecy got even worse. By the mid 2000s I'd helped sell over $50M worth of SGIs, but after six months of trying I still couldn't obtain a quote for an Octane III (I was asking on behalf of movie companies and others). Talking to a lady at SGI UK sales about these issues (she'd only been there a few weeks), she reckoned that at least half, perhaps two-thirds, of all marketing staff would have to be fired in order for any kind of conventional sales system to be viable, with publicly visible pricing, direct ordering & delivery, etc. I remember that by the late 1990s it had become quite a thing to actually get hold of any kind of price list, those which did become public mostly coming from academia, e.g. here's mine:
www.sgidepot.co.uk/depot/prices1.gif
www.sgidepot.co.uk/depot/prices2.gif
SGI is a lesson in how the mighty can fall. I certainly drank the Kool-Aid for too long back then, a stance which didn't change until the O2+ debacle knocked off the rose-tinted specs and I started hearing more about the difficulties going on within the company. Later they also took on numerous Linux/x86 people who were rather hostile to the old-guard MIPS/IRIX people. One person told me the remaining MIPS/IRIX building section was a depressing place to be.
SGI's original and highly successful ethos of top-down design of advanced tech is still possible, but to endure it needs a connection between engineering and management that's hard to maintain, and it needs a marketing structure that does not succumb to greed or allow external pressures to dictate policy. This is difficult. The way companies like Intel and NVIDIA have been behaving in recent years has strong echoes of what happened at SGI: the arrogance present in certain policies, ignoring customers, taking them for granted, chasing the big money, etc.
Ian.
I had a Diamond Stealth II S220 in the late '90s that used a Verite 2100 chip from Rendition; great card for the money at that time.
Does the Patreon bonus video get released on your YouTube channel at a later date? There are those that are on a tight budget and can't donate monthly. I don't own a credit card or have a PayPal account. I support your channel as much as I can regardless.
17:24 I want a J79 edition cooling fan.
Is it possible to put a radiator in the fridge for additional cooling? Or even the freezer?
People who had worked for Silicon Graphics Inc founded 3dfx Interactive in the mid-'90s. To me, one of the most amazing startups ever. They saw the potential for consumer GPUs that SGI did not, and were the first to build a dedicated, affordable gaming GPU for PCs. The funny thing is that all the analysts said 3dfx's idea of using a pass-through concept for their Voodoo cards was simply nuts and wouldn't be accepted by consumers. You had to have a normal VGA card and put a Voodoo alongside it. Then you had to connect the 2 cards using a pass-through cable, like so: www.dansdata.com/images/buildpc/320/passthru.JPG .
A Voodoo card would then accelerate games using 3dfx's own Glide API ( en.wikipedia.org/wiki/Glide_(API) ) and switch to full screen, as nothing else was possible with pass-through.
Despite all this, 3dfx was hugely successful. In case you want to know more, check out ua-cam.com/video/3MghYhf-GhU/v-deo.html. Smart bunch!
1:31 Rest in peace 3dfx
The Talos workstation is the latest POWER9 attempt aimed at security-oriented users, and its firmware is (or is supposed to be) open source as well.
Phoronix is also doing some comparisons between POWER9, Xeon, and Threadripper.
RISC-V is also gaining ground as an open-source alternative to ARM (well, sometimes the firmware provided by the hardware vendor is not open source).
MIPS is also widely used in some TV boxes and routers.
But at least for closed-source (or call it serious) gaming, x86/x86_64 is still the only option.
For the "gaming" route, I personally always choose any decent router which supports OpenWRT, or even low power x86 based soft router.
"SGI... what was it? Silicon Graphics, I think? Uhh... Silicon Graphics, Incorporated maybe was it?" O_O
*sobs in a corner* #imnotold #kidsthesedays :D
Yes.
I used to work on an SGI Indy.
Quite a nice machine back in 1993.
IRIX was a pain though.
I think there was a lot more wrong with SGI than just their focus.
For the last question: gaming packet prioritisation? Is it available on normal routers?
#AskGN With the transistor size on silicon nearly hitting its limit (3-5nm), when do you think we'll see different materials like graphene or something similar being used in future processors, if we continue to shrink the transistors?
I use a 20mm Phobya fan shroud on my H90. It helps by about 5-10°C when fully saturated.
12:43 Sometimes when you overclock your VRAM, the memory timings (which you cannot see or control) will increase (i.e. loosen) a lot to compensate for the higher clock speeds; maybe this is why his VRAM clocks so high...
#AskGN Is it safe to use nail polish on PC parts? A motherboard, in my case. I don't like the BIOS LED, and there's no option to turn it off, so I'm going to paint over it with nail polish. Almost all nail polishes are non-conductive as far as I know, but I even bought a multimeter for this and I'll test it before using it. Other than conductivity and causing shorts, is there any risk involved? Can it cause any damage over time?
Fun fact: the high-end 3dfx cards needed 8-layer PCBs. If 3dfx had sold off their PCB manufacturing operation, they might have survived long enough to release their Spectre cards (they were only a few months away from being ready for release).
Steve - just to correct you - Matrox is still alive and kicking. Nearly all server graphics, grouped under BMC, iDRAC and others, use ASPEED chips, which are essentially Matrox G200, G450 or G550 (they have VGA output and stream video to internal KVM-over-IP). I even happen to have a Matrox Millennium in PCIe format (it has PCIe x1 and dual DVI output).
As for others, VIA had its UniChrome series, which in one form or another survived as a legacy product. There are efforts to bring ARM's Mali GPU to PCIe format, but that's just a proof of concept for now. We have a complete lack of anything more powerful than a modern Raspberry Pi :(
In terms of cooling, many cases have large empty areas in them, particularly on the side of the case opposite the motherboard. I have to imagine that air is entering the case and traveling through these empty areas to the back, where it exits without coming into proximity with any hot components.
Would it benefit cooling at all if you were to fill these large empty areas with an inert substance like Styrofoam, to essentially force all (or at least more) of the air passing through the case into proximity with the components of your computer? I believe it would have the effect of speeding up airflow inside the case, since you are restricting the space that the air must pass through.
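For anyone curious, here is a minimal sketch of the continuity-equation reasoning behind that guess; the flow and cross-section numbers are assumed examples, not measurements, and in a real case the fan's flow also drops as restriction increases, so the actual gain would be smaller than this idealised calculation suggests.

def air_velocity_m_s(flow_cfm: float, cross_section_m2: float) -> float:
    """Average air velocity (m/s) for a given flow (CFM) through a cross-section (m^2)."""
    flow_m3_s = flow_cfm * 0.000471947  # 1 CFM is roughly 0.000471947 m^3/s
    return flow_m3_s / cross_section_m2

flow = 50.0  # assumed case throughput in CFM
print(air_velocity_m_s(flow, 0.09))  # open cross-section: ~0.26 m/s
print(air_velocity_m_s(flow, 0.05))  # partially blocked off: ~0.47 m/s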
video idea/request for @gamersnexus
The video is as simple as follows: where do you get the biggest temperature improvements from increasing radiator size, and when do they stop improving?
Take a hot chip like the 7900X plus VRM cooling and a hot GPU like a 1080 Ti, and overclock them (so that the hardware/heat produced isn't your limiting factor). Use a single loop with a 120mm radiator and see what temps it reaches and how long it takes to get there under a controlled test environment, then repeat with a 240mm, 280mm, 360mm, 480mm, 560mm radiator etc. and see the results.
I see recommendations from around 2012 saying "a 120mm radiator per cooled component is good enough, so a 240mm is perfect for most systems", however this information is very old; we didn't have 10+ core consumer CPUs with insane VRMs like X299, or the crazy GPUs we have now.
If you also think this would make a great video please help me get the Steve to see this
The Crusoe processor by Transmeta ran x86 code; it used software to translate the instructions.
en.wikipedia.org/wiki/Transmeta_Crusoe
That's late-'90s tech for ya.
I used to play games on a Tandy 1000 AX, which had a TGA (Tandy Graphics Adapter). All my PC friends were jealous of my beautiful 16 color display, while they were stuck with 4. I was kinda jealous of my friend's Amiga graphics though.
IIRC, there's actually a third company with an x86 (and I think also an AMD64) license. It's a Chinese company named Zhaoxin, which got their license from VIA, and they announced some new x86 CPUs in December 2017 or January 2018. As far as I remember, they weren't particularly competitive and were manufactured on 28nm, but it's a step in the right direction.
Oracle also still makes SPARC CPUs, which are RISC designs.
@GamersNexus Are SPARC and MIPS processors still around?
@askGN:
Hey Steve! Thanks for the great content this week! Hope this question has not been asked before:
Question: Would eGPU boxes make more sense when paired with mid/low-end GPUs rather than high-end GPUs? Since the issue is losing performance due to bandwidth over the cable, wouldn't a mid-tier GPU (1060 / 480) lose LESS performance than a 1080 Ti and thus be efficient enough to game on? (I'm mainly looking at this from the point of view of buying laptops in the future with no onboard GPU but a powerful CPU, then just having the eGPU enclosure with a dedicated and upgradeable GPU inside.)
13:16 My MSI GTX 960 4GB version is at +550 in Afterburner and probably +1100 actual speed, so from 7 GHz to 8.1 GHz...
... I dunno if his Afterburner registers the actual or the effective speed. If he put in +1000 and Afterburner applied that +1000 to the actual speed, not the doubled speed, then yeah, it might be possible.
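To make that offset arithmetic concrete, here is a minimal sketch, assuming the Afterburner offset is applied to the displayed GDDR5 clock (which is half of the marketing "effective" rate) and assuming a typical GTX 960 displayed base of 3505 MHz:

def effective_memory_mhz(displayed_base_mhz: float, offset_mhz: float) -> float:
    """Effective (double data rate) GDDR5 speed after an Afterburner-style offset."""
    # Assumption: +1 MHz of offset shows up as +2 MHz on the effective rate.
    return 2 * (displayed_base_mhz + offset_mhz)

print(effective_memory_mhz(3505, 550))  # ~8110 MHz, i.e. roughly 7 GHz -> 8.1 GHz as described above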
Steve... Corsair just launched the SPEC-06. When will you be reviewing it?
I had SGI workstations in HS/College.
What's with your arm at 11:12?
Please do a review of the EVGA DG-87 case. I got one today and its cooling is awesome: the case stays at ambient temps even under load, and my EVGA 1070 Ti, running silent, doesn't go over 52°C at 21.5°C ambient.
VIA was here with x86 CPUs and did not die. They specialise in industrial CPUs these days, but 15-20 years ago, a VIA CPU (soldered onto a VIA motherboard) was a viable option.
If OpenGL/Vulkan got ported to POWER, could we use IBM CPUs to game on Linux?
Nope, x86 instruction shenanigans.
ARM was originally a desktop product. The Acorn Archimedes and RISC PC lines all used ARM CPUs, and they performed really well for their time. The Acorn versions of 3D games spanked the Amiga and Atari ST, both using 68k-series CPUs. The ARM CPU was also used in the 3DO, though I'm not sure that's exactly a glowing endorsement.
#askGN Is there a reason that an ARM licensee (or even ARM themselves) couldn't potentially beef up ARM to make it into a competitive CPU architecture once more? With Windows 10 now supporting ARM, it seems like the time might be right.
You mentioned that modern games no longer require port forwarding. Could you explain why?
For other 3rd parties back in the day: Digital Equipment Corporation, Centaur Technology (IDT), Transmeta, and Fujitsu, to name a few off the top of my head, a lot of which were bought out and then phased out. Centaur and Cyrix were bought by VIA; the Cyrix III is actually not Cyrix at all but a Centaur design, as were VIA's chips through to the Nano.
I am looking forward to disk I/O speeding up with PCIe 4.0 and 5.0. It almost justifies a Threadripper board with an 8-core CPU today, just to get the extra PCIe lanes for disk I/O.
Someone already mentioned the POWER9 workstations which are awesome, but IBM is also working on creating POWER processors for embedded platforms, which should really help with bringing them to the mainstream
I'm surprised Qualcomm hasn't been mentioned in the first topic. Though they are currently focused on mobile SoCs, they could very well expand to the desktop market; Snapdragon and Adreno already have a big market share.
I think you're thinking about SPACERS, not STATOR VANES, on the stator part of the question. The idea of the stators, like on jet engines or on the back of the Noctua F12, is to turn the rotational energy in the air into static pressure. You see, the air coming out of the fan has a lot of rotational energy to it, so the stators stop the rotation and that energy is turned into static pressure.
This
#askgn An ATX motherboard has an x16 slot, underneath that an x8, and underneath that an x4 PCIe slot, right? If so, isn't the performance of the 2nd GPU restricted for SLI?
VIA also has an x86 licence and is active in China with its own x86 design.
Hello Steve, hope you are doing well. I know you mostly cover mid-tower cases (the DBP 900 and HAF X revisited are the only exceptions?), but do you plan on reviewing some full-tower cases as well? I made the mistake 6 years ago of buying the 800D, and now, since you educated me, I ripped the drive cage apart, put 2 140mm fans in the front, and glued Silverstone air filters onto the front panel. Now I have much better temperatures, but that means all 7 of my drives are in the air, held by cables or sitting at the bottom of the case.
I am thinking it's time for me to upgrade to a nice new 2018 case, but I really want a full tower. Which one would you suggest, except the HAF, since it just looks SO bad... I read reviews around, but I don't really trust anyone except you on the matter. Even just your opinion from seeing a picture of a case is more valuable to me than some "reviews" out there.
Thanks in advance.
I feel your pain. I have not seen a good case with that many drive cages in a while; I'm typing this on a computer in a case from the '90s era myself. You either end up with good airflow and a lack of drive cages, or enough drive slots and bad airflow. I constantly find myself reminding others that a NAS only moves the drive problem to another computer case (i.e. you still need a case for the drives with good airflow), and they do not make 40-terabyte solid state drives yet that I can afford, lol.
I sometimes get the feeling that Phanteks copied the specs of my workstation for the revised Evolv X, with 10 rust-disk bays and over 4 SSDs. However, unless Steve says it's as good as the PM01 for cooling, it will never come close to the '90s-era case I have. There is just no market for workstations with good airflow these days; I guess it's "gaming"-only stuff of late (all looks, with terrible ease of building and horrendous airflow). As soon as you need more than 3 rust disks, forget it.
I'm actually thinking about literally taping a Silverstone CS381 to a Silverstone PM01 so I can have good cooling and 10+ drives, lol.
I was looking at it today, and it really seems the only solution if there is no decent full tower. I have always had full towers, and it will feel like a downgrade for me (yes, I am weird). Having the space, the external size, the options is just too appealing to me. If there is no decent full tower, I will probably stick with my modded 800D. I overpaid for it, so I'd better use it; it's not like it will ever... break.
If you are shopping for a mid tower, there are so many good options now (I am pretty sure Steve is responsible for that), but full towers? I like some of them, but I don't want to risk having to mod again. I don't mind spending extra for a full tower, since it's my preference, but I demand it provide a nice clean build with good airflow, which is not guaranteed. I was hoping I could see some numbers and comparisons, and the only guy to do this is Steve. I really can't understand why there is no market for full towers; it's not like you could have space for a mid tower but not 20cm of extra height for a full tower. Price is a very good argument, but full towers are not THAT much more expensive. Especially for enthusiasts tampering with their builds often, having that space matters every time. So for gaming, watercooling, workstations with lots of drives, etc., full towers seem the better choice overall, and yet no one seems to prefer them.
Yeah, sadly full towers aren't very popular. You might want to look into Caselabs stuff, they can get pretty expensive though.
I've been pointed to CaseLabs and Lian Li a lot. CaseLabs is great if you're doing custom water loops. If what you need is good air cooling with lots of drives, the cases that accommodate more than 3 drives are a bit lacking in layout. It's more like they made a huge box and tried to space out the least amount of stuff to fill the volume. And I'm not super thrilled by the airflow paths anyway; many have fans that make air completely avoid anything that needs airflow for cooling. Also, many of their better workstation cases have been discontinued, so not the best option, especially for the price, as pointed out.
Lian Li, aside from the O11 Der8auer case, seriously lacks airflow. It's like they assumed the computer is only going to burn 40 watts like an old '90s-era 486, and the cases with 9 PCI slots for WTX (not to be confused with EATX) have completely sealed-off hard drive sections with nothing for airflow. (NEXT! lol)
Oh, and there is Supermicro. They have a really nice workstation case. It lacks the number of drive bays I would need, but it does have 8 bays and room for an EATX motherboard, with fantastic cooling. The catch is you have to pay over a grand for the case, then gut the system it came with to put your own stuff in it. Good case, terrible price-to-value if all you want is the case, lol.
www.supermicro.com/products/system/4U/7048/SYS-7048GR-TR.cfm
I'm not in need right now; however, I am looking at what's out there as possible options, and I'm not impressed with anything of late.
Steve, does or would GN be willing to review prototype/breakout computer components?
Question. I've heard a few tech tubers mention when delidding that you don't need to use sealant when reassembling. Is this true? I believe one of your vids says to use it, but then you've also mentioned better thermals without it. What are the pros and cons? And do you have any tips or tricks for putting it all back together when the CPU isn't stuck together?
Love the content 👍
Will DX12 be implemented in most new games, or will a newer DX version be?
I still have an SGI Onyx workstation :o
I put a Nepton 280L AIO in my build about 2 1/2 years ago. I have an overclocked, delidded, 6700K with liquid metal running 24/7, but only occasionally running at anything over idle on the CPU. Should I be concerned about AIO longevity? When should I be replacing it? Should I be looking into opening it up, flushing it and replacing the fluid? I'm not concerned about the fans, I replaced them long ago. What I am concerned about is all the critical stuff I can't see inside the loop. Thanks!
That's a cool t-shirt you have there. I was wondering if you sold those :)
I'm curious to know if the difference in power efficiency between power supplies can cause a significant difference in PC temperatures, at least in some situations, or is the difference in heat being released negligible?
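As a rough back-of-the-envelope sketch (the 300 W load figure below is just an assumed example, not a measurement), the heat a PSU itself dissipates is the gap between wall draw and delivered DC power:

def psu_heat_watts(dc_load_w: float, efficiency: float) -> float:
    """Heat dissipated inside the PSU: wall draw minus delivered DC power."""
    return dc_load_w / efficiency - dc_load_w

for eff in (0.80, 0.85, 0.90, 0.94):
    print(f"{eff:.0%} efficient at a 300 W load -> {psu_heat_watts(300, eff):.0f} W of PSU heat")
# Roughly 75 W at 80% vs ~19 W at 94%; whether that shows up in case temps
# mostly depends on whether the PSU exhausts that heat straight out of the case.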
How difficult will it be for new players in either the GPU or CPU market when taking patents and copyright into account? I.e. with all the tech already locked up by the existing players, how difficult will it be to design "new" ways of making graphics cards and CPUs without being allowed to use current techniques/components?
I have a question about Skylake Windows 7 support. It was supposed to end in July 2017 and was extended to July 2018. I built my mom a Windows 7 machine using an i5 6400 and she declined the free upgrade. I'm sure plenty of 6700K gamers might have questions too. Do we NEED to upgrade to Windows 10 ASAP, or is it just another security vulnerability to worry about with Intel? Do you think they will push microcode to disable compatibility, to bring Skylake more in line with Kaby and Coffee Lake?
I actually had a Cyrix "686" and a Voodoo 2 6MB. Came with Turok and Moto Racer. Good old days. 4MB of RAM for $200?
Can the 3dfx Voodoo 5500 be rebuilt with today's technologies?
Agreed on memory. I have my 1080ti at +575. I can go over it, but going over reduces FPS, and going below this again reduces frames, so +575 has been my optimal number.
Oracle still makes CPUs, like the SPARC M7: 32 cores, 256 threads and up to 4.133 GHz, and they encrypt the data on the CPU as well. It's built for servers, of course.
ARM is definitely a major competitor to Intel, and with Microsoft massively pushing to natively support ARM on Windows, we could see some loss of x86. At least in desktop use cases like web browsing, spreadsheets, and video playback, ARM runs on par with low-end x86 stuff, as long as the app being used has ARM support.
Hi Steve! What will you be doing for "Ask GN: 100"? 100 questions related to tech, or a Q&A about GN?
You forgot to mention another 3rd-party GPU company that was part of the 3D accelerator wars of the 1990s: S3 Graphics. It's now owned by HTC, which is only focusing on the mobile platform.
#AskGN We have been using x86-64 for a very long time: x86 from Intel, x86-64 from AMD (amd64). Why is there no new architecture or instruction set? I mean like x86-128 or whatever, a 128-bit processor for the desktop? Isn't it time for a new one that could make 5GHz a base clock, not an OC or turbo clock?
5.3GHz @ 1.35V is insane. I almost find it hard to believe that actually being stable. Even getting it to boot at that voltage is impressive.
Elite Gamer Edition Modmat when?
Speaking of other manufacturers, do you guys have any info on what VIA is doing with Zhaoxin in China? They're supposedly developing low-TDP x86 SoCs, and their ZX-F line (7nm), planned for 2019, is supposed to be on par with Zen+. I was hoping to see more of these China-made CPUs.
I can find a few articles on the new SKUs they were planning, but no shipped products with them, and there's confusion over VIA's x86 license expiring by the end of 2018. Supposedly some Lenovo PCs use ZX-D processors, but I haven't found any.
Wow... For GPUs we used to have PowerVR, Rendition, 3dfx, Matrox, SiS, S3, Tseng Labs... uhh, more than I can think of right now. Cyrix, NexGen and Transmeta on the CPU front.
AdoredTV has the best videos on the history of the CPU and GPU industry.
Anyone remember the XGI Volari? That 3rd-party graphics effort was somewhat more recent. It failed spectacularly.
To continue the question about "gaming routers": what about Killer networking?
Apart from the software side of things, is there really an advantage to it?
Other processor architecture: PowerPC you said, ARM obviously, but also SPARC and MIPS.
I think you already mentioned it... there are some CPUs from Google, used in NAS boxes and such, so yeah, they exist :)
PowerVR should come back to desktop GPUs. Old-timers will remember the Kyro II: very efficient, and fast competition for the GeForce2 MX.
I made a report about a year ago on the GPU industry. I was surprised reviewing my own research. What I think is that we will probably never see another 3rd-party GPU manufacturer in the market. Let me give a few of the reasons, which were more than enough for me.
1. Very successful graphics companies get swallowed by Nvidia, such as 3dfx Interactive.
2. There is basically one "king" in this industry (Nvidia), and he doesn't care about a thing.
3. The barriers to entry in this industry are nuclear-strike-proof (customer reliance, trust, future service, brand image, fear of being swallowed by the "king", etc.).
4. Steve is right, it's going to cost BILLIONS of dollars just to open a plant.
What about something to compete with x86? Something like ARM?
I use an 8700K in my workstation and it's still a heat pig even with water, and a 9700K in my big rig, which I'm happy with: it runs at 5.4GHz, 1.295V, and stays in the 20s °C.
On the 1000MHz memory thing: just wanted to point out, it appears EVGA might have replaced some FTW1 1080 cards with the memory from the FTW2 when they were sent in over the heat issues. They released a BIOS update for FTW2 cards that brings memory bandwidth to 11GHz (5500 x2). I have an FTW2 and am able to overclock mine to +400, bringing it to 5900MHz, or 11.8GHz. A friend of mine has an FTW1 that was sent in for replacement with the heatsink pads and whatnot, and he was able to overclock his memory to nearly the same as my FTW2, yet EVGA did not provide a BIOS update for FTW1 or SC1 cards, presumably because of memory differences.
Completely stable in games, no artifacts, no issues in more than 6 months
#AskGN I've got a weird question. Is there any cooling benefit in stacking fans together (e.g. the same fans at the same speed and flow direction)?
How does the 4GB 480/580 hold up in 2018? These cards need modern benchmarks.
"Gaming" routers are often BS but it is beneficial if your router has a QOS setting that can handle your ISPs bandwidth. Quality QOS settings can avoid network ping spikes when someone else on the network starts downloading a huge file, backing up a phone, etc.
In response to the first #AskGN question: Qualcomm is a major mobile CPU manufacturer, along with several other products. A huge market of Android phones and now even a few Chromebooks have Snapdragon processors. Though it is unlikely for them to come compete in the desktop area of the market, their server processors along with their AI technology are pretty incredible nonetheless.
What about RISC-V? It seems set to become a new big player in the PC space (ARM already is, but not for main rigs yet) in a few years. There is a nice video by Linus (LTT); it's a nice introduction, just search for "RISC-V LTT".