You're absolutely right Patrick. I like the tech, I like the price, I just don't like the artificially constrained CPU support and that's more than enough to keep me away. If they open it up to the entire Xeon line in the near future, then it could drastically change the way people like me build out our clusters.
Though if you're paying for CPU-based licensing, as with an RDBMS like SQL, the hardware cost of the solution is a small fraction of the total over its lifespan. If you get hung up on standard vs. L SKU pricing and relative hardware costs, you're doing yourself a disservice with regard to the TCO of the solution as a whole versus how much performance you get for that TCO.
A new technology we have developed is an LLVM compiler extension for Optane that makes data structures automatically persistent. The research paper is under peer review, and it should make future uses of Optane easier for developers.
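For anyone curious what "automatically persistent" would save developers from doing, here is a minimal hand-written sketch using PMDK's libpmem (to be clear, this is not the compiler extension described above; the mount point, struct, and build line are illustrative assumptions). In App Direct mode this flush-and-fence bookkeeping is exactly what the application, or a compiler on its behalf, has to handle:

/* Build (assuming PMDK is installed): cc persist_sketch.c -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

struct counter {
    char name[32];
    long value;
};

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on a DAX-mounted filesystem backed by the PMem modules.
     * The path is hypothetical. */
    struct counter *c = pmem_map_file("/mnt/pmem0/counter", sizeof *c,
                                      PMEM_FILE_CREATE, 0666,
                                      &mapped_len, &is_pmem);
    if (c == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Update the structure, then explicitly flush it to the persistence
     * domain. This manual step is what an "automatically persistent"
     * compiler extension would insert for you. */
    strcpy(c->name, "requests");
    c->value += 1;
    if (is_pmem)
        pmem_persist(c, sizeof *c);   /* CPU cache flush + fence */
    else
        pmem_msync(c, sizeof *c);     /* fallback when the mapping is not real pmem */

    printf("%s = %ld\n", c->name, c->value);
    pmem_unmap(c, mapped_len);
    return 0;
}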
Thanks for the great video! Having been a strong believer in the potential of Optane/XPoint technologies for years now, it's great to hear your overview and views on potential future possibilities. It still amazes me how rare even the NVME drive use is in the mid market with the huge performance per dollar it can offer for many database server applications - particularly when factoring in density and software licensing components.
Remembering what Optane could have been makes me sad every time. The applications for a true midpoint in the data delegation chain between system memory and long-term storage are more numerous than can be described. Servers, absolutely, it would shine there, but home computing too. We've already seen the system save state possibilities in the H20 modules, and the way it completely re-invigorated the idea of a cached spinning drive, where before NAND-cached spinners were actually SLOWER than uncached. In gaming, rendering, editing, animating, ANY intensive workload on GPU or CPU could benefit from more immediately available transfers over base memory, especially with the spikes from system memory rewriting. But Intel killed adoption because of artificial compatibility limits. Imagine, artificially limiting compatibility, then throwing your hands in the air and shelving the whole technology once you realize that it's not being adopted. Yes, Intel, limiting who can use something will cause fewer people to use it. Almost like resources that stand to benefit the entire planet shouldn't be in the hands of people willing to kill those resources to save a buck (recent Unity scandal anyone?).
Intel really blew it with Optane. It was a decent technology, but they restricted it in weird ways, and didn't explain well when it should be used. Intel is lost at sea, and has imploded somewhat. They still make great chips though; extremely reliable.
And to think I was under the assumption it was just a hopped-up 'Ramdrive' with more options... Very detailed where I really needed it, keep up the great work!
Maybe they have something in development. Think about a game console done the old-fashioned way, where you just push in a module with the game on it, but with this technology. No boot time; the game starts immediately because it's already in memory, like on the Atari 2600.
Overconfidence tends to be Intel's greatest weakness, though their ability to shift quickly (for a corporation anyways) saves them often. The loss of Micron eventually did Optane in outright, sadly.
I wouldn't be boosting a single-sourced, "enterprise" technology where the vendor says "we have a plan [now that the only manufacturer quits the market]". Isn't "that's all we have" a red flag for anyone? So due to the risk, you should de-risk your enterprises, and sell out of your inventory of this risky tech. Please. I would like to buy it cheap for my home PC.
Ye. As sad as it is that there's probably not going to be any more usable upgrades to this, the idea of it becoming early legacy tech, falling out of the server market and into the used market, like with the X79-X99 Xeons... well, let's just say budget never looked so fast.
Better basic explanation than Intel ever did, but I still think it's a very niche, very expensive solution looking for a very niche problem to solve. If they had pushed a simpler form to the consumer mainstream at a much lower price it might have been viable by now.
That bit about the 'L' CPUs is not too far off from what Intel did to effectively kill Optane for desktop use. ALL of the older systems that would have benefited from Optane as a drive cache for platter drives were excluded from Optane compatibility, and few with M.2 drives would 'feel' the game load-time difference on the then-newer systems, making Optane effectively DOA for desktops.
I got up to 7:46 and still don't know why I would want to use Optane, whether I could, what it actually is, its advantages, etc. So you are right: so few know, and that probably explains why not many buy it.
I watched this video to get more educated about Optane Memory. I was looking at a laptop to possibly buy online and they were offering Optane Memory at no additional cost. Not knowing anything about Optane, I just assumed it was faster memory! Well, after hearing your explanation and how you described how the speeds could be slower, I am still confused about this technology and if I would even want to have something like this in a laptop. When I purchased my laptop 8 years ago I purchased a hybrid model storage technology thinking it was better and faster only because it was more expensive. Turns out I should have called someone at HP and talked about this because I later learned it wasn't the fastest choice. I later purchased a 500GB SSD to replace the HDD and it now is much faster. Thanks again for your explanation.
For a laptop: Get the RAM you want and the SSD you want, skip Optane Memory. That is caching via a lower-cost NVMe SSD. This could be a much longer response, but that is the advice I give everyone, and follow myself.
PMEM could have a billion different applications everywhere -- if not for Intel's insane insistence on keeping the bare chips unavailable. Portable devices, IoT, opus, networking... there are so many places where persistent memory could be extremely successful without ever encroaching on Intel's exclusivity on the "use pmem on DDR4 sticks" idea.
They invested heavily for a long time to develop it, so I guess it makes sense that they are fighting hard to not let it become a commodity market component but rather try to recoup their costs in as many ways as possible. But if they cared about the technology itself generating the funds rather than the overall company bottom line, then I agree, they should sell chips as well. They are using lock-in to increase overall profit (which it probably does, and when you have a fiduciary duty to shareholders it's kind of a legal requirement) even though customers wish they didn't.
The problem is more in the complexity of the drive and controller itself. Compared to NAND or DRAM you need a much more sophisticated system to get it to work, in conjunction with a software overhaul that will take advantage of the benefits of the device. When you approach many companies about doing this sort of work for the marginal gain that the drive provides, they don't want to do it.
@@gulllars4620 Mora Fermi is right. Remember they aren't making a profit; the factory is losing $400M a year. It's all about scale. No scale == high cost. High cost == high price. High price == low demand. Low demand == no scale. No scale == no profit. A billion niche applications that value the technology IMHO probably would have enabled scale early on. If they care about the company bottom line... the current plan isn't doing a good job of protecting it, near as I can tell.
Prosumer, not really. Even our W-3275 did not work. You can get workstations like the Precision T7920 www.servethehome.com/dell-precision-t7920-dual-intel-xeon-workstation-review/ and those support Optane PMem because they are basically servers in a tower chassis. William did that review, then I got to see the one we purchased for Dmitrij for his router/firewall testing, and I really wanted one solely for DCPMM support.
It seems Optane is trying to be too many things at once, when they could have made 4 or 5 products that each focused on one feature at a time. When someone has a problem, they want a solution, and people will always pick the simplest solution for their problem. If someone wants DRAM with persistence, I'd imagine they would pick a product that was just 'DRAM with persistence' rather than Optane, because Optane is too complex. Imagine Intel made a box that could be a CPU, or a GPU, or even a TPU... If I needed any one of those features, I wouldn't buy the Intel solution (no matter how awesome it must be) because I'm clearly paying for features I don't need, so I would buy the product that just gave me what I needed. I see this in startups as well; people buy the simpler things that deliver what they need, and only when the customer is a larger enterprise are they more likely to buy the 'can do everything' product. In my opinion Optane is too complex, and it needs to be a sharper, narrower offering, split into separate products. BTW, I still wouldn't buy it; even after this great video, it still sounds too complex.
'Imagine Intel made a box that could be a CPU, or a GPU, or even a TPU...' You get an FPGA, and Intel does make those LOL. (I understand it is not what you mean, but I just have to point out this fun fact...) However, combining features seems to be a trend now, just take M1 as an example. It is a CPU, GPU, memory, AI engine and other ASICs combined and people are buying it like crazy. And I think the reason behind that 'what I need' for many people nowadays is 'a bit of all'. Yet I totally agree that Optane should be a sharper product. I would even go as far as saying that it should not be something that fit into a lot of existing product categories, it needs to be something of its own kind (I know Intel has been advertising this way but it is not). For example, how about a storage device that the system can directly boot 'into' without having to get the data out?
I think you hit the nail on the head with the timing. Optane just has too little of a use case and even less of a value proposition. Micron sees the writing on the wall and is bailing out.
CXL is the future. It is an open standard and will meet the price-to-performance targets that Optane was designed to meet. It is an interesting technology, but it always comes down to what you get for the $$$.
What do you run on PMEM and Optane SSDs? MySQL? As a filesystem? A detailed tutorial would be nice. Especially with a hypervisor in the mix, mixed mode, and also apps which can actually use the pmem for stuff like a cache (Redis?) or a Ceph store with (mirrored?) transaction logs on pmem?
Graylog comes to my mind - coupled with regular backups to slower redundant media. They always say you should use the fastest disks possible so why not put it on Optane?^^ Having many machines log to graylog means ungodly amounts of disk IO
Databases benefit the most from it. You can store the metadata in the Optane and use it as both a write cache and a read cache, which speeds up your most commonly accessed data
@@jmlinden7 Well, yes, maybe. What databases have you used with pmem? For other apps, after all, PMEM is slower than DRAM and riskier to use than flash. I haven't found good use cases yet; maybe ephemeral VM drives for a hypervisor or a large Redis session cache (requires an Intel patch).
Is that P5800X only available in the U.2 form factor, however? After experiencing success with Highpoint's SSD7103, I really like the advantages of hosting such fast storage in a single PCIe x16 slot, particularly by going with the latest PCIe Gen4 add-in cards. Highpoint now has a bootable low-profile add-in card that hosts 2 x M.2 NVMe SSDs, but I think it's only Gen3. And, the model SSD7540 supports 8 x Gen4 M.2 NVMe SSDs! It took quite a while for Highpoint to make their AICs bootable, but I'm very glad I waited: the installation was a piece o' cake, e.g. by running the "Migrate OS" feature in Partition Wizard. Everything worked fine the first time I tried to boot from the SSD7103. The only possible "glitch" was the requirement to change a BIOS setting to support UEFI, but I knew about that requirement ahead of time.
@@supremelawfirm The Highpoint solution looks very interesting but at low queue depth nothing beats Optane. Regarding the P5800X, it comes in both U.2 and E1.S form factors. It supports PCIe 4x4. The capacity goes from 400GB to 3.2TB.
@@Patrick73787 For clarity - Do all the current Optane U.2 models support both PCIe Gen 3 x 4 and PCIe Gen 4 x4 interfaces and use the PCIe Gen of the connected host adapter card or motherboard? This is something that isn't very clear.... FWIW, I think there's a missing piece from the Optane story... the possibility of a second on chip memory controller such that another X DIMM slots could be on the motherboard and use as a super fast SSD - think table indexes for DB, kernel swap space, memory mapped files, or a scratch disk for intermediate app results all without impact on the main DRAM... in short an early prototype CXL memory system... Bottom line though, I agree with you that Intel is using/abusing its IP and sacrificing Optane to support their current subpar CPUs....no wonder Micron wants out as it can't expand the Optane market to become profitable.
@@danwolfe8954 My Optane 905P SSD is a U.2 PCIe 3.0 x4 drive. The P5800X is the PCIe 4.0 x4 successor to both the P4800X and the 905P SSDs. The consumer variants like the 905P came in both U.2 and add-in PCIe card form factors, while the P5800X is aimed at datacenters only, in U.2 and E1.S form factors. Every Optane SSD uses 4 PCIe lanes like regular NVMe drives. The CXL protocol comes from Intel and requires at least 16 PCIe 5.0 lanes to work in non-degraded mode. CXL will be first utilized on Sapphire Rapids CPUs. Those CPUs will also be paired with 3rd gen Optane DIMMs as mentioned in this video. Let's wait and see how Optane will fare in the CXL ecosystem.
Ah... glad to have watched this AWESOME video - learnt a great deal from your videos! Wonder if I understand this right:
1) Memory Mode: run lots of VMs / containers
2) Persistence / "ramdisk" mode: for apps that can use the APIs. Perhaps use as a cache for a NAS?
3) Mixed mode: balance between memory-intensive and storage-intensive services - VMs & NAS.
I know it might be too soon now, but I definitely want to see more about CXL and its impact on things like TrueNAS. And even things like your desktop gaming systems, if system RAM and graphics RAM run together, because right now video RAM runs on incredibly different standards (like GDDR6X vs 6/5, and differing bus speeds etc.)
Probably some time until you really see CXL impact on desktops. It will happen, but this is being driven by hyperscale data centers. Eventually, the idea is that you can build huge systems and share resources to get cost benefits. Desktops are limited a bit by wall power; there is only so big you can make a system in just 1.4-1.6kW. Data center GPUs are already at 500W (A100 80GB SXM4) and going much higher. Still, it will trickle down eventually.
I am using a 900P in my desktop, because I don't like the unreliability of SSD caches. And SLC lasts forever. Most SSD benchmarks don't write more than 1GB, which is conveniently the most common SSD cache size.
What I'd really like to see is a way of being able to put SRAM into DIMM slots. At scale, it should only cost about 6x as much, but provide one of the biggest performance improvements possible.
Terrible idea, actually. 6x the cost for an advantage that will be eaten by caches anyway. The market is even turning the opposite way - eDRAM (DRAM on a die right next to the CPU die within the same substrate).
Intel is currently buying most if not all of their Optane 3D XPoint memory from the Micron fab in Lehi, Utah. Intel's fab in Albuquerque is partially up but in the same stage as Micron's, with low output. Micron is not selling the technology, just the building and possibly some of the equipment. I work at the Lehi site and this announcement was like getting kicked. We worked our butts off developing this technology; being told we are getting sold along with the building really sucks.
Meanwhile, I'm over here using a 16GB Intel Optane "Memory" module (the NVMe drive, not one of the fancy DIMMs) as swap space on my tiny home NAS, because it was a $7 impulse buy :P
Really enjoyed this overview of PMEM 100/200. It also has me thinking about delaying a PowerEdge PCIe Gen4 server with PMEM 200 and waiting for PCIe Gen5 PowerEdge in 2023-2024. Why? Because the one downside you didn't bring up about the older 2017 PCIe Gen3 servers stuffed with really cheap PMEM 100 is hot swap: the upcoming PCIe Gen5 servers let you hot swap GPUs, NICs, NVMe Gen5 SSDs and other failure-prone devices directly from the bays on the front. What do you think about that comment, does that sound accurate?
High-end GPUs are not going to E3.S in this generation, so they will not be swappable. NICs are more OCP NIC 3.0 form factors. You are right, being in the memory channels is a big downside of the technology.
@@ServeTheHomeVideo Thanks for your constant set of valuable inputs. Can't wait to see your episode on the cheapest two-socket PCIe Gen 5 Windows Server 2022 server that will accept Kioxia PM7 SSDs.
Love your presentations on your channel! @7:39 you have as total the "clock x speed" but a better number would have been gigabytes/sec, so we can directly compare to the bus/network needs. Also I believe it's spelled "Deficit"
Jay! You are totally right. Deficit was an error that got corrected in the main site version but apparently did not make it into the video portion. Oh well. On the clock * speed, this was meant to be a high-level piece, so it was left as-is just to be a conceptual model that was easier to consume. You are totally right that GB/s would be better, but I was just trying to get the higher-level relative percentages out there for folks. GB/s was one more conversion that would need to happen.
@@ServeTheHomeVideo Patrick! Thanks for getting back to me; it shows how hard you work for your channel! Intel recommends a 4-to-1 DRAM-to-Optane ratio, which means giving up 25% of your RAM performance to Optane = 50 of ~200 GB/s for 8-channel 3200. PCIe Gen4 = ~2 GB/s per lane, so you would need ~25 lanes to make that back without giving up RAM performance; guess that's why Micron let it go. That's why I always use GB/s, so it won't matter if we talk OMI, FBDIMM, DDR5, etc.
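To make the arithmetic above easy to check, here is a tiny back-of-envelope sketch (assuming DDR4-3200, eight channels, and PCIe Gen4 with 128b/130b encoding, as in the comment; these are theoretical peaks, not measurements):

#include <stdio.h>

int main(void)
{
    /* DDR4 moves 8 bytes per transfer, so GB/s per channel = MT/s * 8 / 1000 */
    double ddr4_channel_gbps = 3200.0 * 8.0 / 1000.0;        /* 25.6 GB/s   */
    double dram_total_gbps   = 8.0 * ddr4_channel_gbps;      /* ~204.8 GB/s */
    double pmem_share_gbps   = dram_total_gbps / 4.0;        /* the comment's 25% reading of the 4:1 ratio */

    /* PCIe Gen4: 16 GT/s per lane with 128b/130b encoding */
    double pcie4_lane_gbps   = 16.0 * (128.0 / 130.0) / 8.0; /* ~1.97 GB/s  */

    printf("8 x DDR4-3200: %.1f GB/s\n", dram_total_gbps);
    printf("Bandwidth handed to PMem at 4:1: %.1f GB/s\n", pmem_share_gbps);
    printf("PCIe Gen4 lanes needed for that: %.1f\n",
           pmem_share_gbps / pcie4_lane_gbps);
    return 0;
}

That works out to roughly 205 GB/s of DRAM bandwidth, ~51 GB/s handed over, and about 26 Gen4 lanes to recover it, which lines up with the ~25 lanes quoted above.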
Hello Patrick, how are you doing? Thank you for sharing your insights on cutting edge technology. It is in many ways a glimpse into the future. I sure hope that tech like this gets to be available for consumers too one day, it seems like a win-win situation.
"...If you really want CXL to take off, it needs to get into other sockets at this point other than just Intel and became an industry solution...", 22:40. Yes and no. It can become an industry solution by diverting from Intel if these became laggards. From new OCP-driven PCIe, DIMM PHYs and internal controller architecture, kernel scripting, memory and CPU compute reallocations and emergent ARM SoCs/ wafer designs; reimagining the silicon. My opinion we are at this point in the industry.
As a prosumer, I love Optane in any form factor; I have been using Optane (M.2) as a cache drive in my file servers since it was introduced. At first I added it to my Dell C2100, then I built a backup server using a Ryzen 5 Pro 4650G, and those two 100GB Optane drives keep that slow SAS2 controller seeming very snappy. Sure, they're not "supported" on anything other than Intel... 7th gen and later? But I've had no issues with it on AMD and older Intel. Sure, it doesn't work the way Intel wants it to work unless you have 7th/8th gen, but most server OSes can have storage tiers for caching and use them in much the same way. I would like to see someone bring PMEM to the consumer market. I'd be more than happy with 1 channel of 64GB DDR4-2400 and 1 channel of PMEM, if I could get 200GB of PMEM (in mixed persistent mode) for the same price as 64GB of DDR4.
@@andyhu9542 Ideally, I'd like triple-channel memory controllers on consumer desktops designed to work with 3 traditional channels, or 2+1 PMEM channels, but with the ability to do all PMEM. AMD at least will need triple-channel memory for their upcoming RDNA2 APUs, and if they go through with the rumored 16 compute unit APU, then 3 channels probably won't be enough, even with DDR5-5500. Look at the 5500 XT, a low-end GPU with only 22 compute units, but it basically has the bandwidth of 7 channels of DDR4-4000 to keep those compute units fed.

I don't recall off hand, but IIRC PMEM is about as fast as DDR2. If AMD were to, say, have a huge TSV cache that covered the whole die, instead of just sitting on top of the existing cache on die, then maybe the performance hit won't be as bad going from 2 channels of DDR4-3200 down to 3 channels of basically DDR2-667 (probably slower). Heck, having 400+MB of total cache on an APU might make RDNA2 usable at only 2 channels of DDR5-6400. For reference, Valve engineers decided that 2 channels of DDR5-5500 would not be enough for that low-end APU, so they gave it a whopping 4 channels, though really they're 32-bit channels, whereas DDR4 channels I believe are 64-bit (64+8 with ECC), something like that.
I would like to see someone rig up some DDR4 Optane to an OMI CPU with one of those adapters. POWER10 comes to mind. That won't drag the system throughput down, except that some memory channels are now for Optane, not DRAM.
I would love to use Optane SSDs for my CFD simulations, but the markup is too high compared to regular SSDs. If it was, say, 1.5x the $/gig of an MLC SSD I would buy it, but not like this...
Yeah, it is a bit of shame about the pricing. Earlier on it was only around 4x the cost for a while but since NAND pricing has significantly dropped while Optane pricing hasn't really moved and has even increased slightly in some markets for some products, it seems around 10x now.
If you just need high write endurance, get a Micron 9300 MAX, they are extremely durable, 18 PB of writes on the 3 TB model. A pair of them in RAID 0, if you need more speed. If you need no such endurance, there are many more good SSDs on the market, some of them are much faster.
@@TheBackyardChemist I am using 970 Pros as scratch drives; my simulations can create around 200GB of data per day, but that's not always the case. I just have to make sure everything is backed up and be ready to buy a new one when one of them dies... something like the 9300 MAX would make sense, but I cannot justify the straight-up cost of it :( (I mean, I could, if I had the capital)
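For some quick perspective on those two comments, here is a back-of-envelope endurance calculation (the 5-year warranty window is my assumption; the 18 PB and 200 GB/day figures are the ones quoted above):

#include <stdio.h>

int main(void)
{
    double rated_writes_tb = 18000.0;       /* 18 PB quoted for the 3 TB 9300 MAX */
    double capacity_tb     = 3.0;
    double warranty_days   = 5.0 * 365.0;   /* assumed 5-year warranty */
    double workload_gb_day = 200.0;         /* the CFD scratch workload above */

    printf("Drive writes per day over the warranty: %.1f\n",
           rated_writes_tb / capacity_tb / warranty_days);
    printf("Average write budget: %.0f GB/day\n",
           rated_writes_tb * 1000.0 / warranty_days);
    printf("Years to burn through 18 PB at %.0f GB/day: %.0f\n",
           workload_gb_day, rated_writes_tb * 1000.0 / workload_gb_day / 365.0);
    return 0;
}

Roughly 3.3 drive writes per day, or nearly 10 TB/day of write budget; at 200 GB/day, endurance is clearly not the limiting factor here, price per gigabyte is.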
An adapter (probably needing an integrated controller) so you can get a couple of Optane DIMMs in a PCIe slot to show up as a normal NVMe SSD would be a pretty handy thing once Optane starts hitting eBay in volume. Kind of like the RAM drives of old, but more usable. It also seems like Intel needs to work hard on DIMM capacity - if they could push that up, then a stick of Optane and a stick of RAM would retake the capacity crown over two sticks of RAM. It's a lot easier to turn a profit when you're offering something no one else can, rather than offering a budget alternative to RAM.
> a couple of optane DIMMs in a PCIe slot to show up as a normal NVMe SSD That's exactly what Highpoint and other vendors of PCIe expansion cards are now offering aka "4x4" add-in cards ("AICs"). There is also a Highpoint "2x4" AIC for low-profile chassis. It took Highpoint a while to make their AICs bootable, but that feature is now standard on some of their AICs. We've had much success for a whole year already, using Highpoint's SSD7103 to host a RAID-0 array of 4 x Samsung 970 EVO Plus M.2 SSDs. Their latest products now support PCIe Gen4 speeds. p.s. I'm a big fan of Highpoint's hardware, because it works and it's very reliable. It's their documentation that needed lots of attention and hopefully getting better with time.
@@maxhammick948 Thanks! I misunderstood your point above. I appreciate the clarification. Also, I seem to remember a much older product aka "DDRdrive X1" , that held 2 or 4 DDR DIMMs in a PCIe add-in card. I don't believe it found much of a market, however. Found it here: ddrdrive.com NOTE WELL the short edge connector (looks like x1), which necessarily limits its raw bandwidth upstream. If Optane DIMMs currently max out at DDR4-2666 - as shown by Patrick in this video - then the calculations I did below seem to favor -- by quite a big margin -- a 4x4 add-in card with 4 x Gen4 M.2 NVMe SSDs: DDR4-2666 x 8 bytes = 21,328 MB/second raw bandwidth 16G / 8.125 x 16 lanes = 31,507 MB/second raw bandwidth 4x4 = x16 = elegant symmetry (The PCIe 3.0 "jumbo frame" is 1 start bit + 16 bytes + 1 stop bit = 130 bits / 16 bytes = 8.125 bits per byte transmitted, hence the divisor above.) I would choose a Western Digital Black SN850, partly because I prefer having a DRAM cache in each RAID-0 array member: 4 x DRAM cache @ 250MB = 1GB combined cache in RAID-0 mode. With PCIe doubling the clock speed at every generation, it appears to be catching and surpassing modern DDR4 DIMMs in raw bandwidth. p.s. I ran these numbers today, because for decades I have believed that DRAM was THE fastest memory available. Now, 4x4 AICs are proving to be a real challenge to that belief.
Does Skylake (8124M) support Optane with the Intel 610 chipset? Is it really more durable than NAND, given that memory operations are a thousand times more frequent than disk operations?
Newbie here: Is there also a mode where the memory bandwidth is increased at the cost of lower available storage (similar to RAID 0)? It seems that 8 instead of 4 memory channels give (only) a 10-20% boost for a single CPU, but I can imagine a lot of options there, too?
The problem with Optane DIMMs is not the performance nor their non-volatile nature. The problem is that most enterprises use virtualization. Say you run a VMware cluster of four dual-socket Xeons powering about 50 servers, where two of those are running a production database (i.e. not development or test systems). You need to give all four of these hypervisors access to Optane storage... and at this point things get complicated. First you need to sacrifice one memory channel for Optane on each server, which (a) limits the amount of memory your hypervisors have access to and (b) sacrifices performance of the memory they do have access to. So basically you are impeding the performance of 48 VMs just by installing Optane, and it's not even being used yet.

And now what? You've got four hypervisors with, say, 1TB of Optane storage each... do you use it locally? If a machine dies, all VMs will get migrated to other servers - except the production database that runs on your local Optane disk, the one that needed storage so big and so fast that Optane was chosen for it. That one is now offline? Or you use vSAN with your Optane drives to share it across your VMware cluster, but that introduces overhead and latency, and the high storage IOPS will be no faster than your bloody Ethernet allows...

Optane in this state is dead in most use cases. You need to pair it with a storage server solution, something that runs low-latency connections to the hypervisors and has failover capabilities. Or vSAN, if your servers have enough I/O to handle very low latency, RDMA-capable Ethernet/InfiniBand connections. But guess what high-bandwidth, low-latency looks like? Correct: high-speed Ethernet (40-100Gbit), RDMA support on those NICs, and a ton of fast storage drives in the system... something NVMe SSDs already deliver for a lot less money.

The vSAN application was something I looked into in our environment... and we got a little shafted by Fujitsu/Intel with our servers. First, they do not have very much PCIe capability at all... and now the fun part: for some reason all the PCIe x8 and x16 slots share the same interrupt... very neat if you want something to be fast. Outside a virtual environment Optane is fast, but the non-volatile nature of the storage still stands in contrast to the potential failure of the host... and by "failure of the host" I don't mean it has to explode to cause a problem - a network error, a temporary bug of some sort, anything disconnecting the thing from its network will be enough to get you in trouble. No pyrotechnics needed.
What would the tradeoff look like between having PMEM in memory mode and having a few (let's say 4) P5800X drives dedicated to Linux swap partitions? If most of the data in RAM is indeed cold or lukewarm, then maybe even that would be an acceptable compromise? And this would of course work in an EPYC system.
I'm legit curious about the fundamental physics behind Optane/XPoint memory... If I recall some of the early documentation that I read, the way they described the technology sounded a lot like memristor technology, and honestly sounds a lot like Hewlett Packard's memristor/Crossbar concept from 2007. HP abandoned it years later, due to difficulties manufacturing it. Right around that time, Intel came up with Optane.

The memristance effect was theorized in 1976, and some anomalous measurements as far back as the 1800s might be attributable to the effect, but not recognized at the time. There are a few hobbyists, including a YouTuber who has published videos on the memristance effect, but it wasn't till HP announced in 2007 that they had been working on a prototype memristor that interest really took off. The basic concept of memristance is that the resistance of a memristor changes with the flow of current. Flow in one polarity increases resistance, and flow in the opposing polarity decreases resistance. The changes made to the materials are non-volatile and highly durable. Reading a value alters the stored value, unless an opposing current to counter the read current is also applied.

What always got me with the memristor concepts was that HP had conceived a way to not just use it as non-volatile, high-durability memory, but also figured out how to perform massively parallel processing _in the crossbar array itself,_ effectively merging the memory with the processing unit. They had even come up with a way to use the Crossbar system to create neural networks. In later years, they pulled back from the more elaborate concepts and settled on just trying to market it as the heart of what they dubbed "The Machine", which was just a server with massive amounts of encrypted non-volatile memristor RAM that served as both working and storage memory. Really, no different than what Optane is now.

What's curious is that the initial descriptions of Optane (the ones that I saw) really made it look like Intel was just doing memristor memory, though Intel has, as far as I'm aware, always denied that Optane had anything to do with the memristance effect.
3D XPoint is a phase-change memory. Its resistance/memristance is not polarity related. I know specifically how the memory cell is read and written, but it's sufficient to say that the resistance of each cell is bistable, dependent upon a phase change sequence. It is also bit addressable/readable/writable due to the cross hatch. Indeed, it was disappointing when Micron announced their mothballing/termination of 3DXP and the sale of that fab.
Forget pmem. With current high and ever-increasing DIMM capacity/speed, plus NVMe Gen4 / 24G SAS drives with low latency, high capacity and increasing endurance, I find it hard to justify this technology and its complexity, which essentially requires software engineers to write for it. Optane SSDs may have their place, though, but only as specialized endurance drives --- maybe --- for now anyway.
I can't find Optane DIMM power figures on the memory slot power rail. I have heard about the energy in the structure on writes and efficient reads, but what is the difference versus DRAM? Intel and other sources will talk about SSD read and write power, but what about the DIMMs? mb
Thousands of dollars more for XCL+r? You've got to be kidding me. $400 a pop for %-grade SKU split mirroring coming out of the fab in 1M unit orders, if you accept Intel terms to use what you need and become a hyperscale or OEM broker-dealer for what you don't want and will sell to others, less the Intel Inside NRE allowance that of course you pocket for yourself, I mean your employer; well, everyone knows what I mean. mb
Is there any performance impact beyond lowered clock speeds if I want to toss, say, 2x PMem100 DIMMs into an existing optimally-configured 6-channel Skylake/Cascade Lake system? Like lowered peak bandwidth? Increased DRAM latency? Or does it need to be balanced, like RAM, so minimum 6x Pmem DIMMs for a Skylake/Cascade Lake platform?
There is an impact. With interleaving, going to 6x per socket means you have more modules interleaved than 2x. Also, memory/ DIMM population was a several minute segment that was pulled out of this already very long video. That is another area of high complexity.
I honestly think Micron realized that the technology wasn't catching on and they needed to exit the market before their losses ended up accruing with no profit in sight. I mean, if their customers were asking for 3d xpoint technology and they were able to ship volume, then I'm sure they would have decided not to exit the market. But I see Micron going into cxl technology because their customers are going to need memory that can communicate at different levels of their system. I can see cxl being useful for machine learning and other applications that have mixed workloads.
@@ServeTheHomeVideo Knew it, I have some of this ECC RAM in my dual Opteron test bed lol. Loving the channel, you get all the great gear I'd love to tinker with. I actually have the 905P 380GB M.2 and the 900P 280GB. I'm about to move to AMD with a 5950X, and I'm hoping I can utilize Optane as a cache at the very least.
@@ryanwallace983 I have PrimoCache for the software side since it's been the default for all my rigs. Thank you, I'll give it a shot once I get the 2TB Sabrent Rocket Plus in the mail.
How much is the cost savings with Optane DIMMs actually? The last time I looked at the larger Optane SSDs, they cost almost as much as the same amount of ECC FB-DIMMs, so my assumption with the Optane DIMMs is that, unless they're substantially cheaper than those large Optane SSDs, you might as well just use regular RAM in the server instead of Optane. It seems to me that there are a bunch of ways that Intel could have sold Optane with lower bars to entry, but always chose to go with the approach that maximized the implementation complexity and cost. For example, Optane could work great in the consumer space as a transparent caching layer on an SSD. Pair a bunch of QLC with Optane and present it to the host system as a single drive, with the SSD controller handling the management. That could be neat, right? QLC cost with Optane performance? Well, Intel tried this, only the "integrated" products exposed the single SSD as two different drives (one Optane and one NAND) to the host system, and required the user to set up caching software to leverage it. Then they tried selling tiny Optane-only SSDs for caching, but they inexplicably locked them to certain Intel chipsets and still required external software to leverage them. Even Intel CPU owners were locked out if they had the "wrong" chipset. Talk about shooting yourself in the foot!
In storage, Optane has great (low) latency compared to most NAND, which has large page and block sizes to keep cost down. Samsung (and one other vendor?) makes a small page/block NAND whose latency is probably good enough relative to Optane. As memory, if an operating system really used persistent memory, it could have had some interesting uses, but that would take time to develop. Otherwise, the use case is a system that still has huge IOPS (1M IOPS) with max DRAM but much lower IOPS, traded for the extra capacity possible with DRAM/Optane, and it is unclear that such a workload exists in sufficient numbers.
I'm surprised Apple didn't use this for an App Direct mode in their M1 Max SoC. If anyone could optimise Optane it's Apple. Sure, 7000MB/s 8TB storage. But it's still NAND flash.
OMG I had one! It was faster than a normal SSD, after that?! Is that NAND versus AND-based gates? It was faster but not cheaper. It might actually be a flip-flop. Hence it is... non-volatile. Wicked.
Patrick: love this video as I do all your videos. Question: you are the 1 in a million IT experts who would be able to answer this question off the top of your head: Have you encountered any systems that supported a fresh OS install to Optane DIMMs? I posted a similar question at another IT website reporting Micron's recent decision. What originally came to my mind was a re-design of triple channel chipsets, which allowed the third channel to host persistent Optane DIMMs for effectively running an entire OS in a ramdisk that is non-volatile. In other words, using Windows terminology, the C: system partition would exist on that Optane ramdisk. To implement this hybrid approach correctly, DRAM controllers would need to operate at different frequencies, so as to prevent the problem you described which down-clocks all DRAM to the same frequency as the Optane DIMMs. Yes, enhancements would also need to be added to a motherboard's BIOS, chiefly by adding something like a "Format RAM" feature which supports a fresh OS install, and subsequently detects if the OS is already installed in an Optane ramdisk running on that third channel. FYI: I filed a provisional patent application for such a "Format RAM" feature, many years ago, but that provisional application expired.
My comment at another IT User Forum, FYI: [begin quote] In the interest of scientific experimentation, if nothing else, I would like to have seen a few radical enhancements to standard server and workstation chipsets, to allow a fresh OS install to a ramdisk hosted by Optane DIMMs. Along these lines, one configuration that came to mind was those dated triple-channel motherboards: the third channel could be dedicated to such a persistent ramdisk, and the other 2 or 4 channels could be assigned to current quad-channel CPUs. The BIOS could be enhanced to permit very fast STARTUPs and RESTARTS, and of course a "Format RAM" feature would support fresh OS installs to Optane DIMMs installed in a third channel. By way of comparison, last year I migrated Windows 10 to a bootable Highpoint SSD7103 hosting a RAID-0 array of 4 x Samsung 970 EVO Plus M.2 NVMe SSDs. I recall measuring >11,690 MB/sec. READs with CDM. I continue to be amazed at how quickly that Windows 10 workstation does routine maintenance tasks, like a virus check of every discrete file in the C: system partition. p.s. Somewhere in my daily reading of PC-related news, I saw a Forum comment by an experienced User who did something similar -- by installing an OS in a VM. He reported the same extraordinary speed launching all tasks, no matter how large or small. [end quote]
@@ServeTheHomeVideo Aren't Optane NVMe SSDs limited to x4 PCIe lanes? I thought one major advantage of Optane DIMMs was their superior bandwidth i.e. parallel DIMM channels. M.2 and U.2 form factors are both x4. And, all of the Optane AICs I see at Newegg also use x4 edge connectors. Am I missing something important?
Should I be comparing Optane DIMMs with a RAID-0 array of 4 x Optane NVMe SSDs? If I could afford 4 x Optane M.2 NVMe SSDs, I would be able to compare them when installed in our Highpoint SSD7103. The latter RAID-0 array currently hosts 4 x Samsung 970 EVO Plus m.2 SSDs.
Many thanks for the expert direction, Patrick. I checked Highpoint's website, and their model SSD7505 supports 4 x M.2 @ Gen4 and it's also bootable, much like our SSD7103 which is booting Windows 10 AOK. The latest crop of Gen4 M.2 NVMe SSDs should offer persistence, extraordinary performance, and enormous capacity, even though they are not byte-addressable like Optane. As such, Optane M.2 SSDs are up against some stiff competition with the advent of Gen4 M.2 SSDs e.g. Sabrent, Corsair, Gigabyte and Samsung. The performance "gap" between DIMM slots and PCIe x16 slots should close even more with the advent of PCIe Gen5. From Highpoint's website, see: HighPoint SSD7505 PCIe Gen4 x16 4-Port M.2 NVMe RAID Controller Dedicated PCIe 4.0 x16 direct to CPU NVMe RAID Solution Truly Platform Independent RAID 0, 1, 1/0 and single-disk 4x M.2 NVMe PCIe 4.0 SSD’s PCIe Gen 3 Compatible Up to 32TB capacity per controller Low-Noise Hyper-Cooling Solution Integrated SSD TBW and temperature monitoring capability Bootable RAID Support for Windows and Linux
For some reason YouTube deleted my reply. It might be your name. It's an added layer of complexity that is there because hard drives were a major bottleneck because of their speed. Soon load times will be zero with the absence of RAM. Also the state of a PC will not be lost if there is a power outage or if it becomes unplugged.
@@ultraderek We have hibernation and it is quite error-prone. Also don't forget, Optane is slower and more expensive. Today's electronics shortage should give you a hint of how important price is.
@@ultraderek Speaking of YT censorship, know this: a lot of people reply to me, so if it deletes you, it shows the inequality of Google's censorship. Welcome to 2021!
This is SOOOOO complex... I may be completely misunderstanding this, but my short takeaway is that they're doing something proprietary that works only with certain CPUs, with the benefit being no better than using an NVMe disk for the persistent storage and leaving your DRAM going as fast as possible, because to take advantage of the persistence effectively anyway, the program running on the system must be specifically written to take advantage of it? (Which therefore limits the systems it will run on, since it won't work on non-Xeon systems.) Unless I'm seriously misunderstanding, I see this dying off like the "hard cards" of the '80s. That being said... wow... server tech has certainly come a long way...
Wow, I didn't know that Optane still existed in its original context, because I thought it had just turned into a name for the Intel SSD drives that I see in Best Buy once in a while lololol
It’s a shame Intel always ruin a technology with it’s insecure strategy. Should just open it up for platform independence, so people can use it anywhere like a hard drive. even with some licensing cost added, people lust over its unique property would gladly shelve out money
Excellent point! I never really understood why the first Optane M.2 SSDs only used an x2 interface, when almost every other M.2 SSD used an x4 interface. Also, pitching it as a "cache" limited its appeal even further. My speculations here may be worthless, but when first announced I was expecting a 2.5" Optane SSD to compete with other 2.5" nand flash SSDs that became widely available. The endurance and very low latencies were really something to brag about. The U.2 form factor never really took off for Prosumers, once motherboard manufacturers adopted the M.2 form factor + native RAID support for multiple M.2 slots. And, imho, the "4x4" add-in cards spelled even stiffer competition for existing Optane products. I also suspect that upper management must have experienced extra pressure to start recovering the high initial Optane R&D expenses, after the long delays before actual products were released. Lastly, many of the performance claims for Optane turned out to be very disappointing, as I recall.
EXACTLY! "like a hard drive" = great analogy. Imagine what would have happened to the PCI Express standard if Intel had locked it up with proprietary IP licensing restrictions! In economics, there is the property of "elasticity". If a 10% reduction in price results in a 20% increase in sales, then there is "elasticity" between price changes and corresponding changes in market demand. I still feel that Intel would have experienced much larger demand for Optane with lower MSRPs (see my other comments above, for more issues that arose with Intel's early Optane products). Similarly, the long wait for real products, and the glowing predictions that were not realized, left Prosumers without the WOW POWER they were expecting. That was a real shame: one of the reasons for Intel's historic success has been the market feedback they have received from very intelligent and vastly experienced people like Patrick here. Intel ignores people like Patrick at their great peril, imho. It's a great innovation; just a shame that Intel's marketing and engineering groups did not "sync up" more closely.
@@supremelawfirm yeah! Like display technology, NAND is like OLED with burn in problem. Spindisk is like LCD, just inherently bad. Now intel Optane is like an Apple_Owned micro-led that will not work with anything other than the stupid Apple eco system.
@@jmssun Ah, yes: then came "VROC". I will always remember the exquisite gesture which Allyn Malventano made during a PC Perspectives video: he held up that Intel "dongle" -- that was required for certain modern RAID modes to work at all -- and didn't say another word. He merely let his dongle dangle at eye level. The silence during that brief pause was so thick, you could cut it with a meat cleaver. I think it was only this morning that I finally read, for the first time, that Intel is now planning to increase the lane count from 4 to 8 on their DMI channels. Intel's failure to do so much sooner was arguably one of THE main reasons why "4x4" add-in cards by other vendors quickly became popular adapters for hosting RAID arrays using multiple M.2 NVMe drives. 4 x 4 = x16 = elegant symmetry Prosumers who had tried to configure fast RAID arrays downstream of current DMI links quickly hit a low ceiling a/k/a "MINNY HEADROOM" (the opposite of MAX HEADROOM). Hey, Intel: THE REAL ACTION IS OVER HERE ON ALL THESE EMPTY x16 PCIe SLOTS ... wired straight from these multi-core quad-channel MONSTER CPUs. DUUUUH!! And the realization was slow to dawn on Intel that dedicated I/O Processors were not needed any longer, with so many multiple CPU cores almost idling from lack of work to compute! By way of raw comparisons, in response to that dangled dongle AMD released free software for hosting very fast RAID arrays on multiples of these competing "4x4" add-in cards installed in Threadripper motherboards. And, ASRock responded to me THE SAME DAY when I requested directions for configuring a 4x4 card with one of their latest Threadripper motherboards. Just my 2 cents + an opportunity to vent. Hope nobody here is offended; no offenses intended.
Just wait until Gen-Z. Probably more like CXL 2.0 (maybe later) but that is where we get shelves of memory connected to fabric and shared across multiple accelerators/ CPUs.
Wait...so you're saying that intel cooked up a pretty cool technology that seemed to be implemented just so they could make more money, then a competitor came along and basically said 'No, do it our way, it's cheaper?' Where have I heard this before? *cough*ia64*cough*
This is the type of content that keeps bringing me back to your channel. Great original unbiased content! Awesome work
Optane would have been a killer feature, especially in the low-power and low core-count Xeons. Large data centers concerned with high density (buying specifically those extremely expensive CPUs and motherboards) can afford true DRAM that provides the reliable and consistent performance that their users demand.
If cheap (NAS-level) Xeons supported Optane, Intel would have very likely sold an average of 4 P-RAM DIMMS for every Xeon CPU they shipped out the factory doors. Considering that Intel does not sell DRAM, this would have only increased their adoption and profits.
But the sad reality is that in the last decade, Intel made a lot of questionable decisions and relied on a few moments of good luck (and bad luck for AMD) in their strategies. With Micron out, I expect Optane to fade into obscurity like their MIC architecture or their HEDT platform; obviously, without ever being as sought after or lusted over as the HEDT ecosystem.
Optane tends to be power hungry and Intel hasn't invested the development time to reduce the power draw... I can't remember where I read it, but IIRC each Optane DIMM adds about 4 watts at idle and double digits (14?) when active to the power drain... and the U.2 drives run hot...
VROC made sense for Intel to compete with HBA/RAID card providers, but Optane DIMMs - not so much. They should leave that one.
The QR codes on the DIMMs that are readable from the tray or even when the DIMM is installed are a really smart move
Those aren't QR Codes. Those are DataMatrix codes, a 2D barcode competitor to QR Codes. QR codes always have those "bullseye-like" markers, while DataMatrix codes have 2 solid line edges.
@@PanduPoluan still a really smart move if they have serial data or other info... Could make inventory interesting
not sure how many times I have to say this, but your tech updates are exceptional in so many ways
Great explainer, I especially loved the bios walkthrough. Sometimes there's no substitute for seeing something for yourself. Thanks!
Thanks Seth. That is something that there is not a lot on. I figured if I had not seen much of it, others will not have either. Have a super day.
Great explainer and insight.
Love STH's content and the neutral perspective.
Glad you enjoyed it!
Always interesting content Patrick! Thanks for sharing your knowledge on this stuff!
I can appreciate your statement about the gap between high level and low level info about this architecture so thanks for filling the gap a little.
I wish Micron would make more NVDIMMs for both Intel and AMD instead of supporting a system-specific standard. Being able to use DIMMs as a RAMDISK gives amazing performance and means you don't have to worry about write-back in the event of a power outage. Currently a RAMDISK works best with RAM plus NVMe to handle write-back safely, but with NVDIMMs you don't have to worry about the write-back at all. Just think of taking old servers running HDDs and swapping half the RAM with NVRAM and seeing a massive performance increase.
There is also a latency and lookup improvement. Stats below (sequential 1M Q8T1 and random 4K Q32T1, read/write in MB/s):
HDD SEQ1M Q8T1 131R 126WR
HDD RND4K Q32T1 2.26R 2.48WR
SSD SEQ1M Q8T1 548R 480WR
SSD RND4K Q32T1 228R 194WR
NVME SEQ1M Q8T1 4975R 4257WR
NVME RND4K Q32T1 606R 554WR
RAM SEQ1M Q8T1 28,706R 29,352WR
RAM RND4K Q32T1 723R 655WR
Hey, I saw you mentioned NVDIMMs. Could you tell me whether or not they actually keep the data in memory after a reboot? I've heard some really weird things about how they save their data, and I just want to hear it from someone who has the same idea of using NVDIMMs.
ABSOLUTELY AWESOME !!!
Congrats on 50K!
Thank you! It has been a long road.
Yea I just subscribed too, ex Telo Systems Engineer here.
@@ServeTheHomeVideo Patrick, you're clearly one of THE BEST currently on planet Earth. Sometimes I just enjoy watching how your mind grinds thru this highly technical stuff so effortlessly. Some people's lips are faster than their brains; your brains are much faster than your lips! LOL!! KEEP UP THE GOOD WORK and thanks for the valuable tip today.
No subtitles? How the hell can I watch this without waking my wife?
Intel really blew it with Optane. It was a decent technology, but they restricted it in weird ways, and didn't explain well when it should be used. Intel is lost at sea, and has imploded somewhat. They still make great chips though; extremely reliable.
And to think I was under the assumption it was just a hopped-up 'RAM drive' with more options....
Very detailed where I really needed it, keep up the great work!
Really good video, explains so much of the Optane stuff that isn't clear from Intel.
2016 Intel CEO meeting: We predict the future is fast memory, 3D, data-fast, yeah.
Engineering team: But what about our CPUs? 10nm? And GPUs?
CEO: Our i7 is already great, 4 cores for the customer, and only "nerd" gamers buy GPUs. Nobody would change that fact.
Ah yes... the doom of Intel.
Maybe they have something in development. Think about a game console done the old-fashioned way, where you just push in a module with the game on it, but with this technology: no boot time, the game starts immediately because it's already in memory, like on the Atari 2600.
Overconfidence tends to be Intel's greatest weakness, though their ability to shift quickly (for a corporation anyways) saves them often. The loss of Micron eventually did Optane in outright, sadly.
I wouldn't be boosting a single-sourced, "enterprise" technology where the vendor says "we have a plan [now that the only manufacturer quits the market]". Isn't "that's all we have" a red flag for anyone?
So due to the risk, you should de-risk your enterprises, and sell out of your inventory of this risky tech. Please. I would like to buy it cheap for my home PC.
Yeah. As sad as it is that there's probably not going to be any more usable upgrades to this, the idea of it becoming early legacy tech, falling out of the server market and into the used market, like with the X79-X99 Xeons... well, let's just say budget never looked so fast.
Better basic explanation than Intel ever did, but I still think it's a very niche, very expensive solution looking for a very niche problem to solve.
If they had pushed a simpler form to the consumer mainstream at a much lower price it might have been viable by now.
Wow, so many valuable details. Also thank you so much for your direct engineer to engineer communication style!
That bit about the 'L' CPUs is not too far off from what Intel did to effectively kill Optane for desktop use. ALL of the older systems that would have benefited from Optane as a drive cache for platter drives were excluded from Optane compatibility, and few people with M.2 drives would 'feel' the game load-time difference on the then-newer systems, making Optane effectively DOA for desktops.
I got up to 7:46 and still don't know why I would want to use Optane, whether I even could, what it actually is, its advantages, etc. So you are right - so few know, which probably explains why not many buy it.
It makes more sense to have pmem modules in U.2++ on CXL, but if Micron has no XPoint fab anymore, what would they connect to that bus? (They may just be speculating on being cashed out by Intel, most likely because they already have a better flash technology in the works...)
There are several new technologies coming, and they could always design a DRAM/ capacitor/ NAND solution for CXL
I watched this video to get more educated about Optane Memory. I was looking at a laptop to possibly buy online and they were offering Optane Memory at no additional cost. Not knowing anything about Optane, I just assumed it was faster memory! Well, after hearing your explanation and how you described how the speeds could be slower, I am still confused about this technology and if I would even want to have something like this in a laptop. When I purchased my laptop 8 years ago I purchased a hybrid model storage technology thinking it was better and faster only because it was more expensive. Turns out I should have called someone at HP and talked about this because I later learned it wasn't the fastest choice. I later purchased a 500GB SSD to replace the HDD and it now is much faster. Thanks again for your explanation.
For a laptop: Get the RAM you want and the SSD you want, skip Optane Memory. That is caching via a lower-cost NVMe SSD.
This could be a much longer response, but that is the advice I give everyone, and follow myself.
PMEM could have a billion different applications everywhere -- if not for Intel's insane insistence on keeping the bare chips unavailable.
Portable devices, IoT, opus, networking... there are so many places where persistent memory could be extremely successful without ever encroaching on Intel's exclusivity on the "use pmem on DDR4 sticks" idea.
They invested heavily for a long time to develop it, so I guess it makes sense that they are fighting hard to not let it become a commodity market component but rather try to recoup their costs in as many ways as possible. But if they cared about the technology itself generating the funds rather than the overall company bottom line, then I agree, they should sell chips as well. They are using lock-in to increase overall profit (which it probably does, and when you have a fiduciary duty to shareholders it's kind of a legal requirement) even though customers wish they didn't.
The problem is more in the complexity of the drive and controller itself. Compared to NAND or DRAM you need a much more sophisticated system to get it to work, in conjunction with a software overhaul that will take advantage of the benefits of the device. When you approach many companies about doing this sort of work for the marginal gain that the drive provides, they don't want to do it.
@@gulllars4620 Mora Fermi is right. Remember they aren't making a profit, the factory is losing $400M a year. It's all about scale. No scale == high cost. High cost == high price. High price == low demand. Low demand == no scale. No scale == no profit. A billion niche applications that value the technology IMHO probably would have enabled scale early on. If they care about the company bottom line... the current plan isn't doing a good job of protecting it as near as I can tell.
Wow, that was really interesting. I've never fully understood the implications of Optane until now, thanks a lot!
Do any prosumer/workstation platforms support Intel Optane DIMMs (even unofficially)?
Prosumer, not really. Even our W-3275 did not work. You can get workstations like the Precision T7920 www.servethehome.com/dell-precision-t7920-dual-intel-xeon-workstation-review/ and those support Optane PMem because they are basically servers in a tower chassis. William did that review, then I got to see the one we purchased for Dmitrij for his router/firewall testing, and I really wanted one solely for DCPMM support.
@@ServeTheHomeVideo which home/workstation software would benefit from this?
@@Ramoonus I imagine CAD and product design would benefit.
It seems Optane is trying to be too many things at once, when they could have made 4 or 5 products that just focused on one feature at a time. When someone has a problem, they want a solution, and people will always pick the simplest solution for their problem. If someone wants DRAM with persistence, I'd imagine they would pick a product that was just 'DRAM with persistence' rather than Optane, because Optane is too complex.
Imagine Intel made a box that could be a CPU, or a GPU, or even a TPU.... If I needed any one of those features, I wouldn't buy the Intel solution (no matter how awesome it might be) because I'm clearly paying for features I don't need, so I would buy the product that just gave me what I needed.
I see this in startups as well, people buy the simpler things that deliver what they need, and only when the customer is a larger enterprise are they more likely to buy the 'can do everything' product.
In my opinion Optane is too complex, and it needs to be a sharper, narrower, and split offering product.
BTW. I still wouldn't buy it, even after this great video, it still sounds too complex.
'Imagine Intel made a box that could be a CPU, or a GPU, or even a TPU...' You get an FPGA, and Intel does make those LOL. (I understand it is not what you mean, but I just have to point out this fun fact...)
However, combining features seems to be a trend now, just take M1 as an example. It is a CPU, GPU, memory, AI engine and other ASICs combined and people are buying it like crazy. And I think the reason behind that 'what I need' for many people nowadays is 'a bit of all'.
Yet I totally agree that Optane should be a sharper product. I would even go as far as saying that it should not be something that fit into a lot of existing product categories, it needs to be something of its own kind (I know Intel has been advertising this way but it is not). For example, how about a storage device that the system can directly boot 'into' without having to get the data out?
I'd be happy with NVMe 3D XPoint storage, especially since it would just work, unlike the Optane cache and DIMM variants that need a special platform to work.
I like the level of detail this guy goes into.
I think you hit the nail on the head with the timing. Optane just has too little of a use case and even less of a value proposition. Micron sees the writing on the wall and is bailing out.
Awesome video, even for us data center laymen.
CXL is the future. It is an open standard and will meet the price-to-performance targets that Optane was designed to meet. It is an interesting technology, but it always comes down to what you get for the $$$.
What do you run on PMEM and Optane SSDs? MySQL? As a filesystem? A detailed tutorial would be nice, especially with a hypervisor in the mix, mixed mode, and also apps which can actually use the pmem for things like a cache (Redis?) or a Ceph store with (mirrored?) transaction logs on pmem.
Graylog comes to my mind - coupled with regular backups to slower redundant media. They always say you should use the fastest disks possible so why not put it on Optane?^^ Having many machines log to graylog means ungodly amounts of disk IO
Databases benefit the most from it. You can store the metadata in the Optane and use it as both a write cache and a read cache, which speeds up your most commonly accessed data
@@jmlinden7 Well, yes, maybe. Which databases have you used with pmem? For other apps, after all, PMEM is slower than DRAM and riskier to use than flash. I haven't found good use cases yet; maybe ephemeral VM drives for a hypervisor, or a large Redis session cache (requires an Intel patch).
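For anyone wondering what "apps which can actually use the pmem" looks like in code, below is a minimal App Direct sketch using Intel's PMDK libpmem library (build with -lpmem). The /mnt/pmem path is a hypothetical DAX mount on an fsdax namespace backed by the Optane DIMMs, so treat this as an illustration of the programming model rather than a production recipe: persistence is just a load/store plus an explicit flush, with no block layer or page cache in the path.

```c
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    size_t mapped_len;
    int is_pmem;

    /* Map (creating if needed) a 4 KiB file on the hypothetical DAX mount. */
    char *addr = pmem_map_file("/mnt/pmem/hello", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary store into the mapped persistent memory... */
    strcpy(addr, "this string survives a reboot");

    /* ...followed by an explicit flush so it is actually durable. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);   /* flush CPU caches to the media */
    else
        pmem_msync(addr, mapped_len);     /* fallback when the mapping is not real pmem */

    pmem_unmap(addr, mapped_len);
    return 0;
}
```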
Thank you for this explainer.
I can't wait to see some benchmarks of the Optane P5800X SSD.
I agree. Not sure if we are under embargo for those right now so they are not shown.
Is the P5800X only available in the U.2 form factor, though?
After experiencing success with Highpoint's SSD7103, I really like the advantages of hosting such fast storage in a single PCIe x16 slot, particularly by going with the latest PCIe Gen4 add-in cards. Highpoint now has a bootable low-profile add-in card that hosts 2 x M.2 NVMe SSDs, but I think it's only Gen3. And the model SSD7540 supports 8 x Gen4 M.2 NVMe SSDs!
It took quite a while for Highpoint to make their AICs bootable, but I'm very glad I waited: the installation was a piece o' cake, e.g. by running the "Migrate OS" feature in Partition Wizard. Everything worked fine the first time I tried to boot from the SSD7103. The only possible "glitch" was the requirement to change a BIOS setting to support UEFI, but I knew about that requirement ahead of time.
@@supremelawfirm The Highpoint solution looks very interesting but at low queue depth nothing beats Optane. Regarding the P5800X, it comes in both U.2 and E1.S form factors. It supports PCIe 4x4. The capacity goes from 400GB to 3.2TB.
@@Patrick73787 For clarity - do all the current Optane U.2 models support both PCIe Gen3 x4 and Gen4 x4 interfaces, running at whatever PCIe generation the connected host adapter card or motherboard provides? This is something that isn't very clear.... FWIW, I think there's a missing piece in the Optane story: the possibility of a second on-chip memory controller, so that another set of DIMM slots could sit on the motherboard and be used as a super fast SSD - think table indexes for a DB, kernel swap space, memory-mapped files, or a scratch disk for intermediate app results, all without impact on the main DRAM... in short, an early prototype CXL memory system... Bottom line though, I agree with you that Intel is using/abusing its IP and sacrificing Optane to support their current subpar CPUs... no wonder Micron wants out, as it can't expand the Optane market enough to become profitable.
@@danwolfe8954 My Optane 905P SSD is a U.2 PCIe 3.0 x4 drive. The P5800X is the PCIe 4.0 x4 successor to both the P4800X and the 905P. The consumer variants like the 905P came in both U.2 and add-in PCIe card form factors, while the P5800X is aimed at data centers only, in U.2 and E1.S form factors. Every Optane SSD uses 4 PCIe lanes like regular NVMe drives.
The CXL protocol comes from Intel and requires at least 16 PCIe 5.0 lanes to work in non-degraded mode. CXL will be first utilized on Sapphire Rapids CPUs. Those CPUs will also be paired with 3rd gen Optane DIMMs as mentioned in this video. Let's wait and see how Optane will fare in the CXL ecosystem.
Ah.... glad to have watched this AWESOME video - learned a great deal from your videos!
Wondering if I understand this right:
1) Memory Mode: run lots of VMs / containers.
2) Persistence / "ramdisk" mode: for apps that can use the APIs. Perhaps use as a cache for a NAS?
3) Mixed mode: balance between memory-intensive and storage-intensive services - VMs & NAS.
I know it might be too soon now, but I definitely want to see more about CXL and its impact on things like TrueNAS, and even on things like desktop gaming systems if system RAM and graphics RAM run together, because right now video RAM runs on incredibly different standards (like GDDR6X vs 6/5, differing bus speeds, etc.).
Probably some time until you really see a CXL impact on desktops. It will happen, but this is being driven by hyperscale data centers. Eventually, the idea is that you can build huge systems and share resources to get cost benefits. Desktops are limited a bit by wall power; there is only so big you can make a system within 1.4-1.6kW. Data center GPUs are already at 500W (A100 80GB SXM4) and going much higher.
Still, it will trickle down eventually.
I am using a 900p in my desktop, because I don't like the unreliability of SSD caches. And SLC lasts forever. Most SSD benchmarks don't write more than 1GB, which is conveniently the most common SSD cache size.
Thanks for this.
What I'd really like to see is a way of being able to put SRAM into DIMM slots. At scale, it should only cost about 6x as much, but provide one of the biggest performance improvements possible.
Terrible idea, actually. 6x the cost for an advantage that will be eaten by caches anyway. The market is even turning the opposite way - eDRAM (DRAM on a die right next to the CPU die within the same substrate).
Intel is currently buying most if not all of their Optane 3D XPoint memory from the Micron fab in Lehi, Utah. Intel's fab in Albuquerque is partially up, but in the same stage as Micron's with low output. Micron is not selling the technology, just the building and possibly some of the equipment. I work at the Lehi site and this announcement was like getting kicked. We worked our butts off developing this technology, and being told we are getting sold along with the building really sucks.
That is a bummer. This is truly great tech before others are to market.
What technology
It really did suck didn't it
I strongly agree with CXL displacing many of the extra DDR channels. The days of huge pincount may be behind us
LGA 6096: I'm about to end this man's whole career.
Meanwhile, I'm over here using a 16GB Intel Optane "Memory" module (the NVMe drive, not one of the fancy DIMMs) as swap space on my tiny home NAS, because it was a $7 impulse buy :P
26:57 I literally thought I started another video!
Ha!
How can I best benchmark or measure my memory bandwidth usage in Linux? `iostat` seems good.
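Worth noting that `iostat` reports block-device I/O rather than memory traffic; the usual tools for memory bandwidth are the STREAM benchmark or Intel's Memory Latency Checker. If you just want a rough number, a crude sketch like the following (timing a large memcpy in C, with all the usual caveats about caches, NUMA placement, and compiler flags) gives a ballpark figure:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    size_t n = (size_t)1 << 30;            /* 1 GiB per buffer */
    int reps = 10;
    char *src = malloc(n), *dst = malloc(n);
    if (!src || !dst) return 1;
    memset(src, 1, n);                      /* touch pages so they are resident */
    memset(dst, 0, n);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < reps; i++)
        memcpy(dst, src, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* Each memcpy reads n bytes and writes n bytes, so 2*n bytes move per rep.
     * Printing a destination byte keeps the copies observable. */
    printf("approx. %.1f GB/s (check byte: %d)\n",
           (double)reps * 2.0 * (double)n / secs / 1e9, dst[0]);
    free(src);
    free(dst);
    return 0;
}
```

Run it a few times and take the best result; the number will move around with core pinning and memory placement.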
This is great stuff, thank you for the content and have an awesome day too!
Really enjoyed this overview of PMEM 100/200. Also it opens my mind into thinking of delaying going to PowerEdge PCIe Gen4 Server and PMEM 200 into waiting for PowerEdge PCIe Gen 5 in 2023-2024. Why because the one thing you didn't bring out about the downside of using the older 2017 PCIe Gen 3 Servers stuffed with PMEM 100 at really cheap prices for cool technology is about Server hot swap of GPU's, NIC's. NVMe Gen5 SSD's and other failure prone devices directly from the hot swap bay on the front of the upcoming new PCIe Gen5 Servers. What do you think about that comment, does that sound accurate?
High-end GPUs are not going to E3.S in this generation, so they will not be swappable. NICs are more OCP NIC 3.0 form factors. You are right, being in the memory channels is a big downside of the technology.
@@ServeTheHomeVideo Thanks for your constant stream of valuable input. Can't wait to see your episode on the cheapest two-socket PCIe Gen5 Windows Server 2022 box that will accept Kioxia PM7 SSDs.
Do Optane DIMMs have less bandwidth than traditional RAM at a fixed clock speed?
Perhaps the bigger challenge is that they are higher latency than DRAM.
@@ServeTheHomeVideo oh yeah I would expect that, thank you for your time anyway!
So if App Direct mode is optimized and good for SAP, would Optane App Direct mode also be valuable for Tableau?
Love your presentations on your channel! @7:39 you have as total the "clock x speed" but a better number would have been gigabytes/sec, so we can directly compare to the bus/network needs. Also I believe it's spelled "Deficit"
Jay! You are totally right. Deficit was an error that got corrected in the main site version but apparently did not get into the video portion. Oh well.
On the clock * speed this was meant to be a high-level piece so it was left as this just to be a conceptual model that was easier to consume. You are totally right that GB/s would be better, but was just trying to get the higher-level relative percentage out there for folks. GB/s was one more conversion that would need to happen.
@@ServeTheHomeVideo Patrick! Thanks for getting back to me, shows how hard you work for your channel!
Intel recommends a 4-to-1 ratio of DRAM to Optane; that means giving up roughly 25% of RAM performance to Optane = ~50 of ~200 GB/s for 8-channel DDR4-3200.
PCIe 4.0 = ~2 GB/s per lane, so you would need ~25 lanes to match that without giving up RAM performance - guess that's why Micron let it go.
That's why I always use GB/s, so it won't matter whether we talk OMI, FBDIMM, DDR5, etc.
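To make the arithmetic above explicit, here is a quick back-of-envelope check. Assumptions (rough figures from the comment, not measurements): 8 channels of DDR4-3200 with a 64-bit data bus per channel, about 25% of that bandwidth handed to PMem, and roughly 2 GB/s usable per PCIe 4.0 lane.

```c
/* Back-of-envelope check of the DRAM-vs-PCIe numbers above. Arithmetic only. */
#include <stdio.h>

int main(void) {
    double per_channel_gbs = 3200e6 * 8 / 1e9;   /* 3200 MT/s * 8 bytes = 25.6 GB/s */
    double total_gbs = per_channel_gbs * 8.0;    /* 8 channels ~= 204.8 GB/s */
    double pmem_share = total_gbs * 0.25;        /* ~25% given to Optane ~= 51 GB/s */
    double pcie4_lane_gbs = 2.0;                 /* assumed usable per PCIe 4.0 lane */

    printf("8-channel DDR4-3200:          %.1f GB/s\n", total_gbs);
    printf("Bandwidth given up to PMem:   %.1f GB/s\n", pmem_share);
    printf("PCIe 4.0 lanes to match that: %.1f\n", pmem_share / pcie4_lane_gbs);
    return 0;
}
```

Which lands at roughly 205 GB/s total, ~51 GB/s handed to PMem, and about 25-26 PCIe 4.0 lanes to carry the same traffic, in line with the comment's figures.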
Hello Patrick, how are you doing?
Thank you for sharing your insights on cutting edge technology. It is in many ways a glimpse into the future. I sure hope that tech like this gets to be available for consumers too one day, it seems like a win-win situation.
"...If you really want CXL to take off, it needs to get into other sockets at this point other than just Intel and became an industry solution...", 22:40. Yes and no. It can become an industry solution by diverting from Intel if these became laggards. From new OCP-driven PCIe, DIMM PHYs and internal controller architecture, kernel scripting, memory and CPU compute reallocations and emergent ARM SoCs/ wafer designs; reimagining the silicon. My opinion we are at this point in the industry.
As a prosumer, I love Optane in any form factor; I have been using Optane (M.2) as a cache drive in my file servers since it was introduced. At first I added it to my Dell C2100, then I built a backup server using a Ryzen 5 Pro 4650G, and those two 100GB Optane drives keep that slow SAS2 controller seeming very snappy. Sure, they're not "supported" on anything other than Intel..... 7th gen and later? But I've had no issues with it on AMD and older Intel; sure, it doesn't work the way Intel wants it to work unless you have 7th/8th gen, but most server OSes can have storage tiers for caching and use them in much the same way.
I would like to see someone bring PMEM to the consumer market. I'd be more than happy with 1 channel of 64GB DDR4-2400 and 1 channel of PMEM if I could get 200GB of PMEM (in mixed persistent mode) for the same price as 64GB of DDR4.
Also a prosumer. And I just want to see Optane as the next-gen product that eliminates loading screens......
@@andyhu9542 Ideally, I'd like triple-channel memory controllers on consumer desktops, designed to work with 3 traditional channels, or 2 + 1 PMEM channel, but with the ability to go all PMEM. AMD at least will need triple-channel memory for their upcoming RDNA2 APUs, and if they go through with the rumored 16 compute unit APU, then 3 channels probably won't be enough, even with DDR5-5500. Look at the 5500 XT, a low-end GPU with only 22 compute units, but it basically has the bandwidth of 7 channels of DDR4-4000 to keep those compute units fed.
I don't recall offhand, but IIRC PMEM is about as fast as DDR2. If AMD were to, say, have a huge TSV cache that covered the whole die, instead of just sitting on top of the existing cache on die, then maybe the performance hit wouldn't be as bad for going from 2 channels of DDR4-3200 down to 3 channels of basically DDR2-667 (probably slower). Heck, having 400+MB of total cache on an APU might make RDNA2 usable at only 2 channels of DDR5-6400.
For reference, Valve engineers decided that 2 channels of DDR5-5500 would not be enough for that low-end APU, so they gave it a whopping 4 channels, though really those are 32-bit channels, whereas a DDR4 channel I believe is 64 bits of data plus 8 of ECC.
Super informative. Thanks
I would like to see someone rigging up some DDR4 Optane to an OMI CPU with one of those adapters.
POWER10 comes to mind.
That won't drag the system throughput down, except that some memory channels are now for Optane, not DRAM.
I would love to use Optane SSDs for my CFD simulations, but the markup is too high compared to regular SSDs. If it was, say, 1.5x the $/gig of an MLC SSD I would buy it, but not like this...
Yeah, it is a bit of a shame about the pricing. Earlier on it was only around 4x the cost for a while, but since NAND pricing has significantly dropped while Optane pricing hasn't really moved (and has even increased slightly in some markets for some products), it seems to be around 10x now.
If you just need high write endurance, get a Micron 9300 MAX, they are extremely durable, 18 PB of writes on the 3 TB model. A pair of them in RAID 0, if you need more speed.
If you need no such endurance, there are many more good SSDs on the market, some of them are much faster.
@@TheBackyardChemist i am using 970 pros as scratch drives, my simulations can create around 200GB of data per day, but thats not always the case. just have to make sure everything is backed up and be ready to buy a new one when one of them dies...
something like the 9300max would make sense but i cannot justify the straight up costs of it :( (i mean, i could, if i had the capita)
@@Tyrim lol we had a 6 core Haswell E box generate 100 TB of writes in like 2 months. Ended up using HDDs in RAID0 until the 9300 max came out
@@Tyrim They make SATA high-endurance drives too: the Micron 5100/5200/5300 MAX.
CXL and CCIX will ultimately bridge the road to non-x86 alternatives. I can't wait.
An adapter (probably needing an integrated controller) so you could get a couple of Optane DIMMs in a PCIe slot to show up as a normal NVMe SSD would be a pretty handy thing once Optane starts hitting eBay in volume. Kind of like the RAM drives of old, but more usable.
Also, it seems like Intel needs to work hard on DIMM capacity - if they could push that up, then a stick of Optane and a stick of RAM would retake the capacity crown over two sticks of RAM. It's a lot easier to turn a profit when you're offering something no one else can, rather than offering a budget alternative to RAM.
> a couple of optane DIMMs in a PCIe slot to show up as a normal NVMe SSD
That's exactly what Highpoint and other vendors of PCIe expansion cards are now offering aka "4x4" add-in cards ("AICs"). There is also a Highpoint "2x4" AIC for low-profile chassis.
It took Highpoint a while to make their AICs bootable, but that feature is now standard on some of their AICs.
We've had much success for a whole year already, using Highpoint's SSD7103 to host a RAID-0 array of 4 x Samsung 970 EVO Plus M.2 SSDs.
Their latest products now support PCIe Gen4 speeds.
p.s. I'm a big fan of Highpoint's hardware, because it works and it's very reliable.
It's their documentation that needed lots of attention, and hopefully that is getting better with time.
@@supremelawfirm That's just M.2 to PCIe - I'm talking about DIMM slots, so it can use optane DIMMs rather than optane in M.2 form (or other SSDs)
@@maxhammick948 Thanks! I misunderstood your point above. I appreciate the clarification.
Also, I seem to remember a much older product aka "DDRdrive X1" , that held 2 or 4 DDR DIMMs in a PCIe add-in card. I don't believe it found much of a market, however.
Found it here: ddrdrive.com
NOTE WELL the short edge connector (looks like x1), which necessarily limits its raw bandwidth upstream.
If Optane DIMMs currently max out at DDR4-2666 - as shown by Patrick in this video - then the calculations I did below seem to favor -- by quite a big margin -- a 4x4 add-in card with 4 x Gen4 M.2 NVMe SSDs:
DDR4-2666 x 8 bytes = 21,328 MB/second raw bandwidth
16G / 8.125 x 16 lanes = 31,507 MB/second raw bandwidth
4x4 = x16 = elegant symmetry
(PCIe 3.0 and 4.0 use 128b/130b encoding: 16 bytes of payload plus a 2-bit sync header = 130 bits / 16 bytes = 8.125 bits per byte transmitted, hence the divisor above.)
I would choose a Western Digital Black SN850, partly because I prefer having a DRAM cache in each RAID-0 array member: 4 x DRAM cache @ 250MB = 1GB combined cache in RAID-0 mode.
With PCIe doubling the clock speed at every generation, it appears to be catching and surpassing modern DDR4 DIMMs in raw bandwidth.
p.s. I ran these numbers today, because for decades I have believed that DRAM was THE fastest memory available. Now, 4x4 AICs are proving to be a real challenge to that belief.
Does Skylake (the 8124M) support Optane with the Intel 610 chipset? Is it really more durable than NAND, given that memory operations are a thousand times more frequent than disk operations?
But how does it mine chia in region mode?
Newbie here: Is there also a mode where the memory bandwidth is increased at the cost of lower available storage (similar to RAID 0)? It seems that 8 instead of 4 memory channels gives (only) a 10-20% boost for a single CPU, but I can imagine a lot of options there, too?
What if you had these set up in a 4-socket server (or cluster) with 100Gb networking and used it as a SAN for a VM cluster?
The problem with Optane DIMMs is not the performance, nor their non-volatile nature.
The problem is that most enterprises use virtualization systems.
Say you run a VMware cluster of four dual-socket Xeons powering about 50 servers, where two of those are running a production database (i.e. not development or test systems). You need to give all four hypervisors access to the Optane storage... and at this point things get complicated.
First, you need to sacrifice a memory channel for Optane on each server, which (a) limits the amount of memory your hypervisors have access to, and (b) sacrifices the performance of the memory they do have access to.
So basically you are impeding the performance of 48 VMs just by installing Optane; it's not even being used yet.
And now what?
You've got four hypervisors with, say, 1TB of Optane storage each... do you use it locally? If a machine dies, all the VMs get migrated to other servers - except the production database running on your local Optane disk, the one that needed storage so big and so fast that Optane was bought for it in the first place. That thing is now offline?
Or you use vSAN with your Optane drives to share them across your VMware cluster, but this introduces overhead and latency, and the high storage IOPS will be no faster than your bloody Ethernet allows...
Optane in this state is dead in most use cases.
You need to pair it with a storage server solution - something that runs low-latency connections to the hypervisors and has failover capabilities - or with vSAN, if your servers have enough I/O to handle very low latency RDMA-capable Ethernet/InfiniBand connections.
But guess what high-bandwidth, low-latency looks like?
Correct:
High-speed Ethernet (40-100Gbit),
RDMA support on those NICs,
and a ton of fast storage drives in the system... something NVMe SSDs already deliver for a lot less money...
The vSAN application was something I looked into in our environment... and we got a little shafted by Fujitsu/Intel with our servers.
First, they do not have much PCIe capability at all... and now the fun part: for some reason all the PCIe x8 and x16 slots share the same interrupt... very neat if you want something to be fast...
Outside a virtual environment Optane is fast, but the non-volatile nature of the storage stands in contrast to the potential failure of the host... Oh, and by "failure of the host" I don't mean it has to explode to cause a problem... a network error, a temporary bug of some sort, anything disconnecting the thing from its network will be enough to get you in trouble... no pyrotechnics needed...
The interconnect works great if the nodes are FPGAs which interface with each other with timing and coherent crosspoint switches in the backplane.
Optane seemed to start out at 90nm -- a cautious start. Wouldn't it be better at 3nm?? Is my memory correct on this?
So it's virtual memory that's fast enough that you stop avoiding it?
What would the tradeoff look like between having PMEM in Memory Mode and having (a few, let's say 4) P5800X drives dedicated to Linux swap partitions? If most of the data in RAM is indeed cold or lukewarm, then maybe even that would be an acceptable compromise? And this would of course work in an EPYC system.
ScaleMP and Intel had a demo of this before the PMem modules came out
I'm legit curious about the fundamental physics behind Optane/XPoint memory... If I recall some of the early documentation that I read, the way they described the technology sounded a lot like memristor technology, and honestly sounds a lot like Hewlett Packard's memristor/crossbar concept from 2007. HP abandoned it years later, due to difficulties manufacturing it. Right around that time, Intel came up with Optane.
The memristance effect was theorized in 1976, and some anomalous measurements as far back as the 1800s might be attributable to the effect, but were not recognized at the time. There are a few hobbyists, including a YouTuber who has published videos on the memristance effect, but it wasn't till HP announced in 2007 that they had been working on a prototype memristor that interest really took off. The basic concept of memristance is that the resistance of a memristor changes with the flow of current: flow in one polarity increases resistance, and flow in the opposing polarity decreases resistance. The changes made to the material are non-volatile and highly durable. Reading a value alters the stored value unless an opposing current is also applied to counter the read current.
What always got me with the memristor concepts, was that HP had conceived a way to not just use it as non-volatile, high durability memory, but also figured out how to perform massively parallel processing _in the crossbar array itself,_ effectively merging the memory with the processing unit. They had even come up with a way to use the Crossbar system to create neural networks. In later years, they pulled back from the more elaborate concepts, and settled on just trying to market it as the heart of what they dubbed "The Machine", which was just a server with massive amounts of encrypted non-volatile memristor RAM, that served as both working and storage memory. Really, no different than what Optane is now.
What's curious is that the initial descriptions of Optane (the ones that I saw) really made it look like Intel was just doing memristor memory, though Intel has, as far as I'm aware, always denied that Optane had anything to do with the memristance effect.
3D XPoint is a phase-change memory. Its resistance/memristance is not polarity related. I know specifically how the memory cell is read and written, but it's sufficient to say that the resistance of each cell is bistable, dependent upon a phase-change sequence.
It is also bit addressable/readable/writable due to the cross hatch.
Indeed, it was disappointing when Micron announced their mothballing/termination of 3DXP and the sale of that fab.
Forget pmem. With current high and ever-increasing DIMM capacity/speed, plus NVMe Gen4 / 24G SAS drives with low latency, high capacity, and increasing endurance, I find it hard to justify this technology and its complexity, which essentially requires software engineers to write for it. Optane SSDs may have their place, though, but only as specialized endurance drives --- maybe --- for now anyway.
Great stuff
Could/do Optane DIMMs benefit from ECC?
Yes
I can't find Optane DIMM power figures for the memory slot power rail. I have heard about the energy in the structure on writes and efficient power on reads, but what is the difference versus DRAM? Intel and other sources will talk about SSD read and write power, but what about the DIMMs? mb
Interesting look back in time to the current Apache/Barlow/Cooper . . . mb
Thousands of dollars more for XCL+r? You've got to be kidding me. $400 a pop for %-grade SKU split mirroring coming out of the fab in 1M unit orders, if you accept Intel's terms to use what you need and become a hyperscale or OEM broker-dealer for what you don't want and will sell to others, less the Intel Inside NRE allowance that of course you pocket for yourself, I mean your employer - well, everyone knows what I mean. mb
Is there any performance impact beyond lowered clock speeds if I want to toss, say, 2x PMem100 DIMMs into an existing optimally-configured 6-channel Skylake/Cascade Lake system? Like lowered peak bandwidth? Increased DRAM latency? Or does it need to be balanced, like RAM, so minimum 6x Pmem DIMMs for a Skylake/Cascade Lake platform?
There is an impact. With interleaving, going to 6x per socket means you have more modules interleaved than 2x. Also, memory/ DIMM population was a several minute segment that was pulled out of this already very long video. That is another area of high complexity.
Can we use Optane DIMMs as normal DDR4 ram?
No. We go into why in the video
I honestly think Micron realized that the technology wasn't catching on and they needed to exit the market before their losses ended up accruing with no profit in sight. I mean, if their customers were asking for 3d xpoint technology and they were able to ship volume, then I'm sure they would have decided not to exit the market. But I see Micron going into cxl technology because their customers are going to need memory that can communicate at different levels of their system. I can see cxl being useful for machine learning and other applications that have mixed workloads.
Is Facebook actually utilising Optane in their Cooper Lake systems?
Facebook has several Cooper platforms.
@@ServeTheHomeVideo Development platforms - are they really production? mb
Is it just me, or do the Optane DIMMs in blue remind you of the old DDR2 heat spreaders before they became a thing?
The black ones remind me of several DDR3 heat spreaders we had on DIMMs and are constructed in a similar manner
@@ServeTheHomeVideo Knew it, I have some on the ECC RAM in my dual Opteron test bed lol.
Loving the channel. You get all the great gear I'd love to tinker with. I actually have the 905P 380GB M.2 and the 900P 280GB.
I'm about to move to AMD with a 5950X; I'm hoping I can utilize Optane as a cache at the very least.
@@shadowarez1337 I use an Optane SSD on Ryzen - I don't know about using it as a cache though; I know LTT has a video on it.
@@ryanwallace983 I have PrimoCache for the software side since it's been the default for all my rigs. Thank you, I'll give it a shot once I get the 2TB Sabrent Rocket Plus in the mail.
So it works like swap, but managed by the CPU, not the OS?
How much are the cost savings with Optane DIMMs, actually? The last time I looked at the larger Optane SSDs, they cost almost as much as the same amount of ECC FB-DIMMs, so my assumption with the Optane DIMMs is that, unless they're substantially cheaper than those large Optane SSDs, you might as well just use regular RAM in the server instead of Optane.
It seems to me that there are a bunch of ways that Intel could have sold Optane with lower bars to entry, but always chose to go with the approach that maximized the implementation complexity and cost. For example, Optane could work great in the consumer space as a transparent caching layer on an SSD. Pair a bunch of QLC with Optane and present it to the host system as a single drive, with the SSD controller handling the management. That could be neat, right? QLC cost with Optane performance? Well, Intel tried this, only the "integrated" products exposed the single SSD as two different drives (one Optane and one NAND) to the host system, and required the user to set up caching software to leverage it. Then they tried selling tiny Optane-only SSDs for caching, but they inexplicably locked them to certain Intel chipsets and still required external software to leverage them. Even Intel CPU owners were locked out if they had the "wrong" chipset. Talk about shooting yourself in the foot!
We are saving around $3.5k/ server with the Optane DIMMs mentioned in this video. That is ballpark 20% savings on the server.
@@ServeTheHomeVideo for what Optane capacity versus what DDR4 and SSD capacity?
Would be good for vast associative memory AI462 neural networks
In storage, Optane has great (low) latency compared to most NAND, which has large page and block sizes for cost reasons. Samsung (and one other vendor?) makes a small page/block NAND whose latency is probably good enough in relation to Optane. As memory, if an operating system really used persistent memory, then it could have had some interesting uses, but this would take time to develop. Otherwise, the use case is a system that with max DRAM still drives huge IOPS (1M IOPS) but would need much lower IOPS given the extra capacity possible with DRAM/Optane, and it is unclear that such a workload exists in sufficient numbers.
I'm surprised Apple didn't use this for an App Direct mode in their M1 Max SoC. If anyone could optimise Optane it's Apple. Sure, 7000MB/s 8TB storage. But it's still NAND flash.
Power is too high for that. My M1 Max notebook can chew through battery already.
@@ServeTheHomeVideo I've only used a Lenovo Legion 7 with the 160W RTX 3080, so I have no sense of perspective on power consumption 😉
OMG I had one! It was faster than a normal SSD after that?! Is that NAND versus AND-based gates? It was faster but not cheaper. It might actually be a flip-flop. Hence it is...... non-volatile. Wicked.
Patrick: love this video as I do all your videos.
Question: you are the one-in-a-million IT expert who would be able to answer this question off the top of your head: have you encountered any systems that supported a fresh OS install to Optane DIMMs? I posted a similar question at another IT website reporting Micron's recent decision.
What originally came to my mind was a re-design of triple channel chipsets, which allowed the third channel to host persistent Optane DIMMs for effectively running an entire OS in a ramdisk that is non-volatile.
In other words, using Windows terminology, the C: system partition would exist on that Optane ramdisk.
To implement this hybrid approach correctly, DRAM controllers would need to operate at different frequencies, so as to prevent the problem you described which down-clocks all DRAM to the same frequency as the Optane DIMMs.
Yes, enhancements would also need to be added to a motherboard's BIOS, chiefly by adding something like a "Format RAM" feature which supports a fresh OS install, and subsequently detects if the OS is already installed in an Optane ramdisk running on that third channel.
FYI: I filed a provisional patent application for such a "Format RAM" feature, many years ago, but that provisional application expired.
My comment at another IT User Forum, FYI:
[begin quote]
In the interest of scientific experimentation, if nothing else, I would like to have seen a few radical enhancements to standard server and workstation chipsets, to allow a fresh OS install to a ramdisk hosted by Optane DIMMs.
Along these lines, one configuration that came to mind was those dated triple-channel motherboards: the third channel could be dedicated to such a persistent ramdisk, and the other 2 or 4 channels could be assigned to current quad-channel CPUs.
The BIOS could be enhanced to permit very fast STARTUPs and RESTARTS, and of course a "Format RAM" feature would support fresh OS installs to Optane DIMMs installed in a third channel.
By way of comparison, last year I migrated Windows 10 to a bootable Highpoint SSD7103 hosting a RAID-0 array of 4 x Samsung 970 EVO Plus M.2 NVMe SSDs.
I recall measuring >11,690 MB/sec. READs with CDM. I continue to be amazed at how quickly that Windows 10 workstation does routine maintenance tasks, like a virus check of every discrete file in the C: system partition.
p.s. Somewhere in my daily reading of PC-related news, I saw a Forum comment by an experienced User who did something similar -- by installing an OS in a VM. He reported the same extraordinary speed launching all tasks, no matter how large or small.
[end quote]
That would be an odd architecture. Using a high-value DIMM slot/ slots for a low-value OS drive. It would be easier to just use an Optane NVMe SSD.
@@ServeTheHomeVideo Aren't Optane NVMe SSDs limited to x4 PCIe lanes? I thought one major advantage of Optane DIMMs was their superior bandwidth i.e. parallel DIMM channels. M.2 and U.2 form factors are both x4. And, all of the Optane AICs I see at Newegg also use x4 edge connectors. Am I missing something important?
Should I be comparing Optane DIMMs with a RAID-0 array of 4 x Optane NVMe SSDs?
If I could afford 4 x Optane M.2 NVMe SSDs, I would be able to compare them when installed in our Highpoint SSD7103. The latter RAID-0 array currently hosts 4 x Samsung 970 EVO Plus m.2 SSDs.
Many thanks for the expert direction, Patrick.
I checked Highpoint's website, and their model SSD7505 supports 4 x M.2 @ Gen4 and it's also bootable, much like our SSD7103 which is booting Windows 10 AOK.
The latest crop of Gen4 M.2 NVMe SSDs should offer persistence, extraordinary performance, and enormous capacity, even though they are not byte-addressable like Optane.
As such, Optane M.2 SSDs are up against some stiff competition with the advent of Gen4 M.2 SSDs e.g. Sabrent, Corsair, Gigabyte and Samsung.
The performance "gap" between DIMM slots and PCIe x16 slots should close even more with the advent of PCIe Gen5.
From Highpoint's website, see:
HighPoint SSD7505 PCIe Gen4 x16 4-Port M.2 NVMe RAID Controller
Dedicated PCIe 4.0 x16 direct to CPU NVMe RAID Solution
Truly Platform Independent
RAID 0, 1, 1/0 and single-disk
4x M.2 NVMe PCIe 4.0 SSD’s
PCIe Gen 3 Compatible
Up to 32TB capacity per controller
Low-Noise Hyper-Cooling Solution
Integrated SSD TBW and temperature monitoring capability
Bootable RAID Support for Windows and Linux
Use it as a Linux swap drive?
3:45 I love me a NAND-based SSD
I think Intel’s direction on memory is the way to go. If we could get rid of GDDR and DDR life would be amazing.
What is wrong with DDR?
For some reason YouTube deleted my reply. It might be your name. It's an added layer of complexity that is there because hard drives were a major bottleneck because of their speed. Soon load times will be zero with the absence of RAM. Also, the state of a PC will not be lost if there is a power outage or if it becomes unplugged.
@@ultraderek We have hibernation and it is quite error-prone. Also don't forget, Optane is slower and more expensive. Today's electronics shortage should hint at how important price is.
@@ultraderek Speaking of YT censorship, know this: a lot of people reply to me, so if it deletes you, that means inequality in Google's censorship. Welcome to 2021!
This is SOOOOO complex. I may be completely misunderstanding this, but my short takeaway is that they're doing something proprietary that works only with certain CPUs, with the benefit being no better than using an NVMe disk for the persistent storage and leaving your DRAM going as fast as possible - because to take advantage of the persistence effectively, the program running on the system must be specifically written for it (which further limits the systems it will run on, since it won't work on non-Xeon systems). Unless I'm seriously misunderstanding, I see this dying off like the "hard cards" of the '80s. That being said... wow, server tech has certainly come a long way...
Will it work with Ryzen? What about sequential memory scores?
Intel locked it down to the Intel chipsets.
Intel dragging everyone down.
Wish the SSDs would have dropped in price; they are great but super expensive.
Wow, I didn't know that Optane still existed in its original context, because I thought it had just turned into a name for the Intel SSDs that I see in Best Buy once in a while lololol
It's a shame Intel always ruins a technology with its insecure strategy. They should just open it up for platform independence, so people can use it anywhere like a hard drive. Even with some licensing cost added, people lusting over its unique properties would gladly shell out money.
Excellent point! I never really understood why the first Optane M.2 SSDs only used an x2 interface, when almost every other M.2 SSD used an x4 interface.
Also, pitching it as a "cache" limited its appeal even further.
My speculations here may be worthless, but when first announced I was expecting a 2.5" Optane SSD to compete with other 2.5" nand flash SSDs that became widely available.
The endurance and very low latencies were really something to brag about.
The U.2 form factor never really took off for Prosumers, once motherboard manufacturers adopted the M.2 form factor + native RAID support for multiple M.2 slots.
And, imho, the "4x4" add-in cards spelled even stiffer competition for existing Optane products.
I also suspect that upper management must have experienced extra pressure to start recovering the high initial Optane R&D expenses, after the long delays before actual products were released.
Lastly, many of the performance claims for Optane turned out to be very disappointing, as I recall.
EXACTLY! "like a hard drive" = great analogy. Imagine what would have happened to the PCI Express standard if Intel had locked it up with proprietary IP licensing restrictions!
In economics, there is the property of "elasticity". If a 10% reduction in price results in a 20% increase in sales, then there is "elasticity" between price changes and corresponding changes in market demand.
I still feel that Intel would have experienced much larger demand for Optane with lower MSRPs (see my other comments above, for more issues that arose with Intel's early Optane products).
Similarly, the long wait for real products, and the glowing predictions that were not realized, left Prosumers without the WOW POWER they were expecting.
That was a real shame: one of the reasons for Intel's historic success has been the market feedback they have received from very intelligent and vastly experienced people like Patrick here.
Intel ignores people like Patrick at their great peril, imho.
It's a great innovation; just a shame that Intel's marketing and engineering groups did not "sync up" more closely.
@@supremelawfirm Yeah! Like display technology: NAND is like OLED with its burn-in problem, spinning disks are like LCD, just inherently bad. Now Intel Optane is like an Apple-owned micro-LED that will not work with anything other than the stupid Apple ecosystem.
@@jmssun Ah, yes: then came "VROC". I will always remember the exquisite gesture which Allyn Malventano made during a PC Perspectives video: he held up that Intel "dongle" -- that was required for certain modern RAID modes to work at all -- and didn't say another word. He merely let his dongle dangle at eye level.
The silence during that brief pause was so thick, you could cut it with a meat cleaver.
I think it was only this morning that I finally read, for the first time, that Intel is now planning to increase the lane count from 4 to 8 on their DMI channels. Intel's failure to do so much sooner was arguably one of THE main reasons why "4x4" add-in cards by other vendors quickly became popular adapters for hosting RAID arrays using multiple M.2 NVMe drives.
4 x 4 = x16 = elegant symmetry
Prosumers who had tried to configure fast RAID arrays downstream of current DMI links quickly hit a low ceiling a/k/a "MINNY HEADROOM" (the opposite of MAX HEADROOM).
Hey, Intel: THE REAL ACTION IS OVER HERE ON ALL THESE EMPTY x16 PCIe SLOTS ...
wired straight from these multi-core quad-channel MONSTER CPUs. DUUUUH!!
And the realization was slow to dawn on Intel that dedicated I/O Processors were not needed any longer, with so many multiple CPU cores almost idling from lack of work to compute!
By way of raw comparisons, in response to that dangled dongle AMD released free software for hosting very fast RAID arrays on multiples of these competing "4x4" add-in cards installed in Threadripper motherboards.
And, ASRock responded to me THE SAME DAY when I requested directions for configuring a 4x4 card with one of their latest Threadripper motherboards.
Just my 2 cents + an opportunity to vent. Hope nobody here is offended; no offenses intended.
@@supremelawfirm yeah, never understand their logic
I’m sure Apple could of done something clever with this tech - though seems unlikely with intel at the helm.
Being a computer engineer. The shared pool sounds complex af.
Just wait until Gen-Z. Probably more like CXL 2.0 (maybe later) but that is where we get shelves of memory connected to fabric and shared across multiple accelerators/ CPUs.
@@ServeTheHomeVideo That sounds insane but convenient at the same time...
Wait... so you're saying that Intel cooked up a pretty cool technology that seemed to be implemented just so they could make more money, then a competitor came along and basically said 'No, do it our way, it's cheaper'? Where have I heard this before? *cough*IA-64*cough*