Optane should have been a "Prosumer" type of product that boosted SSD performance. For enterprise, a drop-in replacement for RAM at half the cost and no worry about power loss would have been an amazing value. You wouldn't have to keep every single server on a UPS.
I've got a workload that absolutely thrashes disk if it's not cached in RAM. And it's much easier to dump a couple TB of Optane into a box than it is to get a couple TB of RAM. I guess I just hope Samsung Z-NAND catches up.
I've only seen a single Z-NAND model in retail and that was years ago. I'm not sure Samsung even makes it anymore. Kioxia still makes XL-FLASH that is essentially the same (both are SLC).
Seems to me Intel was lying to investors about Optane's future even after Micron sold the Lehi, Utah fab to Texas Instruments. Instead of coming clean, they perpetuated this ambiguity for almost a year.
Optane reminds me of Intel's Itanium. That shit was ahead of its time in a sense, plus it was failing at points precisely due to breaking new ground. Now the Optane situation has an eerily similar vibe. It's not my money I know, but they really should have stuck with it, just like Intel lost the initial x64 to AMD as they weren't ready to even minimally maintain the damn thing.
Itanic was always awful, an attempt by Intel to have a 64-bit monopoly. AMD64 did what programmers wanted: it ran 32-bit binaries natively and so evolved Windows forward without relying on magic optimising compilers for VLIW that were impossible to write.
@@RobBCactive "Itanic was always awful" It was way faster and more efficient. the "awful" part was that it broke with x86 and initially only had very slow x86 emulation. But it solved many of the problems that x86-64 still has. "without relying on magic optimising compilers to VLIW that were impossible to write." Yeah, just that compilers now are way more complicated and capable than ia64 ever required.
Itanium was choked by low bandwidth, and Intel put too much on software pipelining (versus just loop unrolling, the whole cyclic vs acyclic scheduling story) and full predication, but probably the worst was all the mess with bundles. But the floating point arithmetic abilities and the algorithms Intel developed with HP for division and square root were top notch and one of the greatest achievements, unfortunately mostly unknown to the general IT public.
Would have liked it in my personal rig, but the price was insane while persistent memory was not available. It was either get a 32GB Optane SSD to "speed up" an HDD, or get a 250GB SSD and still use it to speed up the HDD, or just use it directly. We would have liked it at work, but again: it was not available for laptops, hard to get for servers, and then they cut it off. We have workloads that only sporadically require loading a lot of data. Right now this is all on normal SSDs and you get significant slowdowns, but adding more memory is just not worth it (direct costs as well as power) or, for the bigger installations, just not possible (1.5TB limit).
I just recently had my Windows install on an H10 basically nuke itself. I was using Optane as a cache for the SSD portion, and for whatever reason that relationship got corrupted or something and now it won't detect the main SSD portion. I need to get an M.2 enclosure and see if I can salvage my files from another computer. Left a bad taste in my mouth honestly. The little 32GB M.2 Optane drives were nice for FreeNAS installs, but the caching/acceleration feature seems to be designed really poorly IMO since it seems like running RAID 0. If something goes wrong with one portion, or there is a software bug with the relationship, it all falls apart.
If only we could have non-volatile memory as fast as cache and as close as possible to the processor... Unfortunately this is, at least for now, unrealistic, but it's clear that memory access speed/latency and bandwidth are crucial, as Apple has confirmed with their Mx chips. Processors need to get the fastest unified memory as close to them as possible. Seeing it one day be non-volatile would be a miracle, but who knows, maybe we will get there...
I know it wasn't the initial plan, but I HONESTLY think for consumers, rather than treating it as RAM, it should have been used to accelerate HDDs. Gaming-focused drives, like Western Digital Black, could easily justify costing 50 dollars more on the 4-6TB models by adding 16-32GB of Optane on the drive, and still be cheaper than a 2TB SSD, so people could actually download a couple of games from their Steam library without breaking the bank for a large SSD.
Look up Intel Terahertz: a 1,000 GHz CPU with 10,000 times less leakage, lower power, and running cooler with their "special sauce", they said. Yet 21 years later it's still not used?
I was excited to see some enterprise database performance benefit, but the stuff that really helps the performance tends to be just fine with volatile memory.
Unfortunately this is all too common for Intel. They launch a promising / technically interesting product, it doesn't get the immediate commercial success they were hoping for, they cut their losses and cancel the follow-up. Knights Landing, Compute Cards, Kaby Lake-G, Lakefield, the list goes on.
The impression I got with Intel's Optane memory is that they thought it was something to be worshipped and never let out of the box. And that's a shame because they could have made a ton of money with everyone benefiting. But they just kept the price up as high as possible with mainly smaller memory capacities.
I don't see what the point of Optane is now that we have nvme drives that have transfer rates of several gigabytes per second. Any money you put towards Optane can be put into more DRAM or larger/faster SSDs, depending on your application. I didn't see much of a purpose when it was announced and I'm not surprised that they're shuttering the product line now.
Optane DCPMM had a few serious issues. It required the more expensive high-memory-limit Xeon CPUs. It wasn't really that much cheaper: I'd say that the 2x lower price per GB was compared to LRDIMMs or 3DS LRDIMMs, which are much more expensive than standard RDIMMs. Last but not least, the performance in comparison to memory is abysmal; there is a whitepaper from Lenovo regarding the tests for SAP HANA, and AFAIR it showed 50% less speed. All in all, if you need more memory than standard x86 can give you, IBM POWER looks really good.
A couple of years ago I wanted an Optane module to speed up my HDDs, because at that time a big SSD was much too expensive for me. Then came AMD with its software solution, where you could use any cheap small 128GB SSD to speed up the HDD by using it as a cache. This I also didn't buy ;-) In the end (four years later) I bought my new PC with a real 500GB SSD, and since then (2018?) I use only SSDs in PCs, and HDDs in the NAS :-)
This may be stupid, but when I hear "phase change", I am reminded of Panasonic's PD and DVD-RAM discs, also extinct and also based on some kind of phase change technology. Like Optane, those discs had a reputation for rewritability and high reliability. That marketing claim is oft repeated as fact, whereas in my entirely anecdotal and personal experience, those discs were the most error-prone and unreliable optical discs that I have used. When I compare Intel's initial claims to their subsequent SSD endurance rating, coupled with their unwillingness to provide samples to Ian, I've got to wonder whether there isn't more going on here.
Yep, I tried to use different RW optical disks; they never let you rewrite reliably, so they were more wasteful than the stacks of WORM disks that became commonly used as coffee coasters. The marketing claims vs reality reminds me of another recent Intel story.
It's a pity SK Hynix isn't doing more, and Micron didn't do more, with the 3DXPoint tech; performance-wise the Optane-based SSDs were the fastest storage short of a RAMDisk in the M.2 form factor, even if the difference between Optane and TLC/QLC wasn't highly perceptible to the average end user (who would've been on a spinning disk or SATA-based SSD at the time anyway).
There were plainly and simply no good software ideas at the operating-system level allowed to come through inside Intel to make this kind of persistent memory become commonly used. Pricing was far too high, and performance turned out to be not really competitive. Huge clients avoided yet another attempt to get them onto a proprietary leash like the DDR-T stuff. However, the main reason this technology died was that it turned out not to be able to compete with the advances in flash memory.
Looked all over the internet & couldn't find a clear answer to my question. Hoping you might shed some light. I've decided to try to breathe some new life into my laptop that has only 8 GB soldered RAM by adding the Optane Memory Series (what you call cache drive) into a second M.2 slot for multitasking: Chrome/Adobe Acrobat/Office running all the time in parallel eating up all my memory. I wanted to get the 32GB version of the Optane Memory but bumped into this on a non-Intel website & couldn't find a confirmation or disproof of it anywhere else: "You must use the M10 version of Optane Memory. The original version of Optane Memory will only be supported in a desktop environment". Do you happen to know whether a non-M10 Optane Memory Series drive should work on a laptop? If not, I'll have to stick to the 16GB version as supplies are scarce at this point. Thanks in advance! P.s. I know Intel has stopped supporting this product for consumers & am ready to experiment with setting this thing up. P.p.s. thanks for the video!
Any size M.2 drive that physically fits will work, whatever capacity of drive. They say that only the 16 GB is supported on the basis of their validation and guarantee, but the beauty of open standards is that whatever conforms to the standard should work effortlessly. But just to confirm here - the Optane cache drive won't add DRAM - it'll just be seen as another storage device. Intel called it 'optane memory', but it's just storage. Might as well add in 1TB M.2 drive.
With cheap Optane modules available and all new motherboards featuring a ton of NVMe slots, I was looking forward to building a new 13th Gen system using it and really tweaking a 13600K system for productivity. Will 13th Gen still support Optane, or is it dead on the desktop now as well?
I don’t understand why they don't license the technology out if they're going to kill the tech... it could revolutionize this entire industry, and they could make tons.
The problem was price, and second in line was capacity. If the price had been half that of DRAM, the story would be different; let's not even imagine it being cheaper to make than flash memory...
A real myriad of issues that plagued such a great technology :( Would've been incredible if 3DXPoint could've continued development. The IOPs, latency, and endurance were truly incredible. Tons of issues with the Micron agreement on the fab you mentioned and all sorts of development issues in general...
Good idea, but a niche product that was hobbled by an Intel lock-in. Add the massive investments into DRAM and flash manufacturing and R&D that weren't available for Optane because of the proprietary lock-in, and it just got crunched by products from both sides. Even if it were not locked into the Intel ecosystem, the success of an Optane-like product would not be guaranteed... it's just that the proprietariness of it definitely guaranteed its death.
@@Raivo_K the Optane SSDs were crippled by the I/O bottleneck; the promise was near-DRAM latency and speed: nanoseconds, not the microseconds you get going through an OS stack designed for disk drives.
Memory mapping in data from persistent memory is interesting, but as it requires software changes and the main advantage is persistence at the cost of speed, I guess most people will simply stick with RAM and precautions against power failure. The cost advantage they promised in the initial hype simply was not delivered; that would have been key to wide adoption, but overcoming the economies of scale in DRAM & NAND was a tall order.
Is it good as of 2023 as a caching drive in front of a big cheap SATA SSD? I am an average consumer pondering buying one of those inexpensive sticks being sold now on Newegg/eBay. Is it worth it for a SATA SSD, not an HDD? Anyone, please? I have no need for NVMe, just interested in speeding up old SATA technology on the cheap. My mobo supports Optane.
Seems like Optane was in a really awkward spot: existing software is optimized for slow SSDs/fast RAM, and cannot easily take advantage of an Optane intermediate layer without serious adaptations. But no one is going to do these optimizations if Optane is not already a well-established technology... Intel should probably have focused on the handful of niche cases where the Optane benefit is largest, and then tried to expand from there, instead of the mixed strategy they seem to have pursued here.
Add-in card SSDs did not require any special layers or optimization. These work well even on the newest Ryzen systems. The cache models require PrimoCache, and the DIMMs are not compatible anyway.
I also sport a 32GB Optane module as a caching solution for my 1TB drive, and boy, when you are dealing with a spaghetti-coded game it actually works wonders: 3-5 minute wait times down to 1-2 minutes. It shows it has its use case, but not everyone can afford it. AMD's solution also got scrapped halfway (at least it's most likely open source, so people can pick up the slack), but now you can just turn your NVMe drives into a caching solution thanks to better software compatibility, so at least there was something.
The silliest part of optane was it reached a point where 32GB RAM sticks were cheaper than 16GB optane modules when bought new. I REALLY wanted an nvme optane ssd on my laptop that'd be dedicated memory just for heavy database workloads but it made more sense to upgrade to 64GB of RAM.
They're selling for like five bucks on eBay out of China.
@@aarrondias9950 you signing up to put 5 dollar china special ebay components in your PC?
@@Adierit already have. I've had no issues with em either.
@@Adierit Different purpose, different solution. I wouldn't mind storing my Japanese love action videos on that cheapo hardware.
If they licensed the tech for other companies to make and manufacture it, it would have gotten much wider adoption. Imagine if Samsung or others used their fabs to make it. Huge money savings.
Optane: the most revolutionary technology Intel squandered since i960.
Yep. Dammit, Intel.
Everything Intel does is a failure if it's proprietary.
Intel is just a crazy circus of budgets and smart people working on projects that will never be funded enough to survive.
They also squandered Itanium
@@squelchedotter They didn't really squander it. There was nothing to squander. It was an experimental architecture for which the necessary clever compilers were impossible to write, and was more of a f***-up that took way longer to die than it should have.
I don't know about the server side, but I know the consumer side was always a joke because it was so expensive. If you can't afford an SSD but want your HDD to be sped up, it would make sense to have something like Optane... but not for the price of a full SSD. Why speed up a bad drive when you can just have a good drive to start with?
At consumer level 8/16gb of this stuff would be perfect for keeping inactive chrome tabs imo
@@n27272 for 5$ sure
otherwise you get 120GB SSD for 15$
@@n27272 If you could get 8/16gb for like $20 yeah that would be fine, but it was stupidly overpriced like $50 for 16gb, at that price you might as well just get more ram 2x8 ddr4 was also $50
Yeah agreed it was a terrible value proposition…
Less than $80 caching SSD was too expensive?
I'm really honestly sad that they're discontinuing it.
years later, someone is going to dust this off, and sell it as a brand new idea.
maybe apple will brand it as revolutionary, and invented by apple
@@acasccseea4434 PrimoCache can do something similar with any SSD; Optane being locked to Kaby Lake and newer was a mistake.
@@acasccseea4434 I'm sure Intel's lawyers will be all over them if they do it even a second before the patent expires.
@@glenwaldrop8166 Intel Optane DDR-T modules are very different from regular SSDs; DDR-T doesn't wear with usage.
@@acasccseea4434 If AMD are smart, they'll buy the patents/rights and run with it.
My dad worked on the original 3D XPoint team back in ~2013, and this is the only video I have seen on this topic that gets what it is 100% correct!
4:40
IM Flash was in Lehi, Utah (pronounced lee-high not let-ee).
It's quickly been changing hands. In 2019, IM Flash was disbanded and it just became Micron Technology Utah.
Then, the fab was purchased by Texas Instruments just last year.
How is Utah if I moved out of California, as an IT Tech Manager / Software DevOps guy?
It feels to me like a big part of Optane's problem was a lack of advancement from when it initially released. The technologies it was meant to sit between in the stack kept advancing faster than Optane could keep up. I don't know how much of that was due to being hamstrung by being tied to this one fab, or a lack of investment in research, but it was simply too stagnant.
I do hope in the future someone will take up the torch of fast byte-addressable persistent storage.
That actually makes a lot of sense.
Especially as they couldn't fully leverage the tech due to connection tech limitations.
Honestly, targeting and marketing Optane as an intermediate between SSD and RAM might've been a death sentence. Too small a niche and price bracket sandwiched by rapidly improving, far more mature technologies. Not to mention the complex, Intel-only support for PMem.
It also didn't help that AMD server-chips became quite successful and will support WAAAAYYYYY more DDR3/4/5 memory (even the cheap chips support up to multiple TBs) compared to what Intel was offering, with its “wonderful” Xeon product differentiation.
@@JLGBinken Yup. I also don't think 512GB per DIMM for PMem was compelling enough density-wise, and at current prices 256GB DDR4 can be found for similar prices to 256GB PMem. Maybe they just couldn't manage it for gen2, but if they'd been able to go all the way up to 4TB modules (via DDR, CXL, or whatever low-latency interface), then we'd be talking 16x density instead of 2x.
@@JLGBinken "and will support WAAAAYYYYY more DDR3/4/5 memory"
Sure. Source?
@@ABaumstumpf Just look at and compare the specs.
@@flashmozzg So no actual argument just making shit up - you do you.
What a bummer. Intel did it to themselves. NVDIMMs / PMDIMMs are amazing and would have been the future of laptop and server computing, but Intel roadblocked themselves, limited their use cases, and then locked down the PMDIMMs to Intel-only systems, the same way NVDIMMs are locked down due to their insane hidden costs.
Yields were probably horrifyingly bad.
What they needed was a dedicated Optane bus with its own dedicated lanes. Instead they limited it by requiring the CPU to access it through the DDR PHY. CPUs since Skylake have had their own dedicated Optane controller, but it was just never used…
Thanks Ian, please do a video on the Sapphire Rapids delay. It's very disappointing that Optane is biting the dust. As an employee of SAP, Optane PMEM was touted to us as a game changer for our SAP HANA in-memory Database. Now our Customers and us are left in limbo.
This is major for Oracle and VMware customers as well, sad days
Sometimes we have to accept the harsh reality that no matter how cool and desirable a piece of technology is, it may not always be viable from a business standpoint.
What about certifying SAP Hana on AMD ? Genoa seems to have cxl memory included
I’m on the consulting side and not involved in product development and I don’t have full visibility on what’s to come. From what I’ve heard, HANA heavily relies/uses Intel specific instructions and hence hasn’t been certified for AMD Epyc. AMD Epyc certification may happen in the future.
@@srikanthramanan I run a data center with all AMD Epyc CPUs on VMware. We do a lot of hosting for customer DEV/Demo SAP systems and that includes HANA. A few years ago I ran the HANA PRD performance test that can be used to certify hardware that isn't on the HCL. Our Epyc Rome and Naples systems passed with flying colors. I think the entire Intel Specific Instructions is nothing more than marketing saying "Intel has given us a boat load of money to keep this Intel only."
@Zbigniew I would have loved to have it certified going back to Rome. AMD with the 8 channel DDR4 made a huge difference in RAM density compared to Intel. Now with ICL & SPR Intel has only gotten to 8 channel and FINALLY gotten rid of the L series CPUs to allow all their CPUs to access the full amount of RAM. However, once again Intel is behind the 8 ball in terms of density compared to AMD. With Genoa AMD is going to 12 channel RAM when Intel is still at 8 channel. When you are doing virtualization, RAM is much more important than physical cores when you are running dual socket 32c/64t CPUs. All my hosts have 1TB RAM and I wish I had another TB/host. CPU over provisioning by 25-30% or more is pretty easy when you have that many cores as the odds of not having available cores and a VM waiting for CPU is quite small. However, RAM over provisioning can have a MASSIVE performance impact at just 10% to the point that VMs crash.
It's heartbreaking to see Optane go away. Damn, Intel...I hope they license it out to other manufacturers.
The other way round. Intel is now sitting on some juicy patents with this and will sue anyone who attempts to bring something similar (and possibly better) to market.
I think Micron still have it.
@n n I don't think Intel did a good job of explaining what it is or putting it in a form that people wanted. It's definitely desirable, but you think you already have it when you buy a normal SSD, when in reality you don't get what Optane provides.
@n n Yes, terrible marketing. I've often tried to improve computers using caching or small SSDs, but the economics of this are poor and getting worse. First of all, fit a decent amount of RAM; that speeds up disk activity by not using the disk as a substitute for RAM. Secondly, have a large SSD for the OS, or for everything if you can afford that. Finally, use the hard drive for bulk. Optane does fit in here, but as the operating system drive rather than some kind of cache.
@n n In terms of what you'd want from solid state storage, Optane has it, assuming it was bigger and cheaper. NAND flash is a really messy compromise that only works because it's bigger and cheaper. Without all the go-faster trickery added to it, it's slow. Optane is naturally fast and does not wear out.
I kept hearing that Optane was best suited for the CXL implementation, so it's upsetting to hear that it'll be stopped just before CXL gets used.
I didn't even know Optane was originally envisioned as an in-between RAM and CPU cache. Watching LTT and the like I thought it was just speedy cache for hard drives. Guess I learned something new today!
Ah yes, Linus Tech Tips: the pinnacle of pseudo-professional technology demonstration.
We really wanted to get Optane memory. I wanted our databases to be on it. But it is simply too expensive for what it does. Instead, we just bought 1TB of RAM. The tradeoffs are better for that than Optane. We cached the whole database in RAM, more or less. Longer starting time, but I will take that.
Using terabyte SSDs as RAM is a genius idea.
People have and will again do it.
I would use RAM as a write cache, for a RAID NVME SSD primary access point.
Data processing would be orders of magnitude higher.
More cores, more PCI-E lanes.
Notice I reversed the RAM and NVME SSD roles.
Thanks for the take on this. 3DXPoint always felt significantly hamstrung by interface (hardware and software) to me.
It was hamstrung by being defective by design.
Was part of the Optane problem the software stack; was the software just too slow? I did see that Level1Techs video about Intel redoing the driver stack and getting massive gains.
Optane, I always wanted to use you but never had the chance. RIP 2022.
Wendell is gonna be sad :(
There is something to be said about programming models and legacy. A lot of big data applications, engines, and databases already use storage-optimized data structures, packed data, and block-oriented management. In the case of Java they even dump memory safety to unlock their performance goals. Also, a major trend is clustering, which disfavours beefier stand-alone machines. And as a programmer it is quite a feat to find that your app or concrete workload benefits from a unique sweet spot provided by just one manufacturer. The computing landscape is vast though.
In my opinion the constant delays of Xeon SP killed Optane. It wasn't too early, it was too late. When it was initially planned to release there was no CXL, NAND flash was still expensive, and write endurance was an issue.
Now the price premium cannot be justified.
It's pronounced Lee-high instead of Leh-he. I was at BYU in Provo and at STEM Fairs Micron would come and really push for interns or full time engineers to come and work at the plant. They had marketing material and everything, and the idea at the time was to speed up slow hard drives, which made sense at the time when ssds were still somewhat expensive
I'm digging the K6-2 CPU in your Cache/Latency diagram 😀
AMD is gone, you love TSMC now?
Honestly, even when RAM increases its transfers per second (wrongly referred to as GHz), its latency timings go up by about the same amount. RAM has not reduced latency, only increased bandwidth, nearly at the rate it increases with size.
You could have 128GB memory, and 24TB in NVME.
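As a sanity check on the latency-versus-bandwidth point above: first-word latency in nanoseconds is roughly CL divided by the I/O clock (half the transfer rate). A minimal sketch below, using typical JEDEC-style CL values; the specific module/CL pairs are illustrative assumptions, not claims about any particular product.

```c
#include <stdio.h>

/* Back-of-the-envelope check: CAS latency in nanoseconds =
 * CL / (I/O clock in MHz) * 1000, where the I/O clock is half
 * the transfer rate (DDR). Module speeds and CL values below
 * are illustrative examples only. */
int main(void) {
    struct { const char *name; double mt_s; int cl; } dimms[] = {
        { "DDR-400 CL3",    400,  3 },
        { "DDR3-1600 CL11", 1600, 11 },
        { "DDR4-3200 CL22", 3200, 22 },
    };
    for (int i = 0; i < 3; i++) {
        double ns = dimms[i].cl / (dimms[i].mt_s / 2.0) * 1000.0;
        printf("%-15s ~%.1f ns first-word latency\n", dimms[i].name, ns);
    }
    return 0;
}
```

With those numbers the latency comes out to roughly 13-15 ns across three DDR generations, while peak bandwidth grew about eightfold, which is the point the comment above is making.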
The processor cache, would be the new RAM, for intake from the SSD array.
You'd have terabytes of accessible memory from a Raid array of many PCI-E lanes. (All of it, nearly at once.)
The RAM would do write caching and hold data while a change is pending.
I know you're going to say RAM can give you the data sooner.
If it has it, sure.
But terabytes of data all being accessible is a total game changer.
CPU cache will be a factor.
I was genuinely excited for the 3DxPoint stuff years ago, imagine no need for mass storage via NVMe, SATA or SAS for Workstations and NV storage via memory slotted in DIMM.
there's always a need for storage. You can't load everything on the internet in local memory.
Sadly, it looks like HBM2 is also heading to the same outcome. Apple didn't use HBM; it achieved the same throughput using LPDDR5X.
@@RunForPeace-hk1cu Yes, there is always a need for more storage. What I meant was that for something like a local machine/workstation, a few hundred gigs to a few terabytes (base 2), having non-volatile storage as fast as RAM is a good thing. Having it at lower/faster latency, or similarly low/fast latency, is also good. Completely side-stepping the need for another layer of cache (RAM in this case) can be beneficial for more speed and efficiency, especially when RAM needs continuous power to keep state compared to, well... non-volatile.
1. company comes up with cool new technology.
2. company doesn't know how to sell it.
3. the technology dies.
4. thanks, patents 😠.
I have to say, the 16GB Optane + 1TB HDD combo in my 2019 Lenovo Ideapad 330S worked surprisingly well. Hardly any of the usual symptoms of running recent versions of Windows off a conventional hard drive.
Ever since I heard about Optane years ago when it was first introduced, I’ve always suspected that it wasn’t going to go anywhere and would eventually disappear. I think that the major problem was that your average person, and even your average PC enthusiast, just didn’t really understand it. Intel did a lousy job of marketing that technology. Year after year, it remained a mystery to most folks, and so there was never a grassroots buy-in for it.
To this date I have no idea what it is.
To me the product never really lived up to the marketing. Initially intended as cache, but its supported platforms were limited and it didn't even really perform well compared to just replacing the HDD with an SSD instead.
Again, people are talking about different Optane products. You are talking about the caching SSD. Yes, that was pointless. The more expensive add-in models were awesome tho.
Yes, at the enterprise level it was amazing for the latency /iops, but for almost everyone else it soon became redundant at best.
Optane initially sounds amazing, but as it's slower than DRAM and data centres keep servers up and running, it seems too niche to gain the scale necessary in the market to be viable. You can memory map in data from disk files, for example; write accelerators can use battery-backed DRAM; SSDs can save state powered by capacitors.
So it really needed to deliver on the speed, density and cost hype; while also being ultra reliable. That sounds like a tall order for a new product facing mature technologies with much more R&D behind them.
It sucks so much that Optane never made it to a price where a consumer version of the P5800X would have made sense.
We are using it in our main database server and I love it (the storage variant).
Sad to see it go; I was hoping that in a couple of years it would be cheap enough that I could place it in the plant database servers.
An abstraction, folks: cache is like the nightstand where you put what you are currently reading, your personal library/bookcase is more like RAM, and the public library is your hard drive.
I'm sad they discontinued it, the 905p was punching a lot above its weight in random r/w, would've loved to see a consumer version of the p5800x
Enterprise customers really liked Optane, and were willing to pay where time is money, and/or where boatloads of RAM were too expensive for the use case. There were reports put out by enterprise users of Optane, available to read online.
Also database teams really love it, as mentioned in the video.
I have it in my pc, it's literally the best upgrade I've ever done, in terms of pc responsiveness.
I would've loved a nice Optane setup. I have ideas for other ways of running it than the Memory and App Direct modes, with the kernel just allowing two different mmap targets, one backed by RAM and one by Optane: letting it work like normal memory, but with the application deciding what goes where.
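Not necessarily how the commenter would build it, but the App Direct style of access being alluded to boils down to mmap on a DAX-backed file next to an ordinary anonymous mapping. A minimal sketch in C; the /mnt/pmem0 path is an assumption, and a real persistent-memory setup would use MAP_SYNC or libpmem with cache-line flushes rather than msync.

```c
/* Hedged sketch of "the application decides what goes where":
 * an anonymous mmap for scratch data that can live in DRAM, and a
 * file-backed mmap on a hypothetical DAX/pmem mount for data that
 * should persist. Paths and sizes are made up for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const size_t len = 1 << 20;                 /* 1 MiB for the demo */

    /* Volatile region: plain DRAM, lost at exit. */
    char *dram = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Persistent region: a file on what would be a DAX-mounted
     * pmem filesystem (the path is an assumption, not a real mount). */
    int fd = open("/mnt/pmem0/appdata.bin", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || dram == MAP_FAILED) { perror("setup"); return 1; }
    if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }
    char *pmem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap pmem"); return 1; }

    strcpy(dram, "scratch state, fine to lose");
    strcpy(pmem, "state we want back after a reboot");

    /* Make the persistent write durable before carrying on. */
    if (msync(pmem, len, MS_SYNC) != 0) perror("msync");

    printf("dram: %s\npmem: %s\n", dram, pmem);
    munmap(dram, len); munmap(pmem, len); close(fd);
    return 0;
}
```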
This was a really good idea, and I'm surprised to see them discontinue it. It SHOULD have been a game changer
"Liqid might be shedding some tears"
Me too honestly
From the beginning: we have a storage device and a CPU, and we have RAM and caches to lower the access time for data reads/writes from that storage. Because this is a Von Neumann architecture ;)
BTW, why we have RAM is more about the past and what storage devices looked like (punch cards :D) than what is required today (swap), but that is a different topic.
Non-Von Neumann architectures are unsuitable for general purpose computing. They are being adopted for GPUs, media encoders, network/storage/custom accelerators... But they would not suffice for a CPU, which is still effectively necessary because (1) all the code that has ever been written still needs to be run and (2) it is simply totally infeasible to train programmers to target a dozen completely different compute paradigms just to do basic functions - multithreading is already too hard!
A Harvard architecture has separate data and instruction caches, like basically all modern CPUs have in L1. Modern processors are hybrids and caches are an implementation detail; compilers need to know about them for optimum performance, but it will work without caches at all.
@@FreeOfFantasy Separate data and instruction storage existed in computers too, but it was a nightmare, so treating instructions as data in storage was a great move. But as you wrote, L1 splits them because it is close to the processing units, which is good for optimization. And yes, a processor could work without caches, but it would have 1/1,000,000th of the performance or less. Interesting architectures are being designed for AI, because you know the flow of instructions and data in advance (the neural network topology), so you can put the right data in the right place at the right time without requesting it.
For completeness I feel you need to do a deep dive into HP's 'The Machine' and its memristor memory technology.
Its birth, gestation and then quietly being announced as dead on arrival as well, in not so many words. 😄
Ah, so that’s what it was called. I was trying to remember the name but all I could think of was “The Cube” or “The Solid” or something along those lines.
It was clear from the start it would fail!
As the whole idea was terrible and guided by a blind ideology.
That is, removing the distinction between the persisted state and the working state.
Every tech/product that tries this, will fail.
Only those who trust hardware and software to never produce any error, or otherwise unwanted result, believe it will work and try to make it!
Intel had squandered Optane by making it only a glorified HDD cache for consumers and Intel exclusive.
Had they opened the tech to AMD and let it compete as an SSD, they would have probably been more successful.
Bit late to this, but we invested HEAVILY in Optane, and Intel's sudden cancellation of it has caused a headache of huge proportions. All the servers we're now gonna HAVE to scrap, because we can no longer rely on the supply chains.
Invested since 2016 - 18.72 Million
Scrap Value Today - Shreddies.
Still I can finally get rid of the last of our intel systems.
IDK, are high optane fuels really worth the cost? I heard somewhere you can add your own additives to your gas tank to make it high optane, the brand I was recommended was called liquid shwartz and when put in an electric camper van makes it go plad.
I know it wasn't really a consumer product to begin with, but I would have really liked to have a moderately-sized PCIE 4.0 Optane drive. But they never came...
Yeah there was one M.2 version based on PCIe 3.0 (22110 form) but the rest were either 2.5" or PCIe cards.
Pity they haven't tried going with an altered use case: instead of developing CPUs which work with DIMM slots, they could have prepared a more comprehensive SoC package with e.g. 32-128+GB of "on-package" high-speed memory (e.g. HBM2e) for the compute-intensive layer of data storage, with the DIMMs delegated to serving the purpose of NVDIMMs/Optane DCPMMs for slightly slower but larger memory pools and/or fast storage, whilst delegating all the PCIe lanes to extra controllers/accelerators/functionality... a kind of switch or shift in technology stacks that could have been a success, or at least could have questioned the status quo. Heck, maybe somewhere down the line Nvidia decides to buy the IP and they manage to extend their Grace+Hopper product stack with high-speed on-package RAM that's vastly superior to DIMMs, which would then serve as the large memory pool provider for the compute-intensive workloads DC (and possibly workstation) businesses are targeting.
I was really excited when Optane came up in AnandTech benchmarks giving consistent random R/W speeds, though it was very expensive. I really thought their 2nd-gen and future Optane products would become affordable and high capacity across any platform, say arm64, amd64 or RISC-V. That would have made Optane worth investing in, but everything is out of the window for consumers.
4:40 *Lehi ("Lee-hi"), Utah; not "Leti/Lehee". That once "IM Flash" fab is literally blocks from my house. Texas Instruments owns it now and makes analog semi's there.
I would use RAM as a write cache for a RAID NVMe SSD array as the primary access point.
Data processing would be orders of magnitude higher.
More cores, more PCIe lanes.
Notice I reversed the RAM and NVMe SSD roles.
The processor cache would be the new RAM, for intake from the SSD array.
You'd have terabytes of accessible memory from a RAID array across many PCIe lanes (all of it, nearly at once).
The RAM would do the write caching and hold data while changes are pending; a rough sketch of that idea is below.
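A minimal sketch of that write-back idea, assuming a plain file stands in for the backing SSD array (the file name and flush threshold are made up for illustration): writes are absorbed into a RAM-side buffer and only pushed to the backing store in batches, followed by an fsync.

```python
# Hypothetical write-back cache sketch: RAM buffers writes, the backing
# file (standing in for the SSD array) only sees batched flushes.
import os

class WriteBackCache:
    def __init__(self, backing_path, flush_threshold=64):
        self.backing_path = backing_path
        self.flush_threshold = flush_threshold   # dirty blocks before a flush
        self.dirty = {}                          # offset -> bytes, held in RAM

    def write(self, offset, data):
        # Absorb the write in RAM; stable storage is only touched on flush.
        self.dirty[offset] = data
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Push all pending changes to the backing store, then fsync so the
        # data actually reaches stable media.
        with open(self.backing_path, "r+b") as f:
            for offset, data in sorted(self.dirty.items()):
                f.seek(offset)
                f.write(data)
            f.flush()
            os.fsync(f.fileno())
        self.dirty.clear()

# Usage (assumes 'backing.img' already exists and is large enough):
# cache = WriteBackCache("backing.img")
# cache.write(4096, b"\x00" * 512)
# cache.flush()
```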
Optane DCPMM was a cool feature for server systems running SAP S/4HANA, but very expensive. We will see what CXL 2.0 brings.
I send my condolences to PMem. Near-RAM latency and byte addressability have some potential, if only the price had not been this ridiculous. Its use cases are very narrow at this price point.
PMem will be back some day, when its killer application arises and that market becomes big enough to lure competition from the big players.
We really need an upgrade to NAND; I really hoped Optane would fill the gap. The durability and the seemingly effortless increase in channels/bandwidth were just so damn good.
I got to #3 and my brain began to hurt, and it just kept going... Intel, what were you thinking?!
With loss of the manufacturing center and the development center I don't see what else could have happened.
Minor correction: The Micron fab is in Lehi (pronounced lee-high) Utah.
If it was cheaper than DRAM, then licensing it out to SSD manufacturers to use Optane chips instead of DRAM would have been neat.
It's pretty sad it's coming to a close; it had a lot of potential. Would have loved a large SSD like that I could hammer away at and never kill.
Whoever came up with the 16GB cache drives was a moron and ruined the perception and branding of Optane. They should have sold them as enterprise-grade scratch drives for creative workflows.
We know that Intel has terrible management... And locking it to only their own platform was a problem too.
Agreed. Even in comments here and elsewhere people lament Optane for being terrible because of these cache drives that no one needed or asked for.
Thing is, didn't they have 1TB Optane drives? Or 250GB ones that were literally PCIe cards?
@@edwardtan1354 For enterprise, for crazy money... this is why I think it would be good to open it up: with more people working on this technology, it should become cheaper.
@@m_sedziwoj I mean, if LTT has it... then it could be a consumer or enterprise solution... so we will never know... but with Micron backing out and no one left to manufacture the 3DXP memory chips, the next best solution is to use NVMe software solutions.
I hope Intel licenses its patents for 3D XPoint technology, because in my view this technology didn't get enough development to take hold, and there are (specialized) places where it would be much better than flash.
@n n Do you even know what is different in this technology compared to flash (NAND)? It's more like the transition from vacuum tubes to transistors than from DUV to EUV. Of course maybe it would never perform as we hope, but without research you will never know. And we know a lot of performance was left on the table; if I remember correctly they didn't even read a full line, only one cell at a time.
Good video, what I really would like to know is:
How does this affect Aurora? Wasn't it supposed to sport Optane in conjunction with SPR+HBM2?
I don't remember Aurora having Optane as well. I know Optane was meant to be supported with SPR+HBM, but I don't think Aurora was using any.
@@TechTechPotato Why did Intel push the 16GB cache drives so hard, despite desktop users wanting SSDs? These drives work wonders in games such as Star Citizen...
They should have targeted completely replacing NAND flash, which would simplify a lot of things, but to do that you need big capacity and low cost. Another option would have been to use it as a journal or write-ahead log for file systems or databases, accelerating writes (see the sketch below), but then you could do much the same with just RAM and battery backup.
Also, now that it is killed, it would be nicest if they opened all the related patents.
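A rough sketch of the journal/write-ahead-log idea mentioned above, assuming plain files (the paths and record format here are made up): a commit only needs a small fsync'd append to the fast, high-endurance log device, while the bulk data file is updated later during a checkpoint.

```python
# Minimal write-ahead-log sketch (illustrative only).
# A write becomes durable once it has been appended and fsync'd to the log;
# the main data file is updated lazily at checkpoint time.
import os, json

LOG_PATH = "wal.log"        # would live on the fast/high-endurance device
DATA_PATH = "data.json"     # main store on bulk storage

def commit(record: dict) -> None:
    # Append the record to the log and fsync: once this returns, the write
    # survives a crash even if DATA_PATH has not been updated yet.
    with open(LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
        log.flush()
        os.fsync(log.fileno())

def checkpoint() -> None:
    # Fold logged records into the main store, then truncate the log.
    # (A real implementation would write to a temp file and rename it
    # so the checkpoint itself is also crash-safe.)
    state = {}
    if os.path.exists(DATA_PATH):
        with open(DATA_PATH, encoding="utf-8") as f:
            state = json.load(f)
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH, encoding="utf-8") as log:
            for line in log:
                state.update(json.loads(line))
    with open(DATA_PATH, "w", encoding="utf-8") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    open(LOG_PATH, "w").close()   # truncate after a successful checkpoint

# commit({"key": "value"}); checkpoint()
```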
Wendel from Level1Techs is certainly very sad since Intel announced this...
Optane should have been a "prosumer" type of product that boosted SSD performance. For enterprise, a drop-in replacement for RAM at half the cost and no worry about power loss would have been amazing value. You wouldn't have to keep every single server on a UPS.
I've got a workload that absolutely thrashes disk if it's not cached in RAM. And it's much easier to dump a couple TB of Optane into a box than it is to get a couple TB of RAM. I guess I just hope Samsung Z-NAND catches up.
I've only seen a single Z-NAND model in retail, and that was years ago. I'm not sure Samsung even makes it anymore. Kioxia still makes XL-FLASH, which is essentially the same (both are SLC).
Seems to me Intel was lying to investors about Optane's future even after Micron sold the Lehi, Utah fab to Texas Instruments. Instead of coming clean, they perpetuated the ambiguity for almost a year.
Optane reminds me of Intel's Itanium. That shit was ahead of its time in a sense, plus it was failing at points precisely due to breaking new ground. Now the Optane situation has an eerily similar vibe. It's not my money, I know, but they really should have stuck with it, just like Intel lost the initial x64 round to AMD because they weren't ready to even minimally maintain the damn thing.
Itanic was always awful, an attempt by Intel to have a 64-bit monopoly. AMD64 did what programmers wanted: it ran 32-bit binaries natively, so it evolved Windows forward without relying on magic optimising VLIW compilers that were impossible to write.
Eh, I disagree. Optane is just better, however it's expensive. Itanium sucked in many ways
@@RobBCactive "Itanic was always awful"
It was way faster and more efficient. The "awful" part was that it broke with x86 and initially only had very slow x86 emulation.
But it solved many of the problems that x86-64 still has.
"without relying on magic optimising compilers to VLIW that were impossible to write."
Yeah, just that compilers now are way more complicated and capable than ia64 ever required.
@@ABaumstumpf That was one system everyone hated; it was overpriced and killed by better tech once Intel recognised their failure.
Itanium was choked by low bandwidth, and Intel bet too much on software pipelining (versus just loop unrolling, the whole cyclic vs. acyclic scheduling story) and full predication, but probably the worst was all the mess with bundles. Still, the floating-point capabilities and the division and square-root algorithms Intel developed with HP were top notch and one of its greatest achievements, unfortunately mostly unknown to the general IT public.
Would have liked it in my personal rig, but the price was insane while persistent memory was not available. It was either get a 32GB Optane SSD to "speed up" an HDD, or get a 250GB SSD and still use it to speed up the HDD, or just use it directly.
We would have liked it at work, but again, it was not available for laptops, hard to get for servers, and then they cut it off. We have workloads that only sporadically require loading a lot of data; right now this is all on normal SSDs and you get significant slowdowns, but adding more memory is just not worth it (direct costs as well as power), or for the bigger installations just not possible (1.5TB limit).
I just recently had my Windows install on an H10 basically nuke itself. I was using Optane as a cache for the SSD portion, and for whatever reason that relationship got corrupted or something, and now it won't detect the main SSD portion. I need to get an M.2 enclosure and see if I can salvage my files from another computer. Left a bad taste in my mouth, honestly. The little 32GB M.2 Optane drives were nice for FreeNAS installs, but the caching/acceleration feature seems to be designed really poorly IMO, since it behaves like RAID 0: if something goes wrong with one portion, or there is a software bug in the relationship, it all falls apart.
Maybe i should snag some optane before they go fully extinct.
Collectors item for sure.
If only we could have non-volatile memory as fast as cache and as close as possible to the processor... Unfortunately this is, at least for now, unrealistic, but it's clear that memory access latency and bandwidth are crucial, as Apple has confirmed with their Mx chips. Processors need the fastest possible unified memory as close to them as possible. Seeing it one day be non-volatile as well would be a miracle, but who knows, maybe we will get there...
I know it wasn't the initial plan, but I HONESTLY think for consumers, rather than treating it as RAM, it should have been used to accelerate HDDs.
Gaming-focused drives, like Western Digital Black, could easily justify costing 50 dollars more on the 4-6TB models by adding 16-32GB of Optane on the drives, and still be cheaper than a 2TB SSD, so people could actually download a couple of games from their Steam library without breaking the bank for a large SSD.
Look up Intel Terahertz: a 1000GHz CPU with 10,000 times less leakage and lower power, and it ran cooler with their "special sauce", they said. Yet 21 years later it's still not used?
Optane was in my planned upgrade path, I have a 9900K and a big Samsung SATA SSD
Can we say CPU registers are also a type of storage? I mean registers do store values.
I was excited to see some enterprise database performance benefit, but the stuff that really helps the performance tends to be just fine with volatile memory.
The fab is located in Lehi, not Leti.
Unfortunately this is all too common for Intel. They launch a promising / technically interesting product, it doesn't get the immediate commercial success they were hoping for, and they cut their losses and cancel the follow-up. Knights Landing, Compute Cards, Kaby Lake-G, Lakefield, the list goes on.
386-SX, Iridium...
...the freshly launched GPUs (supposedly)...
I really would love a P5800X... $4k is just too insane.
There is the 400GB $1.5k model.
@@Raivo_K That's no good in 2022.
The impression I got with Intel's Optane memory is that they thought it was something to be worshipped and never let out of the box. And that's a shame because they could have made a ton of money with everyone benefiting. But they just kept the price up as high as possible with mainly smaller memory capacities.
I don't see what the point of Optane is now that we have nvme drives that have transfer rates of several gigabytes per second. Any money you put towards Optane can be put into more DRAM or larger/faster SSDs, depending on your application. I didn't see much of a purpose when it was announced and I'm not surprised that they're shuttering the product line now.
Well, Optane DIMMs allow for multi-TB capacities with quick QD1 performance. Ideal for databases or HPC.
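For context, QD1 means one outstanding request at a time, so the number being praised is essentially per-request latency rather than throughput. A rough, hedged probe of that idea is below (the file path is made up; a real measurement would use a purpose-built tool with queue depth 1 and direct I/O, since the OS page cache will otherwise absorb most of the reads).

```python
# Rough QD1 latency probe: issue one 4KB random read at a time and time it.
# Illustrative only -- results include page-cache hits unless you bypass it.
import os, random, time

PATH = "testfile.bin"      # made-up path; any large existing file works
BLOCK = 4096
RUNS = 1000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
latencies = []
for _ in range(RUNS):
    # Pick a block-aligned random offset inside the file.
    off = random.randrange(0, max(1, size - BLOCK)) // BLOCK * BLOCK
    t0 = time.perf_counter_ns()
    os.pread(fd, BLOCK, off)              # exactly one request in flight
    latencies.append(time.perf_counter_ns() - t0)
os.close(fd)

latencies.sort()
print("median QD1 read latency:", latencies[len(latencies) // 2], "ns")
```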
Optane DCPMM had a few serious issues. It required the more expensive high-memory-limit Xeon CPUs. It wasn't really that much cheaper; I'd say the claimed 2x lower price per GB was versus LRDIMMs or 3DS LRDIMMs, which are much more expensive than standard RDIMMs. Last but not least, the performance compared to memory is abysmal; there is a whitepaper from Lenovo on the tests for SAP HANA. AFAIR it showed 50% less speed. All in all, if you need more memory than standard x86 can give you, POWER looks really good.
Only a couple of years ago I wanted an Optane module to speed up my HDDs, because at the time a big SSD was much too expensive for me.
Then came AMD with its software solution, where you could use any cheap small 128GB SSD to speed up the HDD by using it as a cache. This I also didn't buy ;-)
In the end (four years later) I bought my new PC with a real 500GB SSD, and since then (2018?) I use only SSDs in PCs and HDDs in the NAS :-)
Ian, can you tell me what brand U.2 to pcie adapter you are using for your Optane SSD's?
If I remember correctly, Optane/3D XPoint wasn't developed by Intel; rather, it was something they acquired by buying its developer.
Just when I thought I knew everything! ;) Great job Ian.
This may be stupid, but when I hear "phase change", I am reminded of Panasonic's PD and DVD-RAM discs, also extinct and also based on some kind of phase change technology. Like Optane, those discs had a reputation for rewritability and high reliability. That marketing claim is oft repeated as fact, whereas in my entirely anecdotal and personal experience, those discs were the most error-prone and unreliable optical discs that I have used. When I compare Intel's initial claims to their subsequent SSD endurance rating, coupled with their unwillingness to provide samples to Ian, I've got to wonder whether there isn't more going on here.
Yep, I tried various RW optical discs; they never let you rewrite reliably, so they were more wasteful than the stacks of WORM discs that commonly ended up as coffee coasters.
The marketing claims vs reality, reminds me of another recent Intel story.
It's a pity SK Hynix isn't, and Micron didn't, do more with the 3D XPoint tech; performance-wise the Optane-based SSDs were the fastest storage short of a RAM disk in the M.2 form factor, even if the difference between Optane and TLC/QLC wasn't highly perceptible to the average end user (who would have been on a spinning disk or SATA-based SSD at the time anyway).
They killed it themselves by not making it work on all PCs. Could have made billions off of that.
this is the simple FACT, and very true 🤣🤣🤣
There were plainly and simply no good software ideas at the operating-system level allowed to come through inside Intel to make this kind of persistent memory become commonly used. Pricing was far too high, and performance turned out not to be really competitive. Huge clients avoided yet another attempt to get them onto a proprietary leash, like the DDR-T stuff. Still, the main reason this technology died was that it turned out unable to compete with the advances in flash memory.
Kinda surprised that CXL wasn't the breakout tech for 3D XPoint.
Looked all over the internet & couldn't find a clear answer to my question. Hoping you might shed some light.
I've decided to try to breathe some new life into my laptop that has only 8 GB soldered RAM by adding the Optane Memory Series (what you call cache drive) into a second M.2 slot for multitasking: Chrome/Adobe Acrobat/Office running all the time in parallel eating up all my memory.
I wanted to get the 32GB version of the Optane Memory but bumped into this on a non-Intel website & couldn't find a confirmation or disproof of it anywhere else: "You must use the M10 version of Optane Memory. The original version of Optane Memory will only be supported in a desktop environment".
Do you happen to know whether a non-M10 Optane Memory Series drive should work on a laptop? If not, I'll have to stick to the 16GB version as supplies are scarce at this point.
Thanks in advance!
P.s. I know Intel has stopped supporting this product for consumers & am ready to experiment with setting this thing up.
P.p.s. thanks for the video!
Any size M.2 drive that physically fits will work, whatever capacity of drive. They say that only the 16 GB is supported on the basis of their validation and guarantee, but the beauty of open standards is that whatever conforms to the standard should work effortlessly. But just to confirm here - the Optane cache drive won't add DRAM - it'll just be seen as another storage device. Intel called it 'optane memory', but it's just storage. Might as well add in 1TB M.2 drive.
With cheap Optane modules available and all new motherboards featuring a ton of NVMe slots, I was looking forward to building a new 13th-gen system using it and really tweaking a 13600K system for productivity. Will 13th gen still support Optane, or is it dead on the desktop now as well?
I don't understand why they don't license the technology out if they were going to kill it... it could revolutionize the entire industry, and they could make tons.
They could have used it as cache on their SSDs or something (since it was so small in capacity). What a bummer to lose this awesome tech though.
They did and it was the worst of all 3 Optane product lines (cache, SSD and memory).
They never found a problem for their solution.
The problem was price, and second in line was capacity. If the price had been half that of DRAM, the story would be different; let's not even imagine it being cheaper to make than flash memory...
SAP and similar use cases absolutely loved this stuff. It just never became affordable.
A real myriad of issues that plagued such a great technology :(
Would've been incredible if 3D XPoint could've continued development. The IOPS, latency, and endurance were truly incredible. Tons of issues with the Micron agreement on the fab you mentioned, and all sorts of development issues in general...
Good idea, but a niche product that was hobbled by an Intel lock-in. Add the massive investments into DRAM and flash manufacturing and R&D that weren't available for Optane because of the proprietary lock-in, and it just got crunched by products from both sides. Even if it were not locked into the Intel ecosystem, the success of an Optane-like product would not have been guaranteed... it's just that its proprietary nature definitely guaranteed its death.
You are talking about the DIMM module version. The add-in SSDs did not have any such lock-in.
@@Raivo_K The Optane SSDs were crippled by the I/O bottleneck; the promise was near-DRAM latency and speed, nanoseconds rather than microseconds spent going through an OS stack designed for disk drives.
Memory-mapping data from persistent memory is interesting (sketched below), but as it requires software changes and the main advantage is persistence at the cost of speed, I guess most people will simply stick with RAM and precautions against power failure.
The cost advantage they promised in the initial hype simply was not delivered; that would have been key to wide adoption, but overcoming the economies of scale in DRAM & NAND was a tall order.
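As a hedged illustration of what those "software changes" mean, this is roughly the memory-mapped programming model persistent memory targeted: ordinary loads/stores into a mapped region plus an explicit flush, instead of block-sized read()/write() calls. The file path below is made up, and on real pmem hardware you would map a file on a DAX-enabled filesystem (or use a library such as PMDK) rather than an ordinary file as here.

```python
# Sketch of the memory-mapped persistence model (illustrative only).
import mmap, os

PATH = "pmem_region.bin"   # stand-in for a file on a DAX/pmem filesystem
SIZE = 4096

# Create and size the backing file once.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

fd = os.open(PATH, os.O_RDWR)
try:
    region = mmap.mmap(fd, SIZE)
    # Byte-addressable update: ordinary stores into the mapping,
    # no read()/write() syscalls or block-sized I/O in the hot path.
    region[0:13] = b"hello, optane"
    # The explicit flush is what makes the update persistent -- this is the
    # adaptation that software written for disk-based storage never needed.
    region.flush()
    region.close()
finally:
    os.close(fd)
```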
Is it any good, as of 2023, as a caching drive in front of a big cheap SATA SSD? I am an average consumer pondering buying one of those inexpensive sticks being sold now on Newegg/eBay. Is it worth it for a SATA SSD rather than an HDD? Anyone, please? I have no need for NVMe, I'm just interested in speeding up old SATA technology on the cheap. My mobo supports Optane.
I'm still waiting for RAM that is flash based and non-volatile and huge in storage....
That's what Optane was
@@dlarge6502 No, it was at least a couple orders of magnitude slower than RAM.
Seems like Optane was in a really awkward spot: existing software is optimized for slow SSDs/fast RAM and cannot easily take advantage of an Optane intermediate layer without serious adaptations. But no one is going to do those optimizations if Optane is not already a well-established technology...
Intel should probably have focused on the handful of niche cases where the Optane benefit is largest and then tried to expand from there, instead of the mixed strategy they seem to have pursued here.
Add-in card SSDs did not require any special layers or optimization. These work well even on the newest Ryzen systems.
Cache models require PrimoCache, and DIMMs are not compatible anyway.
I also run a 32GB Optane module as a caching solution for my 1TB drive, and boy, when you are dealing with a spaghetti-coded game it actually works wonders: 3-5 minute wait times down to 1-2 minutes shows it has its use case, but not everyone can afford its use cases. Even AMD's solution got scrapped halfway (at least it's most likely open source, so people can pick up the slack), but now you can just turn your NVMe drives into a caching solution, since better software compatibility means there is something available.
15:20 You'd think Intel would make this part of their B2B rent-a-server business model.