The strange code morphing CPU inside the Sony VAIO U1
- Published 25 Jul 2024
- In the year 2000, a small company called Transmeta Corporation released a CPU that challenged Intel's Pentium. This new CPU was called Crusoe and emulated x86 CPUs by 'code morphing'. The Sony VAIO U1 was one of a handful of computers that contained this radical new design in CPU technology. One advantage of the Crusoe was its extremely low-power operation, giving it an advantage in very small computers while providing very long run time on batteries.
Chip Hall of Fame: Transmeta Corp. Crusoe Processor
spectrum.ieee.org/tech-histor...
Transmeta speed debate - damned lies and benchmarks?
www.theregister.co.uk/2000/10...
Image credits:
commons.wikimedia.org/wiki/Fi...
commons.wikimedia.org/wiki/Fi...
commons.wikimedia.org/wiki/Fi...
commons.wikimedia.org/wiki/Fi...
Stan LePard - Windows Welcome music
urlzs.com/whARX - Science & Technology
quick and efficient emulation is always so hard. the fact that the crusoe was able to run complex instructions translated into a single simple line in a single cycle is ridiculously impressive.
so sad that this kind of technology wasn't explored more through the 2000s.
Apple's M chips are the closest we have now. Though, it isn't entirely the same.
VLIW was explored in the early 2000s; almost every silicon manufacturer went through a phase of wanting to use it. Intel had their infamous Itanium architecture and ATI/AMD had TeraScale in the GPU space. There were also a couple of smaller players that tried their hand at VLIW. The problem is, it's just not a very good design choice, and going out of order with multiple superscalar execution ports is just easier for everyone involved.
@@JustJustSid because it is very hard (maybe even impossible) to make a good automatic compiler for VLIW. You need to predict the data, set the branch prediction in advance... x86 does it for you in hardware. And Transmeta was the only one doing it in HW on real data without making the programmers and compilers do the optimizations in advance. Which seems like the best strategy in hindsight after Itanium and the Radeon HD 5000s.
Nvidia bought Transmeta and used their technology to create a series of efficient and performance-wise extremely competitive ARMv8 CPUs (2014-2018: Denver 1&2, as well as Carmel). After that, they switched over to ARM's regular CPU templates without ever publicly stating a reason. Maybe they encountered the limits of this technology when it comes to performance scaling, or they just deemed that in the current market, making use of such a special core design isn't worth the extra effort. Anyhow, maybe it'll get picked up again one day.
@@vogonp4287 Apple's chips are just ridiculously wide and use big caches and instruction reordering windows. They are engineered for lower clockspeeds, fabbed on the latest nodes, ship many fixed-function accelerators with tight software integration, and thus, run extremely energy efficient. Their CPUs are designed such that as many different micro-OPs (instructions -> macro OPs -> micro OPs -> buffer&reordering -> dispatch&execution) as possible can be in-flight at any given time, but they are dispatched independently from each other.
VLIW performance on the other hand stands and falls with being able to dispatch a large enough piece of work to saturate all execution ports at once. If that cannot be done, part of the CPU's execution units will remain idle. That bubble cannot be filled with other work. When using Transmeta's approach, you get a software instruction reorder buffer that's measured in megabytes instead of number of entries, which helps alleviate that problem. Having a good, deterministic (regarding compilation time) compiler design and a wide enough spectrum of instructions tailored towards fully covering the combined equivalents of many ARM/x86 instructions is another must though. These things have to be balanced against each other, because more possible instructions also means more compiler complexity.
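The saturation problem described in that comment can be sketched in a few lines: a toy scheduler packs independent instructions into fixed-width bundles, and any slot it cannot fill becomes a NOP bubble. This is purely illustrative (the bundle width, tuple format, and dependency rule are invented here, not Crusoe's actual molecule format):

```python
# Toy VLIW bundler: pack instructions into 4-slot bundles.
# An instruction can join the current bundle only if it does not
# read a register written earlier in the same bundle (a dependency).
# Unfillable slots become explicit "nop"s -- the idle-unit "bubbles".

BUNDLE_WIDTH = 4

def bundle(instructions):
    """instructions: list of (dest_reg, src_regs, name) tuples."""
    bundles, current, written = [], [], set()
    for dest, srcs, name in instructions:
        depends = any(s in written for s in srcs)
        if depends or len(current) == BUNDLE_WIDTH:
            current += ["nop"] * (BUNDLE_WIDTH - len(current))
            bundles.append(current)
            current, written = [], set()
        current.append(name)
        written.add(dest)
    if current:
        current += ["nop"] * (BUNDLE_WIDTH - len(current))
        bundles.append(current)
    return bundles

# A dependent chain packs terribly: each result feeds the next op,
# so every bundle carries three wasted slots.
chain = [("r1", ["r0"], "add"), ("r2", ["r1"], "mul"), ("r3", ["r2"], "sub")]
# Independent work packs well: all three ops fit in one bundle.
parallel = [("r1", ["r0"], "add"), ("r2", ["r0"], "mul"), ("r3", ["r0"], "sub")]
```

A runtime reorder window measured in megabytes, as the comment notes, gives the scheduler far more candidate instructions to fill those slots with.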
Great video! I was the fab product engineer for the Crusoe processor designs back in the day. That project was a boatload of fun. Got to work with a lot of really talented folks at Transmeta and even got to meet Dave Ditzel and Linus once. I had a Crusoe based VAIO Picturebook on my desk for a couple of years and still have VHS copies of the press announcements somewhere. Thanks for bringing back the memories!
Wow! thanks for sharing. The ideas behind these CPUs were amazing. Well done to you and everyone involved for shaking up the CPU market.
wow :D
oh shoot, if it's not too late, you should try to archive that VHS tape! i haven't seen it posted on youtube or the internet archive. all you need is a VCR and a cheap lil a/v capture card, i happen to have those on hand myself and it's a quick n dirty way to digitize VHS tapes!
@@EeveeEuphoria I still have them, but I don't believe it would be within my right to post them.
@@rafterrattler561 At least keep a digital version of it to yourself, don't risk losing them :)
"If Intel don't keep innovating, then they'll surely lose their place as the world's foremost CPU company over the next few years as changes continue to happen in the industry."
My lord this quote aged like fine wine.
I JUST POSTED THE SAME. The Sage here got it 100% right.
@es-zw3mg i mean 13th gen is insane in comparison and only really happened due to amd catching them off guard with server tech, and 10 years of 14nm.
The lack of innovation is why AMD is now a competitive player on the server market.
Transmeta was able to make the CPU optimize the code by itself which is really amazing. The limitation is clearly storage, they can't have the code morphing hogging all the hard drive for code optimization.
What if it would only need to be morphed once, replacing the original code?
I imagine it would all be done in RAM or even cache or dedicated on-die memory.
@@widgity It allocates 16MB of RAM from the system to use as a cache.
I'd love to see what performance it got with a SSD
If Linux and Transmeta coexisted together I think we could have reached much higher levels of optimization between silicon and code.
Transmeta's VLIW/code morphing technology lived on with Nvidia's Project Denver which did a similar thing with ARM code translated to the processor's internal instruction set and optimised over time.
Some of Denver's engineers previously worked for Transmeta and there were plans to make it run x86 code, but they were unable to license the latter from Intel.
But even here Nvidia differs from Crusoe in having hardware decoders to give the binary translation a reasonable performance floor.
Do u know about e2k-architecture?
Similarly, the modern M1 Macs do a similar thing in software. Unlike the earlier Rosetta 1, which interpreted PPC instructions on x86, the current iteration actually optimises an x86 binary on the fly and permanently stores the converted ARM variant. Amiga did something similar when they switched architectures.
@@theharbingerofconflation also the M1 has some x86 compatibility hardware, which makes them fast with only a ~30% loss from native arm.
interestingly they also have javascript specific instructions
the itanium architecture was vliw, but failed due to lack of support.
I think unless there's leaps in battery tech, at some point we'll see more risc/arm chips with x86 backward compatibility, or x86 coprocessors, at least in the mobile market, because of the performance/W gain.
@@satibel x86 will close in on Apple's chips, they're not THAT impressive.
I wish Transmeta had succeeded. My local hackerspace had contact with some of the Transmeta engineers and ended up in possession of a couple of racks worth of multi-unit clusters. We never really managed to get the whole cluster to boot, alas. But it was neat having someone working on a fascinating piece of history.
Hopefully you still have some of the equipment and didn't throw it away?
@@broklee I am no longer associated with that hackerspace, and have no idea what happened to it.
Back in the day I purchased a PCG-U3, second hand from Japan, it was already obsolete when I got it, but the form factor and thumb operation felt like the future. Good memories :)
I always loved the look of the black U3 and wanted one, still do!
When you watch companies like GPD basically make the same thing today but with far worse design, fit and finish, you have to show these guys who were literally writing the future blind, mad respect.
I have a 16:9 Fujitsu Lifebook with a Crusoe processor running at 933 MHz.
I got it on a local flea market for 8 dollars.
I managed to put 3 new cells into its broken down battery. It even has a DVD drive inside, with 512 MB RAM, and ATI GPU, and Ali chipset.
the crusoe is amazing. the fact that it rivals a 500 mhz p3 recompiling software on the fly is amazing
Yes, but it also has certain issues that almost make it impractical for certain markets.
For example, it hates dynamic recompilers like those used in emulators; it either crashes the emulator itself or, at best, gives no performance improvement, which makes it pretty bad for emulation.
Source: I actually owned a laptop with a Transmeta Crusoe processor. Due to those bugs the 700MHz Crusoe I had would perform, at best, like a Pentium II 400MHz processor on emulation (made worse by dynamic recompilation not working). Pretty great at everything else though.
@@StriderVM JIT compilation was only just starting to become important though, and I don't believe there were very many implementations that mattered. So, maybe being incompatible with JITs would have been excusable at the time; and Transmeta could have worked with the handful of JIT manufacturers to expose a direct API to Transmeta's morphing layer? Of course that's all hypothetical. Time moved on from there.
On the other hand, the fact that dynamic code generation resulted in buggy execution has me worried. That sounds exactly like the type of problem that would have security implications. If you think that side-channel hardware bugs such as Spectre are a major pain, then I don't even want to think about what problems you run into when you can race the morphing software.
It is still a technology from the future. Imagine distributing only LLVM IR code and the CPU itself running a JIT compiler, the most efficient JIT compiler possible, making for better performance as it runs.
@@StriderVM Ironically the hardware is doing the dynamic recompilation itself, perhaps with a simpler emulator that just loaded and interpreted the instructions instead of double recompiling them.
@@gutschke "side-channel hardware bugs such as Spectre are a major pain" only for multi-tenant cloud providers. Why are we even sharing cache? That's insanity; we need better isolation if you are going to run different tenants' code on the same physical CPU.
Honestly these small fines are so dumb, really should scale to the size of the corporations. Intel should have been fined billions imo
should be 50% of their profit AT LEAST. 250M is pocket change for intel.
@@simontay4851 i'm thinking more like 90% of the profit of every patent infringing processor sold, until the infringing technology was removed.
@@KOSMOS1701A
Or 200% of their profits.
Intel basically admitted to stifling multiple years of potential technology advancements by their competitor by simply infringing copyright on new technology and making them shut down by economies of scale.
We could have had a magnitude faster consumer CPU by now.
Intel should’ve been dissolved and the CEO + board of directors federally prosecuted for market manipulation at a MINIMUM.
This was always an interesting concept. They basically took the on-the-fly instruction translation pioneered by NexGen/AMD and made a unique solution of their own: instead of being locked into hardware, the CMS's features could be updated with new instructions (such as MMX, 3DNow!, SSE, etc) and run on the same chips as before. Theoretically, any ISA could be emulated by the Crusoe if the correct CMS was loaded, but nothing except x86 was released in a shipping product.
The process this CPU uses is called "binary translation" or "binary compilation", and it consists of 3 stages. The first is pure emulation, with a 100x to 30x performance degradation; in this stage execution counters are collected for each x86 instruction. In the second stage, hot code regions are compiled to native Transmeta code by a naive and simple compiler, each basic block of code separately. In the 3rd stage, super-hot loops are translated into highly optimized native code.
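That three-stage scheme can be sketched with a toy dispatcher: every block starts out interpreted, and an execution counter promotes hot blocks to a "compiled" tier and very hot ones to an "optimized" tier. The thresholds, names, and structure here are invented for illustration; Transmeta never published the real CMS internals.

```python
# Toy tiered translator: code blocks are promoted as their execution
# counters cross (invented) thresholds, mimicking the three stages:
# interpret -> naive per-block compile -> optimized hot-loop code.

COMPILE_AT, OPTIMIZE_AT = 10, 100  # hypothetical promotion thresholds

class TieredTranslator:
    def __init__(self):
        self.counts = {}  # block name -> times executed
        self.tier = {}    # block name -> current translation tier

    def execute(self, block):
        n = self.counts.get(block, 0) + 1
        self.counts[block] = n
        if n >= OPTIMIZE_AT:
            self.tier[block] = "optimized"
        elif n >= COMPILE_AT:
            self.tier[block] = "compiled"
        else:
            self.tier.setdefault(block, "interpreted")
        return self.tier[block]

t = TieredTranslator()
cold = t.execute("startup_code")      # runs once: stays interpreted
for _ in range(50):
    warm = t.execute("event_loop")    # runs often: gets compiled
for _ in range(200):
    hot = t.execute("inner_loop")     # runs constantly: gets optimized
```

The point of the tiers is that translation effort is only spent where the counters prove it will pay off, which is exactly why code that runs rarely stays slow under this scheme.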
VLIW is the opposite of a RISC machine. It has multiple instruction units. Look into the Multiflow Trace and Cydrome Cydra 5 for predecessor systems. The Crusoe was the first VLIW that solved the problem of how to extend a VLIW architecture: they used a JIT (Just in Time) compiler to create optimized code for their VLIW machine, something the Multiflow Trace and Cydra 5 didn't know about (since they were products of the mid-80s.)
AMD introduced a microcode architecture which did a somewhat similar thing, by putting more of the architecture's logic in microcode software. No optimization though, and still quite complex. Rosetta on Apple ARM is most comparable I guess, and it works great.
This reminds me of Android's JIT profiling engine, which attempts to improve the performance over time, in addition to compiling the app into native CPU code ahead of time to eliminate dynamic compilation overhead. Transmeta was doing this more than a decade ago!
Except Android also caches this optimization result so that it can be reused again.
Most JITs do the tier up thing, including all major JavaScript engines, the JVM, and some CPU emulators I think
Android no longer pre-compiles binaries. Instead, P-code is delivered in the package, and compiled at install time.
@@LMB222 I thought they were compiled to ARM these days by the developer.
I couldn't help but draw a parallel between this and the way Wine/Proton's DXVK works to translate Direct3D API calls into native Vulkan on systems that do not offer native DirectX in software. Specifically, the Vulkan shader cache pre-compiles the DirectX calls into a lookup table (cache) so that when the application makes that call, it can translate it to Vulkan much faster than if it had to compile that instruction on the fly; the result is a significant performance boost for that application. If you don't have pre-compiling enabled, the application will noticeably stutter and lag for a while as this information is gathered on the fly, but after some time performance will improve.
Whoa, the XP OOBE music. I haven't heard that for years and years and years. Nostalgia factor 10.
What a blast from the past! I had forgotten the Crusoe. When I saw the thumbnail it all came back to me. I don’t think I ever knew that Linus Torvalds has been involved with TransMeta. Very cool!
"that thing was going to revolutionize the world - until it didn't"
The idea of x86 code translation the Crusoe was doing wasn't anything new. Code morphing from x86 to something "RISC" started with NexGen. Intel did its own thing in the Pentium Pro. This implementation survives today.
What Transmeta wanted to do was run apps on a recompiler so instruction scheduling and execution could be done in a way that made the CPU core simpler. This idea has only been tried again by Elbrus (who did make a much better x86-compatible Crusoe-like CPU) and NVIDIA (who made Project Denver, which was in the Nexus 9 tablet).
it was killed purposefully, intellectual ventures owns the patents. guess who owns a 20% stake in that company? none other than edward jung the co-founder of intel.
there's a picture.
Damn, 1.5W avg. For back then it's incredible it can run WinXP 😁
Back in like 2004 I had a P4 3.8GHz and that thing ate quite a bit of wattage and always ran hot on some tower cooler with like 3 or 4 thick heatpipes and a 120mm fan. I tried like 4GHz but it was melting and I could never get it working at that frequency 😁
Very interesting piece of cpu history. I wonder, if these chips took off, if they would have started making software that runs natively and bypass the x86 emulation. How fast would it have ran?
In any case, it seems like a relatively painless way to switch architectures.
That’s a really interesting question. Unfortunately not much is known because it was proprietary.
In the end it seems that Apple silicon has answered the question this posed: that x86 emulation can be done, and done well, with low power consumption and incredible performance, and can convince developers to port their code to native for better performance. The approach might be quite different but I think given enough time and development Transmeta would have gotten really close to native performance.
The transmeta cpus were never designed to be able to directly run application or operating system code. Their native code format doesn't have protected memory or supervisor mode or anything like that - those features of standard processors were actually emulated. It also wasn't *really* designed to switch architectures. A bunch of the native instructions were designed so that e.g. flags from performing certain mathematical or comparison operations were the same as native x86 instructions would produce, to reduce the overhead in emulation. That said, i do recall hearing about them making a prototype chip that could run java bytecode directly.
@@kepstin Wow, very interesting! Thanks for the info.
It seems like emulating security features would open up a big new attack surface for malware. Not that doing it in hardware has been much better (Spectre and Meltdown, I'm looking at you, lol).
@Akshay - Yes! I have heard amazing things about Apple Silicon. I'm very eager to see what Intel and AMD's answer to it will be.
Seems likely they would have kept updating the internal architecture to further optimize things. When it's a private instruction set you have more flexibility in changes.
Apple complaining about not having enough space on their macs to put a headphone jack, while this mini PC has a whole army of inputs on its back lid...
Oh wow. I remember reading all about the Transmeta Crusoe chip back in the day, but I had completely forgotten them in the intervening decades until finding this video in my YT suggestions.
Great video, first time viewer.... you had me with the Xp setup music... so much nostalgia 😅
I remember hearing about the Crusoe CPU many years ago when i was bored and investigating the history of CPUs. nice to hear about it in detail
That looks so unique & high-quality the code CPU's or something like that are massive in this snap never saw this.
Actually Transmeta wasn't first at CISC translated into their own RISC approach.
AMD did it first with the K5, then NexGen did it with the Nx586 and the unreleased Nx686, which AMD ultimately released as the AMD K6 with Intel's GTL bus.
Even Intel did this with the Pentium II and its P6 RISC core inside.
The difference is that all those implementations are fixed. Transmeta did it dynamically.
I just learned something that I never knew I didn't know! Cheers from Detroit, great video!
That lined up transition at 5:31 was impressive!
As someone who has been aware and following this kind of tech for two decades, this is good stuff!
What a beautiful little machine. You can tell they implemented the XP vibe in the looks of the blue one. I personally like the black one more
I also prefer the black one. Not easy to find though.
Sony with incredible form factor and those thumb gizmos.
It's amazing how far battery technology has progressed.
Lithium ion will never be ideal, but dang, it's encouraging seeing how far the tech has come. That battery is way, way bigger than smartphone batteries of similar capacity.
That CPU was emulating the code of a Pentium III and at better speed. Still own a VAIO PCG-C1MV - the Crusoe TM5800 works amazing, also good for vintage gaming with the 8MB ATI Rage.
I had a Sony Vaio C1 picturebook using the same processor. Was absolutely amazing. The tiny thing could play Counterstrike for hours and I'd bought a bonus quad capacity battery that took its runtime to insane lengths.
The processor was bonkers and plenty of games didn't like being run on it. But I loved that damn picturebook.
I remember reading about Transmeta and the Crusoe, and I absolutely wanted one. I waited for them to be released, but by the time I went back to look them up, they were already out of business.
Fascinating chip, I remember hearing about it back when it was new but of course its applications - very expensive, very tiny subnotebooks - were not really on my radar. As a CPU geek though I wish I'd known more about it.
Since it's essentially emulating x86... was there any plans to emulate other architectures? not that there was any huge demand for this beyond maybe some for PPC at the time but it's an interesting idea.
There was talk of a hybrid chip that could do either x86 or PowerPC. I'm certain this was part of Transmeta's planned strategy.
@@JanusCycle they were able to run java byte code on metal.
ALMOST on metal ;)
If we took into account all the people in the comments that had something to do with this processor or PC itself we'd be using it today.
My experience with it? Saw it in a dusty display in CompUSA, saw the price and moved on. Or maybe it was the Libretto. Who knows.
Efficeon and G1 Transmetas also made it into heaps of thin clients. I've found some old Fujitsu Futros at work that had them and got them back to running full desktop OSes. Lots of fun. Downside: add a fan. Running compile jobs for NetBSD, Linux and such and having a blast killed one.
This sounds a lot like what Apple is doing with their M1 chips - emulating x86 on a RISC cpu, with good performance and stunning power efficiency as a result. Shows that the idea had merit. I'd have loved to have seen this cpu line survive and compete with for example the ridiculous P4 chips in laptops at the time.
thanks for the summary
I had a Vaio Picturebook with the 867MHz Crusoe CPU back in the early 2000's. It was a really special machine. I recall a particular college computer science professor who scoffed at the Crusoe and didn't believe me when I said I had a Sony-branded laptop that featured it! I had to bring it in for show and tell.
imagine if they had succeeded, we would've had RISC processors everywhere
The engineering that went into making this laptop is insane for a product intended for the Japanese market only
Nice shot of some D&D dice at 2:06 there! :)
Wow, hearing the name Crusoe awakened deep memories of late night message board conversations. Haven't thought about Transmeta in many, many years.
Great video i really enjoyed the content. I have a Sony Vaio Picturebook C1-MSX that has the same CPU but at 867MHz. Great little machine.
Sharp MM10 is based on Transmeta processor too. It looks and operates nice.
My friend's friend bought the C1 but returned it to the store after our trip to CES in 2001, as it was too slow for him
Great video!!
Friend had a laptop with one. Was so excited to see it turn into an amiga or zx spectrum at the flip of a switch, alas that wasn't to be.
Tysm I kept a thin client with a transmeta not just for nostalgia but because it's a pretty nice x86 box.
I have Sony C1 with Crusoe processor. Very interesting machine.
That code morphing was very fascinating
Love the U2 cut in :)
This feels like how apple is now giving intel a run for their money in the ultrabook / pro notebook space with the M2 CPUs.
Using the Windows XP welcome music as the intro and outro music, nice. I considered the same thing for my channel a few years ago, but was worried about copyright stuff. I've since uploaded the entire song to my channel, so it'd probably be fine, but still.
Awesome video by the way, I just had to say that lol.
Absolutely lovely video. Thanks for talking about this. I wonder if this approach might be a good idea for emulating high level consoles on the MiSTer, where there just isn't enough logic to implement the original chips outright, but where also those chips are not going to be in real time step with the code, so emulating them in a cycle accurate fashion doesn't matter.
Unfortunately Transmeta didn't release enough fine details about how they did this. I'm glad modern projects like MiSTer exist, it's really important to preserve original hardware in this way.
@@JanusCycle oh, I mean just the general approach. I'm a programmer, I'm fairly sure they did JIT optimization and compilation, kind of how HLE (High Level Emulation) works for N64, or how Java works. The thing is with Java you have a high level representation of the code when compiling, whereas with HLE and (likely) Transmeta, you have to first decompile the code a little. Not too much, but just enough to see some of the more high-level structure, so you can reason about the code better. JIT became very popular by the end of the 90s: Symantec demoed their JIT compiler for Java in 1996. It was new and revolutionary and shook things up thanks to the performance gains - Java did not have JIT before. It would stand to reason that Torvalds & Co went "hmm, we can do this for all programming languages by putting it on the CPU" and one silicon development cycle later we had the Crusoe in 2000. After writing this, I read some other comments that mentioned running Java directly. Well that goes in line with what I said here.
the picturebook 2nd version also had one - came out a few years earlier. I have the earlier 400Mhz intel one.
Those sweet days when Toshiba ruled were truly wonderful. It felt like anything could happen and innovative solutions would win the market.
Even today Intel struggles to compete at low watt processors. It's time to dump all x86 processors in favor of what Apple did with M series, ARM with hardware transcoding of x86.
I have a similar laptop with a Transmeta processor, but from Compaq. TC1000 model. Very cool.
Code optimization like this definitely lives on today in the form of Vulkan shader caches for Proton games in Steam for Linux, where the set of shaders improves as you play, giving higher fps than on Windows with the same hardware
I have been working on similar CPU design features since the 80s. This was the beginning of a great idea, and of course Intel, who sought to steal the idea, failed to make it grow right.
I like and want the machine you presented. I hope to be able to make something like it soon.
This is similar to what the Nx586 CPU from NexGen did in 1994.
It's interesting to note that Apple's Rosetta 2, transcoding AMD64 (aka x86-64) code to ARMv8 code with their secret proprietary sauce on their CPU (automatic memory fences and endianness management), is totally different: it can only cover user-space code (no kernel space, nor kernel-level drivers), and only because this code is properly segregated from data on 64-bit OS X and macOS, which is not the case on Windows XP and generally in modern Windows when running old 32-bit code.
That's why Rosetta 2 is very efficient, at around 85% of native ARMv8 performance for AMD64 code (except for the patented AVX extensions), but also why it's of no use for running full-fledged AMD64 Windows or Linux, with Docker using QEMU to provide awkward (sic) compatibility at the expense of performance.
never heard about this, very interesting
Nice to see the old Telecom logo on the battery.
Man, that Transmeta font/logo just *screams* 90s!
That's such an impressive CPU design.
Literally every Intel CPU since around the Pentium 2 is code morphing, where the x86 frontend is morphed into some internal RISC instruction set.
Great video
Ayy! That’s the old Telecom logo on the bottom of the laptop! Haven’t seen it in years.
Intel and Underhand tactics, name another more iconic duo
Nvidia and greed
@@thebyzocker was about to type exactly this
HP made some thin clients based on Crusoes and then their later Efficeon processors. They're pretty neat
very interesting to see a Telecom Australia logo on the bottom. I wonder what they were used for?
I would've loved to have one of these back in middle school and high school.
I did have an Acer Windows XP netbook, but it was 16:9, so I assume newer than this. So this was a little before my time owning my own computer.
I like the gamepad grip this has though; if that nub was comfortable to move the mouse with, I'd honestly want something like this today.
You might like the GPD Pocket line of computers!
Nice music choice, been a while
I wonder, is this similar to how Qualcomm emulate the X86 code with its Snapdragon?
It's crazy that Linus was part of it
I see you're a man of culture as well 🌌 @Wallpaper I've seen it in person...
Thanks for this nice flashback! I remember that I tried to invest something like 100-200 €$ in Transmeta when they were new, hoping they would once replace Intel as #1 and my investment would be worth thousands. But nothing is left of it.
Oh Fractint, how I miss thee... 👍😎
I REALLY REALLY WANT A MODERN VERSION OF THIS. Something that runs both windows and android so I can connect up to my phone service and do texts while still being able to do PC level productivity.
for texting you don't really need android tho, just add a WWAN modem to your PC and use that? Or are you talking about a weird messaging app that only exists for Android?
Interesting that is a Toshiba screen. Sony and Toshiba were direct competitors.
You have a piece of history there
Great video! Is the sticker on the back that says "the art of stealth" yours? where did you get it?
That sticker was given to me many years ago by a graffiti artist.
ages ago i remember this transmeta cpu news.. idk why this kind of cpu didn't have a good future...
9:44 My fellow Australians should recognise that logo...
Amazing. I’m not an expert but I think that’s what Apple is doing as well. They baked in some translation x86 into Apple Silicon on the new macs. I could be wrong.
Yes. Although unlike this earlier implementation, the core functionality (kernel, OS, drivers, etc) have been rewritten to run on arm64 (the apple silicon architecture), and only runs x86 code when an older app is used with just-in-time (JIT) compilation where the instructions are translated in real time to arm code that usually isn't as fast as native x86, but does get the job done for most applications.
OK, what is that song.
Was it some sort of Microsoft late 90's educational CD? Atlas? Encarta?
Windows XP installation theme, If you were lucky enough to have the Windows XP installer detect your sound card.
Wow, great video, I love such hidden gems
Very interesting
The form factor reminds me of a cyberdeck.
I like how Sony for decades thought it'd be okay to not include batteries inside their portable devices.
I think it was a sneaky way to make their devices look smaller and lighter, by showing advertisements and listing size and weight numbers without the battery attached
By the way, without the battery it looks great even today.
@@fatcat7msk7ru Well, imagine if Apple was to launch their new iPhone and it didn't come with an internal battery. Pretty sure that thing would really look nice, at least until you connect the external battery.
To be fair, they later had devices with internal gumstick batteries, and the external attachment was optional, only if you wanted to use AAs.
great video, period
It'd be interesting to see how fast actually good software written for RISC runs.
Transmeta is not RISC, it's VLIW
Yes but there is still an underlying processor to do the actual work which is probably RISC.
@@ky5666 the underlying processor is VLIW. x86 is translated to VLIW. That's how it works
This looks so similar to the GPD win max
Remember Digital's FX!32 that was introduced in 1996?
wow, real hardware engineering genius