GPU Devs: That shit is bad!
SPU Devs: That shit is good!
All Devs: What the shit is that processor!?
Haha, I actually think it's quite clever. The problem is the style of programming is so different, and it's difficult to get away from sectioning off pieces of code that can be added without affecting the whole system. So competent developers who could understand how this architecture worked needed to be hired, and they were few and far between. It definitely seemed like it made pushing games out a bit harder.
@@ZygalStudios every developer was used to some paradigm. Sony did not provide support for the change.
I sometimes watch stuff like this knowing that I won't understand a lot of it. But I still find it interesting. I'm amazed at the technology people invent.
Always cool to see and appreciate technology :)
Thanks for coming by!
I did SPU coding on PS3 for a AAA title. The real issue was that algorithms had to be set up differently than on the Xbox 360 with its symmetric unified memory architecture. On the SPU it was a dance of DMA-ing your data in while concurrently performing compute (see the sketch below). The SPU could directly address the RSX space, so you could directly patch the GPU, which was nice. So now let's talk about the PS2?
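For anyone curious what that "dance" looks like in practice, here is a minimal double-buffering sketch, assuming the Cell SDK's spu_mfcio.h MFC intrinsics; the chunk size, effective-address layout, and process() kernel are illustrative, not from any shipped title.

```c
#include <spu_mfcio.h>

#define CHUNK 16384  /* illustrative DMA transfer size: 16 KB per chunk */

/* Two local-store buffers; 128-byte alignment is the sweet spot for MFC DMA. */
static char buf[2][CHUNK] __attribute__((aligned(128)));

extern void process(char *data, unsigned n);  /* hypothetical compute kernel */

void stream(unsigned long long ea, unsigned long long n_chunks)
{
    unsigned cur = 0;

    /* Prime the pipeline: start pulling chunk 0 into buffer 0 on tag 0. */
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);

    for (unsigned long long i = 0; i < n_chunks; i++) {
        unsigned nxt = cur ^ 1;

        /* Kick off the DMA for the next chunk before touching this one... */
        if (i + 1 < n_chunks)
            mfc_get(buf[nxt], ea + (i + 1) * CHUNK, CHUNK, nxt, 0, 0);

        /* ...then block only on the tag guarding the buffer we need now. */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();

        process(buf[cur], CHUNK);  /* compute overlaps the in-flight DMA */
        cur = nxt;
    }
}
```

Writing results back out has the same shape, with mfc_put on its own tag.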
That's awesome!!!
And yeah, that seemed like it would be the biggest challenge. Most programmers don't have to worry about stuff like this. Parallelism and asynchronous tasking can definitely be a challenge at first. I didn't know that about the GPU though! That's cool. And yes, one of the next in the series will be the PS2!
Thanks for stopping by :)
I've been wanting to do programming on the Cell, but I don't know what to download to get started (things like SDKs). Could you point me in the right direction?
@@nime6631 Learn C and Python, then go download the SDKs, tools, and IDEs.
@@cammy399 Python wasn't used for the PS3.
Sony originally wanted the PS3 to ONLY have the CELL processor; the addition of a GPU was partly the reason for the delay from 2005 to 2006.
Imagine having to code a performant 3D engine only running on the CELL.
Absolutely psychotic.
But it would have been amazing!
@@Thelango99 PS3 would have a library of like 200 games then
@@reaktorleak89 No, it would not; it would be underpowered crap. A single SPE is ~32 GFLOPS, so 8 SPEs would be 256 GFLOPS, which a single AMD Zen 2 core with 256-bit AVX can do. People with zero knowledge about computing over-hyped it for some reason.
Looking back, what Sony should have done is delay the PS3 to Nov 2007 and double the main RAM to 512MB. The RSX should have been changed to a design derived from the GeForce 8000 series so they could get hardware with unified shaders, and they should have also doubled the VRAM to 512MB. The extra year would have given the engineers more time to create more libraries and documentation for developers. But it later came out that Sony made the PS3 hard to program on purpose, so devs had to take time to make really good-looking games. The same thing happened with the PS2, but back then Sony had no real competition, so everyone adopted the console. The Xbox 360 really put a dent in Sony's plan.
Bad idea, because there is no way Sony could give the Xbox 360 two years of advantage. The PS3 would have been dead if it came out in Nov 2007.
The PS3 was supposed to come out in 2005, but it was delayed for one whole year.
Not to mention it was launched in November 2006 only in North America and Japan, while it was launched in PAL regions in March 2007.
Plus, 256MB of VRAM did make sense when high-end GPUs in 2005 had 256MB to 512MB of VRAM.
@@Charles8777-od4kj The PS3 couldn't have lobbies with chat because it only had 256MB of main RAM. Releasing it in Nov 2006 but doubling the main RAM would have helped.
@@tHeWasTeDYouTh If anything, I don't think devs said the biggest technical drawback of the PS3/Xbox 360 was the RAM, when the PS2 had only 36MB of RAM in 2000.
Just think about the fact that the Xbox 360 would have had 256MB if Gears of War didn't exist.
Oh yeah, sure, sure, they should have waited a bit more and based the CPU on the Ryzen 9 5900X and the GPU on the RTX 3090.
@@blar2112 RTX 4090 was the only way to go for the PS3
This is great, better than most content on this I see, and I'm glad you detailed a number of things. But there were a few bits I was hoping you'd highlight more.
The design philosophy of the SPE, and the corporate political dynamics that fueled it, is fascinating to say the least. It was a partnership between IBM/Toshiba/Sony. The former two were vying for it to be a next-gen number-crunching powerhouse to use in all sorts of applications and fields. Why on earth Sony bought into that when they needed a general-purpose processor, I have no clue. I would love to have been a fly on the wall as management relayed their respective engineers' dreams across corporate lines.
You have to understand, the processor did not turn out as they'd hoped. Sony initially intended not to include a GPU at all! It was going to be the magic of SPEs carrying Mr. Frodo up the mountain using the power of friendship or something. SIMD processing works great for audio/video streaming, so game workloads should make sense too, right?
It was very wrong. It frankly should have been obvious it was wrong. And by the time they admitted they were wrong, they were too deep into R&D to start over. So IBM and Toshiba kind of... abandoned it. They cut their losses and handed off an unfinished product that Sony, despite what their floating-baby-powered PR would lead you to believe, did not want. So a GPU that was less powerful than the 360's was glued on, which devs leaned on *heavily*. Ever since, the popular rumor has been "developers just didn't know how to unlock the magic of the esoteric PlayStation".
Those kinds of takes made the nuance behind it all very difficult to discuss. Just because you're working with a framebuffer doesn't mean SIMD processing is a saving grace. You're not spending your time fiddling with hue; the opportunities for applying the same operation across lots of data aren't as common as you may first assume. You're detecting collisions, you're mapping textures, you're moving lights. These scenarios are fantastic for thread-level parallelism, but terrible for data-level parallelism, and SIMD is all about the latter (a short sketch of the contrast follows this comment). That distinction can't be emphasized enough to the uninitiated.
Not to mention, the last minute decision to slap a GPU on the thing left a few bugs. For example, MSAA had to be performed at certain arbitrary resolutions. If your game didn't run at native 1080p (most didn't), and your game was 16:9, you had to waste memory upscaling before running built-in MSAA. And if you had multiple render targets, it wouldn't work period! This is why MLAA was so popular on the machine, which unfortunately meant you got a bunch of weird edge blurring.
I should rein this back in for a moment. You touched upon it, but I wish the impact was discussed in more detail. The whole mindset behind Cell was "RISC, but more so". It was taking the idea to its logical extreme. Branch prediction? Out-of-order execution? Speculation? Nope, that's now the burden of software engineers and compiler authors! All in the name of raw silicon. It was as stripped-down an ISA as could be. If you weren't careful and a cache miss occurred, or if you predicted a conditional wrong, you could burn a mind-boggling number of cycles before you realized you'd got garbage.
The developers should just write safer code then, right? Yeah, this concept would've been fine and dandy if Sony was known for providing well-supported, well-documented, and well-built tooling and development software.
They were not known for this.
Microsoft, for all its flaws (and boy are there many), did not have such a reputation. It was, in contrast, a revelation. But when developing for Sony's system, lots of folks would just batch work as much as they could and cross their fingers. And this was because, get this, they had no SPU debugger! One was only eventually handed out because a first-party dev got fed up with such nonsense and made one themselves.
Sony's hubris led to the idea that developers were putting up with this for some reason beyond market dominance. Hint: they were only putting up with it due to market dominance, which the PS3 did not have. The dev backlash was strong, and unfortunately the perception from consumers was "these devs sound lazy or incompetent if they can't unleash the system's power".
That isn't to say Microsoft carved out market share on good dev relations alone. They were pushing the envelope in hardware design too; unifying vertex and pixel shaders was fantastic foresight, and working with eDRAM was like having an L4 cache, which made people very happy.
The funny part is... Sony/IBM was arguably right, but perhaps too ahead of its time. GPGPU has been making huge headway lately, now that we have much better cache coherency and well-supported middleware and tooling. As well, memory locality (arguably the thing Cell most got right) has become a bigger deal for ISAs in general, to the point where understanding NUMA topology is critical for any enterprise performance work.
Anyway, my apologies, I'm not so much criticizing your video, I really enjoyed it! But if you (and your audience) wanted to hear a bit more of the story behind the data, well, here you go. :)
And for the record: I own a PS3, and prefer it to the 360!
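The sketch promised above: a minimal illustration of the data-level-parallel shape SIMD wants, assuming the SPU's spu_intrinsics.h vector types; scale_bias and its parameters are made up for illustration.

```c
#include <spu_intrinsics.h>

/* Data-level parallelism: one operation swept across a big uniform array.
 * Each vector float carries 4 lanes, so every iteration does 4 madds at once. */
void scale_bias(vector float *v, int n_vecs, float scale, float bias)
{
    vector float vs = spu_splats(scale);  /* replicate scalar into all 4 lanes */
    vector float vb = spu_splats(bias);
    for (int i = 0; i < n_vecs; i++)
        v[i] = spu_madd(v[i], vs, vb);    /* v[i] = v[i] * scale + bias */
}
```

Gameplay code is the opposite shape: each entity wants its own branchy decision, so the four lanes diverge and the vector unit idles. That's the thread-level versus data-level distinction in one picture.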
It took me quite a while to get through this, but wow! Thank you for taking the time to write this out!
Very fascinating and I didn't take this as criticism at all! This is color for the video.
You are right about everything except one thing: the eDRAM in the X360 was small, around 10 MB. That was good at the start, but through the generation it became limiting; demand was 2-3 times higher, and without heavy tiling (degrading assets) it was unusable.
Keep going with this type of content, very interesting.
Thanks man! I'm planning on it. Appreciate the support!
@@ZygalStudios So the reason the PS3 was difficult to develop for was just bad memory management for the processors, as well as bad IPC?
For IBM and Toshiba to get involved with this was a big deal... Toshiba putting the Cell processor in a TV was the last thing I saw about the use of the Cell.
Just watch a GameCube teardown... an IBM chip is present in the system.
Cell Processor was underused and underrated.
Nope, it was overrated. Whoever designed it pushed all the complexity onto developers instead of handling it in the architecture. Even a basic CE graduate can design a CPU faster than anything, but it'll be hell to write code for...
@@niks660097 Oh yeah? ua-cam.com/video/44HpssocH4c/v-deo.html
@@niks660097 And yet the PS3 delivered some of the best-looking games, lmao.
@@Steelix371 Both the Xbox 360 and PS3 delivered good-looking games late in the gen. Unless you are a Sony fanboy, you can't say PS3 games were better looking than Xbox, especially when probably the best-looking game, GTA 5, ran like doo-doo on the PS3.
Needs to come back
You should keep doing stuff like this, as I'd love to see a video on the Sega Saturn.
That's the next video I am working on :) thank you!!
"When SPUs are properly utilised the CELL can do many incredible things"
"Gran Turismo was delayed 3-4 years."
Yeah about that, video game developers don't have that kind of time.
It's why consoles like the Saturn also died. Complicated but powerful hardware doesn't mean better games; it only means longer dev times.
It's an interesting machine to use but not in this field.
A somewhat business-minded point: Sony should
1) have developed some libraries or even languages to make it easier to use.
2) Sony and IBM should have sold unlocked versions for scientific use, even at a somewhat higher price.
Yeah, it almost looks like Sony should have built a full game engine, and possibly delayed the PS3 to 2007 to improve its thermals and get the GPU clocked higher. Maybe added more RAM? 512MB at the time was standard, but 1GB could have really future-proofed it. Blu-ray drives would have dropped significantly in cost too.
@@reaktorleak89 Well, yes and no. A delay might have killed the business, but having a PS3+ that is backwards compatible with the PS3 would have been good.
@@reaktorleak89 That engine could have been open-sourced, or at least run on OpenVG/VL, and thus be attractive to makers.
Well, Sony did develop libraries and a game engine specifically for the PS3, called PhyreEngine, but that was halfway through the generation.
@@lukasl3440 Which is kind of late. Also, the broader ecosystem was at stake.
Easily the best explanation I've seen. About time somebody with hardware development knowledge breaks down various consoles.
Can you make a video on consoles that were supposedly easy to develop for, like the Dreamcast, GBA, GameCube, or PS1?
Yes! Absolutely
Just found your website and love it. Would like to see the pros and cons of programming on the Sega Genesis someday if you would. Thanks a bunch, I love these kinds of technical vids.
1:44 Imagine YandereDev porting his game to the PS3.
Oh gosh no…
@@yancgc5098 Ark: Survival Evolved levels of bugs
Human lifespan is too short for this to be even imaginable.
My biggest question: is it possible to use the Emotion Engine as a coprocessor to the Cell?
Two weeks ago I watched this video and I was like, "How can a console be hard to develop for?" This week I'm trying to make an SPRX mod menu for games with Visual Studio and the PS3 SDK. I understand what you were trying to tell us now that I've seen the documents in the SDK. Anyway, thanks for the video.
The Nvidia RSX was obsolete; they should have gone for a more powerful GPU at the time.
Not only was it obsolete, its power was reduced for cost-effectiveness.
Well, the PS3 was $599 but cost $840 to manufacture, mostly because of the Blu-ray drive.
Great series, thank you for making them!
Can you do an explanation for the xbox 360, and maybe the wii?
There isn't enough mention of the other two consoles of that gen. It would be nice to know how their architecture varied from their competition.
Cool video. I have found a lot of data that mentions the SPUs were designed for geometry workloads. Ninja Gaiden Sigma even runs those there, and the whole "not IEEE 754 compatible" FPU was due to this.
Sony had a library to do this, so if you assume the SPUs would always be abstracted, it made sense.
They are also fine for processing video or AI loads. But Sony needed a pixel pusher and ended up with duplicate vertex hardware on the GPU.
It would also have been a very underpowered system, and it was already very late due to Cell delays.
As Gabe said, this and the 360 were released at a time when programmers were barely doing dual-threaded code on PC, much less something as complex as this.
I don't think the GPU is underpowered. Real-world results show a lot of PS3 games running at 1080p, and even the dashboard, where the 360 was stuck with a 720p dash.
It had a lot of rasterization hardware, but it was let down by the interconnect to the CPU and the stiff memory partitioning that wasn't a problem on the 360.
Sony didn't design the Cell processor at all; it was a collaboration between IBM and Toshiba. Amazing video though, I learned a lot!
I love how the PS3 hardware can be summed up as: "This is really good...but I have no idea how the hell you work this thing so I hate it!"
I mean honestly, I would have loved learning this hardware! It would have been a challenge, but I think programmers sometimes get caught up in code and only code. Code is a tool to solve a problem, the elegance is in the design, code holds no intrinsic value. The programmer's brain is the one that's responsible for it. So conventional methods of programming did not work well with this architecture and programmers would get frustrated, when in reality it is their job to learn the architecture. Mike Acton put it best with the PS3.
@@ZygalStudios, I thought there was a Homebrew scene for PS3?
Am I mistaken?
If you made homebrew applications then that'd be awesome I think.
But yeah, that is basically what I said. I don't necessarily blame the hardware (although it is concerningly complicated, to the point where emulating it is just a nightmare, even by PS2 standards); I was more poking fun at how nobody wanted to touch it because of how much was going on.
@@TheGreatBackUpVIDEOS Yes there is! And yep, exactly. You provided some good satire :)
@@ZygalStudios I think there are differences between passion programming and professional programming. When working professionally, programmers are pushed to be quick so they can get products out. Training and learning certainly help, but they put a burden on company resources (cost of training, lost worker hours, etc.). This is why most developers stick to tried-and-true methods and sometimes look stiff to homebrew and hobby programmers. If you look at modern platforms (Android, iOS, UE, Unity, etc.), they always include ease of development as a main selling point, because in this era, if you're late to the market you'll get left out.
PS: even Sony themselves admitted this and chose an x86_64 + GPU combo for the PS4 and PS5.
You take years to master it, but after the system's lifecycle is over you're stuck with a useless skill @@ZygalStudios
I don't know if you said this, but the PS3's CPU is more powerful than the Xbox One and PS4 CPUs. Talking raw processing power only, nothing else.
Aren't there now some kind of AI programming tools, like GitHub Copilot, that could perfectly optimise code for the SPEs? I would love to see what the PS3 is capable of in 2021.
Sony wasted too much time on the Cell processor. The PS3 was thrown together at the last minute with an old GPU. Even with the SPUs, it was outclassed by the 360 in every way.
Lovely video. Subscribed.
Killzone 2 used all the SPUs, but only at 60%. 60%!! Amazing!! 😮
Doing conditional things without using if statements is not that hard: you simply compute two sets of data and keep the correct set.
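That's the classic select idiom. Here's a minimal sketch of it in plain C, under the assumption that a taken branch would be expensive; on the SPU the same shape maps onto the compare-and-select intrinsics (spu_cmpgt/spu_sel), and clamp_above is a made-up example:

```c
#include <stdint.h>

/* Compute both candidates, then keep one with a mask instead of branching;
 * an in-order core with no branch predictor never stalls on this. */
static inline int32_t clamp_above(int32_t a, int32_t max)
{
    int32_t m = -(int32_t)(a > max);  /* all ones if a > max, else all zeros */
    return (max & m) | (a & ~m);      /* selects max or a with no branch */
}
```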
Will you ever consider doing one of these great episodes for the Neo Geo AES console?
I'm assuming we now have far superior hardware that beats the Cell processor? Or maybe, if we mixed the current hardware we have with the Cell architecture, would we have some wicked system?
thanks for the video
very interesting
How come people don't make a modern version of the Cell?
The Cell was a dead-end, failed architecture.
God-awful architecture, tbh. Up there with NetBurst.
Good console with great games, though! (I heard LittleBigPlanet is nice.) But dear GOD, Sony really dropped the ball here.
Also, a lot of video games don't benefit too greatly from high-throughput SIMD performance (except for stuff like graphics and really complex AI).
All your netcode for multiplayer? Yeah, that's not benefiting from SIMD. Neither are any GUI subroutines, save-file writing, input management, general game logic, etc.
The PS3 is a product and victim of the transition from single-core to multicore computing. The industry didn't go with asymmetric multicore designs; we went with symmetric multiprocessing.
It's just... better to have multiple fully-fledged cores, with each being equal. Much more flexible and easier to code for.
If I want to study this (not specifically the Cell processor, but computer processors in general), what would be the course for it in college? I used to be fascinated by computer technology, but I was never mentally focused enough to learn what they're made of, how they're made, and how they work differently from other conventional PC processors.
Anyway, you sound like nerdy videogamedunkey, no offense. Awesomely detailed video by the way even though I have no knowledge about computer science.
🤣🤣🤣😅😅 thank you for pointing that out. No offense taken, I take that as a compliment.
Processors in general? Typically I would start with a digital logic course, followed by an intro to microprocessors course.
ABET unis will have both!
@@ZygalStudios Thank you! 😀 Subscribed!
@@jonathanm.ollerjr.6486 Do you think I should make content on computer/electrical engineering as a career? What to look into and what's available based on university studies?
Yes. That'll be awesome too!
Probably something like computer architecture or operating systems.
I did see a slide once saying that time to first pixel in game development was 6 months on the PS2 and 2-3 years on the PS3.
5:35 Jesus Christ, all I hear is "wah wah wah, I have to learn stuff".
5:35 OH BOOOOY, WHOOOO, WINDOWS BOI -- the hardware gave you a challenge to learn and expand your horizons... but no... you wanna sell what you've been selling for 30 years, made off the back of the id Tech/Quake engine.
🤣
@@ZygalStudios btw.. subbed.. stellar content... keep going
awesome
YouTube removed my like on this video, so I've re-added it :)
WHY U NO SIMD
Alright man, I think I understand that, but again: WTF WAS THAT? I AM NOT TOUCHING THAT SHIT. I MEAN, WHO THE FUCK THOUGHT TO MAKE THAT MONSTROSITY?
Yea fuck that haha
I don't understand anything :D
Now you have the current generation, which is just a garbage lineup of cross-platform mediocrity, because there are no new hardware innovations.