Reading assembly, especially small chunks, is easy. Understanding what is happening in larger chunks is more difficult. Actually writing anything useful and/or interesting with assembly is very hard and takes some creativity. The idea that assembly is "complex" is a misunderstanding of what is difficult. The language itself is not complex (with a few exceptions like understanding memory ordering guarantees), but the solutions used can sometimes be hard to understand because of the creativity involved.
I am a physician and have been coding as a hobby for quite a while; more recently I have been working with graphics APIs, WebGL/WebGPU. I will honestly say that understanding how a proper rendering pipeline works, how shaders work, how to transform data in 3D space, and how to communicate between CPU and GPU has probably been more enlightening to my personal knowledge base than all of medical school. It has even better solidified my understanding of our neurophysiology: for instance, the way our retinas process data and the way those signals propagate throughout the brain to get the right image out of the data is similar to how you transform multiple matrices to properly project an object into 3D space. Even just internalizing the concept of what exactly a "transform" is, and thinking about things so regularly in terms of matrices, has been quite the trip for me. I honestly think that everyone would benefit from understanding the process it takes to get a triangle to render on the screen. 3D rendering is exhilarating; nothing had ever gotten me to open my old physics textbooks outside of academic need, but thinking about new rendering techniques makes physics exciting. I hated linear algebra in undergrad, and now I'm starting to think linear algebra is the most important math to properly understand to take any field forward. And although I am not an expert in coding, I can only imagine that if coders understood the sheer amount of data that can be rendered in milliseconds and re-rendered over and over again to produce 3D scenes, it would help them fundamentally transform how they think about approaching their own domain of knowledge.
The story of my Python framework for pipelines… I am always thinking about how many iterations some types of computations will take and where to offload them to get the acceleration I know the hardware can provide. Ironically, I have not had to incorporate Cython yet (3 years later) because understanding the code-hardware interaction has made my program thousands of times faster than anything that ran in the decade before I showed up in my production environment.
I'm not a developer. I'm not aspiring to be one and I certainly don't want to be one... but here's the thing. I work with an ERP software that is slow af, for which the company I work for pays about 100k/year for support. I had to write my own program that interacts with the same database in order to be more productive. And before you judge me, I did create a ticket with the software company, but their reply was: "That is a limitation of the SQL Server engine." That was a bunch of malarkey. In just half a day I had set up a little program that worked 10x faster than their shitty software. It just had to update, delete or create some rows in the database.
I imagine you used a basic library like C#'s SqlDataReader and they used an ORM like Entity Framework. I've written a lexer for SQL and a partial ODBC driver. Even SQL is actually pretty complicated for what it does, and if you don't need its core features like fault tolerance, your application can be made additional orders of magnitude faster than one going through the wasteful ORMs. Let's say you're querying a Customer "row" with e.g. 4kB of data. That's less than the L1D cache on all modern server hardware (typically 32kiB+ per core) and most client hardware, so you can process it very quickly (sometimes even in a few cycles depending on the work), and fetching it cold from memory is only going to be ~300 cycles, which is roughly ~0.00007ms, compared to SQL, which can easily take a few ms to finish a simple 'SELECT * FROM Customers WHERE id = @CustomerId'.
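To put rough numbers on that (assuming a ~4 GHz core and a ~2 ms query round trip, purely as ballpark figures):

    300 cycles / 4e9 cycles per second ≈ 75 ns ≈ 0.000075 ms
    2 ms / 75 ns ≈ 27,000x

So even a "fast" SQL round trip is tens of thousands of times slower than touching data that is already warm in memory.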
@@Muskar2 I used C++ with SQLAPI++. You're probably right and they might be using an ORM. Just to put things in perspective: their software updates 1 row with 3 fields in about 10 seconds. My solution is almost instant. And there aren't any additional queries being run so it must be the code (I ran a trace using MSSQL Server Profiler).
I asked a dev senior to me on my team once about why we don't focus more on performance and was told "Oh we don't need to worry about performance because we can always throw more resources at it."
@@ChamplooMusashi no code is the best code, just not for your pay check. Zero bugs, less complex, easy to maintain, better for security, super performant.
"Honey, why are you pouring the milk out into the sink?" "Because it's cheap" If it is running on an end users machine it's not even your compute you are throwing away.
My first Visual Studio was VS6, back in 1998, and I thought it was slow. If you try Squeak Smalltalk, it loads instantly, and Smalltalk is at the top of dynamic languages next to Common Lisp; however, Common Lisp is a lot faster than Smalltalk.
This was a super interesting episode! I think that Casey is right in a lot of ways that the root of slow software is a lack of education / cultural issues. Something he didn't mention though, that falls under the category of culture I think, is the over-emphasis on avoiding "premature optimization" and the obsession with TDD. It's regularly preached that we should hack together the first thing that comes to mind to get our feature delivered and only consider improvement when stakeholders complain about something.
Premature optimization is a real issue. The problem is that they turn "don't prematurely optimize" into "never optimize". You get told "the feature works? Ok, move onto the next one" before you can go back and fix the performance issues. You're supposed to hack together a prototype to prove the concept and then fix it so it's actually made properly. But instead, prototypes get pushed as finished products for the sake of maximizing short-term profits.
For my computer engineering degree, I had a semester-long lab course where we each built a thermostatically controlled thermoelectric heater/cooler with an LCD screen and keypad using a Freescale HC11 processor, and we had to do everything in assembly language. I do think it's a valuable experience.
Actually, the reason there is so much slow software is exactly that: being taught it isn't important. There is a single phrase that exemplifies this attitude: "premature optimization." You can't go from a bubble sort to a quick sort by simply improving it. It has to be written that way from the start. The performance has to be considered at the start, and the aspects of what makes code fast and what is important have to be considered from the get-go.
33:30 out of curiosity what search terms do I need to use in order to find this demonstration? I'd 100% believe it, but I want to find the actual video because I feel like that's a pretty visceral example of the concept. (it's easy to say "no, seriously, computers are so fast most things should be happening instantly" but actually showing someone the difference tends to make it hit home just how bad a lot of modern code has gotten)
The comparison Casey makes about game designers is very similar to what you can make about programmers. You can say that the average programmer just repeats the same things over and over, but you'd expect any programmer with a bit of experience to be able to come up with the architecture of the program, not just copy-paste from Stack Overflow. With game design it's the same. Any experienced game designer should be able to come up with original game ideas, sometimes very experimental, sometimes more grounded in what players already know but with an original twist. But then, just like in programming, you have a lot of specialization:
- You have game designers who can write gameplay code to some degree and focus more on prototyping, which is especially important in action games where you need to tweak controls a lot so the experience feels right
- You have game designers who can't code but are really good at coming up with complex systems with lots of stats and numerical interactions. This will be your Excel guy tweaking formulas
- You have people who are not so much into mechanics or coding but know a lot about narrative. This will be the guy writing quests or branching stories
Those are some examples and yeah, just like in programming, if you work on a multi-million project, unless you're one of the top guys then your contributions are very small and heavily structured beforehand.
one of the things I noticed a few years ago (but maybe is better now) is devs not caring about performance because 'computers are fast' and 'memory is free' like no, these things all add up to make modern computers wayy slower than they could be when everyone has this mindset. It's even making its way into game dev. Also apps wanting to be flashy rather than performant and easy to use
The Spotify app is really bad; on my 165Hz monitor you can see it drop frames when scrolling. Something about whatever React UI or Chromium they're using is absolutely horrible, and this is on a Ryzen 7 3700X and RX 6800. Finally, after upgrading to a Ryzen 9 9950X, a $500 CPU, it doesn't drop frames when scrolling (mostly). I bought the CPU for other things of course, but I can't imagine how bad it is on the average laptop. And yes, it is running with hardware acceleration and everything; I tried EVERYTHING, enabling Vulkan and messing with a bunch of flags, and it's just that slow. Seemingly, the web app has the same problem as well, but in Firefox it seems to be a lot better, which I find to be an odd outlier as Chromium is usually faster. Should I care? Likely not, but it's mildly infuriating since I can easily tell.
If I understood the point you were trying to make around the 50 minute mark, it's that having these 'all-singing, all-dancing' frameworks/languages that are very high level means that you end up using whatever it is they've given you, and that's often not really what best solves your problem and *only* your problem. This is where a huge amount of performance problems come from, because you end up using two or more complicated functions to do something, because you needed some of the behaviour from each - say, finding the largest value while also removing zeros from a container of unsigned ints. Many programmers would write:

    sort(container)                    # sort ascending; the zeros end up at the front
    slice = container.find_last(0)     # index of the last 0
    container = container[slice + 1:]  # copies the container, dropping the zeros and shrinking it accordingly
    largest_value = container[-1]      # take the last element - we could have done this immediately after the sort

But if you're writing the code yourself, this is unnecessary. Create a new container of the same length, iterate over your initial container and copy over any non-zero number, while also keeping track of the largest number you've seen. (You use more memory, but it's not complicated to implement at all. You can also just overwrite the previous values in your own container at some offset, incremented each time you encounter a zero, and do it in O(n) operations and O(1) memory, or a number of other solutions, but I picked a very simple one for this example.)

But this is not encouraged in higher-level languages. You're meant to use the built-in stuff rather than roll your own. The problem is that, as your building blocks get more complicated, the chance that someone has built a construct that is *exactly* what you need goes down enormously. You can chain them together, as described above, but this is where slow code comes from. In simpler languages, like ASM or C, you pretty much have to roll it yourself, so you'll write something that is precisely what you need - and in terms of productivity, I can write the above for loop in C faster than I can look up which functions to call on a Python list in the example code above (how do I find the last 0, anyone?).

FWIW, Python is probably one of the biggest culprits of this, because simple for loops in Python are so slow that you're massively encouraged to use the built-in 'special' functions that will execute C. These combinations of functions are doing far more work than you actually need, but using the interpreted for loop will be even slower, so you're basically screwed either way.
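For comparison, here is a minimal sketch of the single-pass, in-place O(n)/O(1) variant mentioned above, in C (my own example, nothing standard):

    #include <stddef.h>

    /* Compacts the non-zero values to the front of a[], returns the new count,
       and writes the largest value seen to *largest (0 if every element was zero). */
    static size_t drop_zeros_and_find_max(unsigned *a, size_t n, unsigned *largest)
    {
        size_t out = 0;
        unsigned max = 0;
        for (size_t i = 0; i < n; i++) {
            if (a[i] != 0) {
                a[out++] = a[i];            /* overwrite, shifted back by the zeros skipped so far */
                if (a[i] > max) max = a[i];
            }
        }
        *largest = max;
        return out;
    }

One loop, no sort, no extra allocation - which is the whole point: it does exactly what the problem asked for and nothing else.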
In my experience, those data science problems are actually where average devs perform decently, because they often care about Big O notation. It's the wasteful abstraction boilerplate and bloated APIs that hide the vast overgeneralizations and busy work, take finicky inputs, are often poorly documented, etc. E.g. OOP objects that are just/mostly functions/methods are a typical source of boilerplate in my experience. I'd strongly recommend trying to minimize lines of code in your codebase (i.e. don't do SRP and short functions). In particular, take something with several layers of wrapper functions and see if it can be inlined into its callers (and keep repeating that until the code's logic prevents you from doing so). You'll quickly find that the code was doing the same preparation work multiple times (perhaps even overlapping database/network queries), and that code understanding drastically increases. But immutable "pure" functions that are reused can stay separate and small if they need to be (i.e. often those dubbed 'utility functions').
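As a made-up illustration of the kind of duplication that inlining can reveal (all names and types here are hypothetical; fetch_customer just stands in for any expensive call):

    #include <stdio.h>
    #include <stdbool.h>

    typedef struct { double balance; int tier; } Customer;
    enum { TIER_PREMIUM = 2 };

    /* Stand-in for an expensive call (think: database or network round trip). */
    static Customer fetch_customer(int customer_id)
    {
        printf("fetching customer %d...\n", customer_id);  /* imagine a few ms here */
        return (Customer){ .balance = 42.0, .tier = TIER_PREMIUM };
    }

    /* Before: thin wrappers, each quietly repeating the expensive fetch. */
    static double account_balance(int id) { return fetch_customer(id).balance; }
    static bool   is_premium(int id)      { return fetch_customer(id).tier == TIER_PREMIUM; }

    static void render_dashboard_before(int id)
    {
        printf("balance: %.2f\n", account_balance(id));  /* fetch #1 */
        printf("premium: %d\n",   is_premium(id));       /* fetch #2, same data */
    }

    /* After inlining into the caller, the duplicate work is obvious and trivial to remove. */
    static void render_dashboard_after(int id)
    {
        Customer c = fetch_customer(id);                 /* one fetch */
        printf("balance: %.2f\n", c.balance);
        printf("premium: %d\n",   c.tier == TIER_PREMIUM);
    }

    int main(void)
    {
        render_dashboard_before(1);
        render_dashboard_after(1);
        return 0;
    }

The wrappers aren't wrong individually; it's only once they're inlined into the caller that the repeated work becomes visible.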
1:05:49 It's worth noting that, in Windows at least, most of the drivers are in user space. Only drivers which require (or want) performance live in kernel space; at this point, video drivers are the only big one.
I SO MUCH ENJOYED this podcast! What a chad Casey is. And really insightful questions. Huuge inspiration and lots of food for thought. Keep up the good work Backend Banter!
Very good point about the current state of affairs (hw, firmware, os bloat) precluding any innovation. Truly new General Purpose OSes are extremely unlikely to come to be, because of the baggage of useless stuff that needs to be reimplemented in order for it to be even minimally viable.
The altruistic part is so important on a much larger scale as well, and regarding waste in general. We show no care because there are so many resources, right up until we've squandered them all and it's time to think about economy, because otherwise we're dead in the water. In software, people don't care because hardware is cheaper than people. It would take conscious effort to start making things efficient because "it's the right thing to do", especially since it'll probably be more expensive in the short term. The market rewards immediate cost optimization though, so here we are.
He keeps mentioning how much easier it is to understand assembly and gives examples like, "This instruction adds two numbers together". Yeah, he's right, and that is simple, but he doesn't address the real difficulty of reading assembly, which is identifying the high-level objective of the assembly code you're looking at. You could look at a function that implements the Fibonacci sequence in assembly and be able to pretty quickly identify what each instruction is doing, but how long would it take before you figure out that the function is computing the Fibonacci sequence? Compare that to looking at an implementation in JavaScript or some other high-level language, where most people familiar with Fibonacci would probably identify it immediately.

I'm sure someone well versed in assembly could probably identify it almost immediately as well, but that's because as they quickly gloss over each line of assembly they are maintaining the mental model or structure of the block of code, and when they reach the last line they understand what it's doing as a whole. That part takes time and practice in the language. You also need to know how programs are structured to better understand what's going on. When you see a register being added to, is it part of the logic of the function, or is it adjusting the stack pointer to create space for local variables?

However, I think his point does stand. Even if all you can do is interpret assembly instructions line by line, it's still very valuable when you need to debug a crash or something: you can quickly see that some memory location was 0, indicating a bug in the code, or look at the generated assembly output of some calculations to see that it wasn't vectorized, without having to interpret the entire block as a whole. Great episode, will def. be checking out his substack to help stay more informed myself :)
I don't think he's advocating trying to understand a program just by looking at the assembly. I think the scenario Casey is imagining is where you have the high-level code and you ask the question "what is this actually doing?" for some reason (like performance), so then you read the assembly to check. Like being able to select the "Go To Disassembly" option in Visual Studio to see what the C compiler actually output and understand it.
This is what assembly analysis tools like godbolt are for. They tell you which source line corresponds to which asm, so you don't have to figure any of that out yourself as a human. Some can even put comments in the asm to explain some of the more complex bits of it. The point of learning to read the asm is that, once it's compiled, you can see why it's so much slower than it could be, by seeing it do needless operations that can be compiled out with some tweaks to the source. And with these tools, you can make source tweaks and see how they impact the asm to make the speed better or worse, etc.
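For example, even something this small is enough to paste into Compiler Explorer (godbolt.org) and compare optimization levels (a toy function of my own; the exact output will vary by compiler, flags and target):

    /* At -O0 you'll typically see every variable spilled to and reloaded from the stack
       each iteration; at -O2/-O3 a tight register loop, and with vectorization enabled,
       SIMD instructions working on several elements at once. */
    int sum_squares(const int *a, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += a[i] * a[i];
        }
        return total;
    }

Making a small change to the source and watching how the asm responds is a very quick feedback loop for building exactly the intuition being discussed here.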
57:30 god... I'm deep in that struggle right now. I'm in a deep refactoring session because another dev just piled sh*t on top of some other sh*t. I've just deleted an entire class that was useless - felt great. But also, do some cleaning from time to time....
You can tell Casey has been ready to vomit everything on his mind for a while now lol. I used to watch his game dev on twitch a while back and he's the first person that sort of showed me how in depth coding can get (this was before I was a programmer).
Thank you very much for these low level talks! I love them so much, it's hard to express my feelings with words! Thank you very much again, awesome channel, amazing podcasts!
I'm a Substack subscriber of Casey's and I'm really looking forward to watching the pain he mentions about moving off of Substack. His pain is my joy, but also I get to learn.
That classification of Just In Time learning and where it does and doesn’t apply is so clutch. I’ve been doing this as a professional for almost 20 years and he absolutely nailed something that’s been bugging me for years - what should/shouldn’t I be spending my time learning? What’s okay to learn JIT style and what should be directed learning?
For the choice of assembly language I'd recommend RISC-V, for the simple reason of lack of banging your head against hysterical raisins. Also vector code is much more readable than SIMD but drives home the same important points. Where things get complicated on that level is understanding how modern CPUs execute code, superscalar, pipelining, speculation, all that goodness. But for a basic understanding all that's necessary to know is "modern CPUs are complicated behind the veneer of pretending to be a microcontroller, that's why you're seeing compilers produce funny code sometimes", writing code that out-performs compiler output is a specialisation of its own nowadays.
I'm in a company that uses 10 different languages or frameworks, along with SQLite, SSMS, Azure, etc., a little bit of Docker here and there, and now we're starting to use Dart to port over our apps. So there hasn't been time to really delve into the nitty-gritty of each language, but it is definitely a nice feeling being able to code and understand what each one does - maybe not in the most proficient way, but enjoyable.
The "Day in the life of" video genre points to a trend that plenty of software developers are "just good enough to pass an interview and schmooze their superiors, but not good enough to consistently deliver good work". Unfortunately, a whole lot of these kinds of developers were laid off in mass by these FAANG companies at the cost of "accidentally" laying off so many good developers as a way of getting rid of their seniority status AND their really high salary to adjust to the reality of a non-zero interest rate environment that created this mess. As a result, the barrier of entry is higher simply to have better developers than ever before to clean up code that is now messier than and yet AI will be the big scapegoat for the decades worth of bad decisions by higher ups and lack of care for writing software with hardware constraints. Nope, I'm not blaming programming languages for this stuff as bad logic can be written in even the best of languages like Jai. There's a huge problem with requirement analysis that makes it hard for anyone to properly solve problems without looking up what a rando said on Stackoverflow or the latestLLM hotness.
Well, also consider that the very nature of the industry these devs exist in encourages this type of behavior. Being a legitimately good programmer doesn't get you much unless you're a savant; then of course you do whatever you want. For the rest, it's deliverables, not a collection of improvements on an old code base that in aggregate make the app run 10x as fast, silky smooth, respond exactly as expected, and encounter no errors. That's not a shiny new feature, that's just "polish". So why spend time honing your craft when your work will be thrown out or shipped off to a new developer at the whim of the 5th new project manager in as many years?
Companies don't measure software projects on how good their software is, only on how many features they can cram into the marketing page. The problem isn't the programmers per se, the company culture forces them to work a certain way.
@@rumble1925 At that point, they may as well just use an AI to save potential software developers from their torment. Then again, maybe the corpos get off on the developers suffering from low pay and chronic stress since that's apparently worth more than stress saved from berating the developers 24/7.
I wish interviews were easier than the jobs. I just went through a process to measure how much BS they would throw at me. I did 6 steps, 5 of which were leetcode and live coding tests. I aced them until I reached the last step, where they forced me to solve an algorithm in C#, a language that is not even part of their stack according to conversations with the company's developers. Well, I'm not an expert in C#, never really used it, and 15 minutes wasn't enough for me to get past the language specifics and the crappy online IDE these tests use lol, even though I could solve the problem easily in some other languages. The industry is full of hacks making decisions, from HR to the C-suite. I don't need a new job, but I sure feel bad for anyone trying to get one in the area. From the number of unemployed engineers I've seen on LinkedIn and job posting platforms in general, I could argue that all these barriers are artificial and the scarcity is not real.
Dear god, I work in the high arcanes of ERP consulting. I could talk for hours about how shitty HR, managers, company leadership and hiring processes are. In fact, I was even fired from a company for refusing to hire based on DIE skin color and gender. A vastly inferior tech person with good schmoozing skills is far more likely to get hired over a competent but shy or socially less skilled candidate. It's sad.
The problem is not only the individual programmers and the education they receive, but also how we are organized to collaborate on projects and how the whole software ecosystem works. A programmer may be tasked with using a specific library or framework to integrate with another team's or third-party code, without any documentation or source code on the performance implications of using a certain sequence of APIs. Maybe fetching an element using a getter method is O(1) in memory, maybe it requires a network round trip; if the API is poorly documented it is difficult to know. It is "easier" if the team is in control of the whole stack, but this might not be the case.
Casey is spot on, but the sad thing is: there is nearly no way to explain it to the newer generations. In '87, you could start your 32-bit multitasking machine in 0.5 sec. It went downhill from there. Why?
In the '60s, X was a theoretical landscape discovered and mapped by geniuses.
In the '70s, X was a practical landscape explored by very intelligent craftsmen.
In the '80s, X was a new frontier explored by crazy adventurers with no rules.
In the '90s, X became a business opportunity plundered by moronic fools.
In the '00s, X was the "startup" era of consolidation of moronic behaviors.
In the '10s, X is an industry generating billions from the crappiest toxic products.
In the '20s, in X you cannot stop hearing about "best practice" and "agility" from the most ignorant fools, with no technical knowledge, no ethics, and an *actual* waterfall mindset.
Let ChatGPT take over, there is nothing to be salvaged there (especially intelligence).
Replace X with any discipline you want.
What drives me nuts is that we don’t seem to be waiting on anything. I have 20 cores sitting idle, any one of which will boost nearly double if a single-threaded workload happens. NVMe SSD, sitting idle, yet Everything. Takes. Seconds. To. Do. Anything.
I feel like the performance argument is such a trivial tech problem. Yes, there is complexity in things like React frameworks or Python, but you can still solve performance problems. It doesn't really matter what tech stack you use. You just need to know when it is a problem and when it isn't. The reason you might not know is not because you don't understand a very low-level tech (assembly, for example); it's because your coding doesn't lead you to test performance, and THAT's the problem. The problem is developers are not being taught how to test. I don't mean being a very thorough QA engineer testing a webpage. I mean understanding why you test, the history of testing, how it is involved in each and every piece of software you touch, and how there are a lot of implicit standards for what a tested library does.
I don't think you need to know all about software testing to do basic performance tests. It just comes with general low-level programming knowledge. You should know not to waste memory and time creating unnecessary copies of the same object, for example. Unfortunately, modern high-level languages obfuscate this issue.
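A tiny C sketch of what an "unnecessary copy" looks like when the language makes it visible (my own example; many high-level languages do the equivalent implicitly):

    #include <stddef.h>

    typedef struct { double samples[4096]; } Buffer;   /* 32 KB */

    /* Pass-by-value: the whole 32 KB buffer is copied on every call. */
    static double average_by_value(Buffer b)
    {
        double sum = 0;
        for (size_t i = 0; i < 4096; i++) sum += b.samples[i];
        return sum / 4096;
    }

    /* Pass-by-pointer: no copy, the caller's buffer is read in place. */
    static double average_by_ref(const Buffer *b)
    {
        double sum = 0;
        for (size_t i = 0; i < 4096; i++) sum += b->samples[i];
        return sum / 4096;
    }

Both return the same answer; one of them just does an extra 32 KB copy per call that the caller never asked for.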
I can't stand the current standard of what it means to be a "dev" or "software engineer". Linking APIs is not engineering or development, it's arts and crafts. No disrespect, but that is not the same as developing architectures and systems. Since I got into programming I have had the understanding that if I want to be better I need to know how everything works, both at a high level and a low level, and it has made me so much more confident when programming, has made switching code environments and scopes easier, and allows me to communicate better.
The ARM system initiative is not trying to simplify hardware; it's simply trying to incentivize the use of ARM CPUs in data centers instead of AMD/Intel. The rest of the system is the same: same buses, same units, same everything.
1:10:30 Have you heard about Serenum, the computer-hardware-and-software-from-scratch project? The creator started it because of your 30-million-line problem video. He doesn't have billions of dollars afaik :D
When Casey was talking about creating a far simpler operating system that doesn't require millions and millions of lines of code, it made me think of the ARM-based operating system called 'RISC OS'. It has a desktop environment etc. but only runs on a single core, and it is written in about 60% ARM assembly, 40% C code. It would probably be a very good base for coders such as Casey to look at if he was serious about using a smaller/simpler OS that did not use Linux/Unix/UEFI etc. as a base.
I never understand anything about anything when it comes to computers. How does electricity turn into hello? How does electricity turn into hello on a screen and then writing vector.2 = anim "jump" make a 3d sculpture make visual illusions and do a backflip at 9000 fps?
Managers will always take new slow features now over faster features eventually. People assume it'll be a need to rewrite the app in a new stack before fixing it in the current one.
Man! I totally agree about the CSS! I can't handle it, it makes me frustrated, but C++ and Rust are just fine! I don't find them to be a problem. Conclusion - frontend is hard
From about 25:00 this is one of the several reasons why I don't consider Python a programming language, it's more of a scripting language, a glue language. The best python code does next to nothing in Python, other than making real code talk to one another in a heterogeneous setting or when someone isn't really a developer.
I really enjoyed this episode & the points made! Love the term just-in-time learning! However, I can't help but think that all that "knowing how what you're writing will translate after transpilation/compilation" should be tackled at the LSP level in your code editor, or even by static analysis, rather than by memorizing it... even if you do understand all the layers, you can still miss something here & there because of some edge case with the engine or you just being plain tired. Whereas if it showed "hey, this chained .map().filter().sort() is probably not a good idea. Have you considered a .reduce() instead?" it'd be massively more helpful and teach the developer along the way... or maybe it's just me blabbering 😄
Which is necessary, because only the programmer(s) will know what kind of problem they're solving. Trying to solve all problems - including problems that wouldn't ever occur to your software - is exactly the kind of complexity you'd want to avoid. The benefit of high-level code is a productivity increase. But it doesn't need to be at the cost of including mountains of unnecessary failure points or severely degrading control flow.
Casey might not be a famed game developer, but as far as game programmers are concerned, he is definitely in the top 5 most recognizable, mostly due to his online presence.
I am completely self-taught and have a good understanding of C, Python, and Java, yet I have yet to meet another programmer who I can sit down with and just write and discuss code. I want to work in this field but I have no one to talk to.
If someone makes such a chip that is more simple, then you'll be back to square one, because complexity is easy to add fast with new features, which will be built on top of the simplicity. Advertising is also a challenge: if something is so simple that it doesn't need certain features, then you can't use those buzzwords to promote it, and comparison charts will have to put an "X" on that row for your product. I think the real solution is to move towards a world where people are passionate about the projects they work on, beyond the incentive of money that prioritizes fast-paced development over design.
For a number of reasons, I feel like shooting for the mobile phone market with a completely fresh OS would be easier than the desktop market. Simplicity of basic apps (calling, texting, photo capture), an (arguably) more competitive/open market (easier to gain traction, at least), lower expectations for immediate "productivity," etc. I'd imagine getting people to invest in something that isn't Microsoft/Apple for desktop hardware would be a relatively herculean task by comparison, given that most non-technical consumers couldn't care less about "bloat" or "security," no matter how much you stress it.
If you can code CSS, you can understand ASM, which does very little - add just adds. But that framing is a bit disingenuous: various ASM opcodes change CPU registers and flags, might trigger interrupts, and have other effects that are hard to observe (outside of step-through debugging) and that may be influenced by previously run code.
More features == everything is slower is a rather valid argument. The question is, how are those massively complex programs being developed? It's not like we take the previous version, which was fast, as a core and add features to it forever, or alter it without performance regressions in order to facilitate the development of more complex features. What happens in the wild is the adoption of increasingly complex and abstracted frameworks and libraries, on top of managed modern languages, in order to let large teams of developers quickly add those new features. Those frameworks and languages dictate a large part of the fundamental architecture of the software and also make an implicit trade-off between ease of development and performance that is opaque to the implementer most of the time. Worse still, the trade-off is not made in large chunks in a few spots; it is made silently almost everywhere, and each new feature that has a sliver of cross-cutting concern will invariably use some part of the framework's heavy (often global, in the main loop) machinery that will slow things down. Look at the immediate window: if that particular widget is a complex WPF piece of crap in VS 20xx, it is very hard for a developer tasked with a performance improvement to do much about it and get close to the machine's performance, since that person would have to scrap that entire control, nay, the entire codebase!
Yep. Software is a complex organism shaped by many agents. Even if every developer was omniscient, that does not make them omnipotent, nor does it mean they have incentives to use their knowledge or power for the things others want them to.
For any programmer wanting to learn the "low-level stuff", I recommend the book Computer Systems: A Programmer's Perspective, by Randal E. Bryant and David R. O'Hallaron. As the name suggests, it teaches you how the computer and the operating system work from a programmer's perspective and also demonstrates various low-level optimizations. It's also pretty rigorous.
Yeah, I agree with the commentary on the 30 million line problem. However, I think this will mainly start getting fixed (but with great effort) once we stop getting sizable performance gains every hardware generation, which will probably happen soon given the physical limits we're running into with lithography. Because at that point, it won't be node size driving generational performance and covering up all the code inefficiency. It starts becoming a money and energy problem.
This has been one of the best discussions I've heard in a while. My wife is a vet. I've always found it interesting how continuing to learn is required, and even paid for, by her work, and then there are "programmers" who refuse to step out of the box they put themselves in. I find it interesting how so many programmers don't even bother to KNOW (not just think) that their code is performant, much less understand the fundamentals.
1:12:58 Software development and hardware development would have to be integrated for that to happen. You don't need billions of dollars; you only need a couple of million to pay excellent engineers to create that. I'm interested in doing that.
Most of what he said about operating system security and innovation/maintainability applies to web browsers as well, which are really also operating systems if you think about it (I'm pretty sure), except they're built on top of other operating systems. Everyone now is forking Chrome instead of making their own browser. But the complexity would be lower if someone wanted to start from scratch on browsers; not sure anyone would adopt it.
The trade-off of bad code is supposed to be speed of development (not extra features). It could also be that they never learned good practices before working on a project.
Modern applications can exist because we rely on previous abstractions. To throw away those abstractions is simply not feasible. It’s not just a money problem. It’s a time problem as well. Could we roll back our civilization to the stone age and get back to where we are and in a better state? Sure. But it would take decades if not centuries to get back to where we are now.
JS compiles down to ASM@JIT? Certainly didn't think a hardware/raster/vector guru was going to make me feel better about learning JS for first language... I hope that's not why so many people tried to re-write everything under the sun in Node over the last 10-15 years (note: analogously happening with Rust ~5-10 yrs, what's old is new)
1:11:00 No, let's not use USB. It is entirely software, right down into drivers and down into the kernel. To remove complexity, removal of USB is a good place to start.
Complex != bad. One of the major reasons software exists is to do things that are too complicated to implement economically at a hardware level, and the hardware is designed around making that complexity possible. His point is that assembly is basically the programmer's equivalent of Dr. Seuss compared to their usual reading of A Game of Thrones.
@@burger-se1er Complex != feature rich. He's basically using the word to mean 'wasteful and incomprehensible', in opposition to code where it's trivial to reason about what it does for the exact same functionality.
@@Muskar2 No disagreement with the "libraries for frontend are basically bad" part, just the "he basically implies" and the "as" (meaning "because") in the original comment. Casey is very clear that web is generally bad at its job, but "hardware can't do it" is not his reason.
@@burger-se1er missing the bigger point. These languages are just bloated APIs and not foundational tech. To put it simply, you are programming an API, not a language and computer. The critical issue missed is that because the API's purpose is to block access to implementation information, when it's placed in clunky systems that introduce critical problems, the design of the tech inherently prevents anyone from solving the problems for the end users. The language doesn't even need to be assembly-level to be a well-designed language. Even golang is a great example of a language that has simplicity and gives people an allowance to solve a software issue without introducing the complicated API and dependency problems that most languages/frameworks introduce.
If people have never written a simple program in assembly, I'd recommend it. A simple console app which takes in input and formats a string will work. It doesn't need to be complicated.
Casey is spot on. The mainstream dev culture values deep knowledge of convoluted APIs (Android, iOS, JS-of-the-week, Docker, K8s, etc.)--which over time is a waste of brain space because these things come and go--whereas general understanding is what endures.
docker lol, docker is very useful... It doesn't come and go.
@@jordixboy LOL
@@jordixboy In the 6 years I've been at my current company, it has overhauled the virtualization/containerization stack 3 times. Docker isn't immune to the "brand new blazin' fast JS framework" disease.
@@TypingHazard to what did you guys change? wasm?
That is not “dev culture”. It’s the reality of most developer jobs available
I have a theory. Developers should use old machines to code. This would improve software like crazy!
Great software Great code
not a bad idea tbh. Or at least easily simulate worse hardware
Intel Pentium M has instantly made me appreciate performant code
Do you have any recommendations?
Sometimes you don't have a choice. I tried using UE4 on a pentium from 12 years ago, compiling shaders and classes all day, absolute misery.
I love the take on learning. I’ve been self taught, hit a wall where I didn’t know what I didn’t know, and took a bootcamp course covering a wide breadth of tech. Now I still know nothing, but I know what I don’t know, and am actually building stuff!
Right on! Genuinely that’s awesome!
I love listening to Casey. Every time I hear him speak I learn something.
I have returned to once again learn a lot I didn't realize the first time around. (And hopefully add my 2 cents to the algorithm by commenting.)
Me too. He’s great
The game Turing Complete is fantastic, it's a puzzle game with a circuit simulator where you start with basic logic gates and boolean algebra, and it works up to you building a very simple computer with 1 byte instructions, and then a more advanced one with 4 byte instructions and a few address modes. It's still in early access but there's about 30-40 hours of gameplay just in the campaign.
Hey thanks for pointing me to this game, it's awesome.
Also, most of the things Zachtronics has made
"Next.js is like the mumma bird pre-digesting the food for the baby bird" is such an amazing analogy.
1:05:33
The idea of userland drivers is really interesting. The Nintendo Switch has something like this: iirc the graphics driver, SD storage, and other drivers are all just regular user processes with a higher permission level. They have a very small (custom!) micro-kernel (only using around 500KB of kernel space for the binary), but basically everything is driven by these userspace drivers. Then each user process does IPC via syscalls, and the kernel just kinda mediates. Their IPC interface is a mess, but the rest of the system is relatively simple. I think this is probably driven by the need of security audits like Casey was talking about.
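Roughly, the shape of that pattern looks something like this (a generic sketch, not the Switch's actual API - ipc_receive/ipc_reply, the message layout and sd_hw_read are all made-up placeholders):

    #include <stdint.h>

    typedef struct {
        uint32_t opcode;      /* e.g. SD_READ_BLOCK */
        uint64_t block;       /* which block to read */
        void    *reply_buf;   /* caller's buffer, mapped in by the kernel */
    } IpcMessage;

    enum { SD_READ_BLOCK = 1 };

    /* Hypothetical syscalls provided by the microkernel: it only moves messages around. */
    extern int ipc_receive(int port, IpcMessage *msg);
    extern int ipc_reply(int port, int status);

    /* Talks to the actual hardware via memory-mapped I/O the process was granted. */
    extern int sd_hw_read(uint64_t block, void *dst);

    /* The "driver" is just a normal user process sitting in a receive loop. */
    void sd_driver_main(int service_port)
    {
        IpcMessage msg;
        for (;;) {
            ipc_receive(service_port, &msg);     /* block until some process asks for I/O */
            int status = -1;
            if (msg.opcode == SD_READ_BLOCK)
                status = sd_hw_read(msg.block, msg.reply_buf);
            ipc_reply(service_port, status);     /* kernel routes the result back to the caller */
        }
    }

The kernel's job shrinks to scheduling, memory protection and message passing, which is part of what keeps it small enough (a few hundred KB) to actually audit.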
37:30 "taught that it's virtuous to not know how a computer works" . This is what has frustrated me so much - people bragging about ignorance and how they can just pay AWS or ChatGPT as if that's actually "smarter" that knowing basic information you need to host a web app.
The thing is that for efficiency and productivity we specialize. You wouldn't deem it necessary for a fast food employee to learn how to cook like a chef because even if they knew they are working with cost, time, equipment and human resources constraints that won't allow them to put that skill to use.
Like most industries, development requires acceptable employees in abundance and people who actually know what they are doing are necessary but in much smaller numbers.
Programming like all trades is a skill that is honed, some people will stagnate but some will expand their knowledge with time. Casey has had time to hone his skill, he also basically saw the industry grow from nothing to a huge complex mess.
People tend to overvalue the things they know and undervalue the things they don't. Here are a few front-end subjects, that are complicated, very useful and have absolutely nothing to do with how a computer works: optimizing CLS, working against industry giants performance bottlenecks (cloud fonts, gtm for examples that immediately come to mind), caching, critical CSS loading... People closer to the metal tend to undervalue these skills, and a 25 year old developer can only know so much.
As Casey mentioned, docker is just a tool, a webdev these days needs to understand hundreds of tools. Docker sure, but also the stuff they are dockerizing, linux, nginx, a sql server, a nosql server, redis, rabbitmq, their programming language, html, css, javascript, how the cloud works, ssh, HTTP rest, XML, SSL certificates, SMTP servers...
People really only understand a subset of everything they need to use for their jobs, and I get that understanding the CPU can be low on the list of things to learn; most people won't be doing much bulk processing. It's always fun to ask people what kind of "big data" they worked with - the workloads are usually ridiculous.
So we all have gaps in our knowledge and need to get stuff to production. What I'll agree with is that people shouldn't be proud of not knowing, but I'll always tell devs to use what they know over what they don't if there are actual deadlines and costs behind the thing, because it's their asses on the line.
@@jeremiedubuis5058 "So we all have gaps in our knowledge and need to get stuff to production, what I'll agree with is that people shouldn't be proud of not knowing but I'll always tells devs to use what they know over what they don't if there are actual deadlines and costs behind the thing because it's their asses on the line."
Exactly, people like Casey saying there is no excuse not to know assembly language or how a CPU works if you are capable of understanding how React or Docker works are missing the point. I do not understand people who claim to be proud of not knowing, in fact I have not met such a person really during my time at software engineering. But, for a e.g. a frontend Web engineer, knowing how a CPU works is not going to pay the bills the same way knowing React does.
@@Doriyan If you know how a CPU works, it'll take minutes to tell whether a JS library is bad or not, simply by reasoning about its performance, because lacking performance, bloat and lacking maintainability all strongly correlate. That's a skill that's very much in demand. So it _does_ pay the bills to know fundamentals, because it helps you make informed decisions, even if you work 99% in high-level code. Learning to understand the CPU to a reasonable level is not a full-time job; it's taking a few weeks' course or similar, and then you just know important fundamentals that will likely last decades. It should just have been part of the curriculum, and I'm not sure why it isn't in so many places - including at my Uni.
@@Muskar2 I call that plausible but unrealistic in practice. In order to understand, in a *few minutes*, whether some bunch of code is well optimized or not, you need to understand each of: the code itself, the problem it is trying to solve, and the efficient ways to solve that problem. Additionally, when working with interpreted languages you need to understand how the interpreter itself runs under the hood. Can it be done? Sure. But in a couple of minutes? I'm not so sure. Some of the libraries these days are massive, and you could not even read the majority of the moving parts in just a few minutes, not to mention understand them properly. You can spot stupid stuff like unnecessarily taxing string operations and triple for loops etc., but that is quite surface-level in my view and has little to do with knowing how a CPU works in detail.
Call me cynical but for the majority of programmers in the kinds of jobs I have been in the past decade or so (cranking out Web App features on an assembly line at non-FAANG megacorporations) doing that based on a short course on CPU architecture is just too much to ask. Maybe you exist on a different level of the field where that would make a difference but on that I will have to agree to disagree.
For what it is worth, understanding how a CPU works and working with assembly code *was* part of my university curriculum but I find that it did not help me much as far as doing actual work in the field I ended up at. But I could be wrong, perhaps I did benefit from these fundamentals unconsciously and do not understand the point of view of someone who did not receive such education.
However, in my experience, I don't generally ask myself "is this code as optimal as it can be regards to its CPU utilization" when writing things, since that is not valued at all from the perspective that we get our marching orders from -- unless it is visibly hindering the app, which happens occasionally. Unfortunately the spirit of the game is pumping out as many features as possible to please the higher ups and that will not change from developer side of things.
@@Muskar2 Strongly agree. Knowing how a computer works at a more basic level absolutely will make your code better. The idea that you "write the code first then optimize it later" is something I almost exclusively see from people who are not very good at programming. While there is something to be said about premature optimization, it's equally true that if you know how a computer works you can write fairly optimized code *on the first try*, in whatever language you choose.
Seeing people talk about how "actually it's okay that I'm specialized, and loading cloud fonts in React is actually very complicated, okay" really just sounds like cope from people who are either unintelligent or simply lazy.
Sadly, there was NO mention of Handmade Hero... I think that was the series that got Casey more widely known.
is there an official word on this? From the intro it sounded like he's not super stoked about the project
@@chrisc7265 probably because it's not the right audience for it since Handmade Hero is extremely gamedev specific.
@@autismspirit Well, they did talk about gamedev in the first few minutes, but maybe Casey intended to attract a more general audience to his new project.
36:00 - it's definitely a culture thing. The Product Manager doesn't care about speed as long as it's not a feature. We live in a world where features trump performance almost always.
then we need to get rid of them, but first we need to start educating users so the users will ask for performance.
also agile methodologies will never prioritize optimization over new features
just make performance a feature
@@Jakov-yf7yz if only you could save that. People don't care if things take a second or two more...
@@Jakov-yf7yz it'll just get pushed down the priority list and never make it to the sprint
I was so excited when I saw Casey's name in the title, the episode did not disappoint!
Where have all these awesome youtube channels been all the time? Lately there's quality content out there and you're one of them. subbed!
For all I know: inside the computer there are little goblins with hammers and notepads, running around doing magic.
Many are saying this
Computers really sped up when processors moved from MegaHertz to GoblinHertz
@@GlowingOrangeOoze lmaooo you got me rolling
@@GlowingOrangeOoze I'm a bit concerned what happens if we reach TrollHertz
@@nryle Laws of nature prevent a 10GoblinHz from existing, but we already have Troll_uOps and even Quagmire_uOps due to SIMD, multi-threading and instruction-level parallelism.
Visual Studio is still slow. I use it all the time for work. It’s a little frustrating. Still not as bad as MS SQL Studio 😊
Also, it sounds like Assembly is kinda like Latin. Nobody writes it anymore, but being able to read it is still quite useful for many applications.
No, assembly is not Latin. Everything is powered by assembly, even if it's not written in it directly, whereas Latin is mostly not used anymore. I don't think there is a good analogy with languages, but I guess you could say assembly is to programming what derivatives are to physics: almost everything will include one, even if it's mostly hidden at the introductory and intermediate levels.
Reading assembly, especially small chunks, is easy. Understanding what is happening in larger chunks is more difficult. Actually writing anything useful and/or interesting with assembly is very hard and takes some creativity. The idea that assembly is "complex" is a misunderstanding of what is difficult. The language itself is not complex (with a few exceptions like understanding memory ordering guarantees), but the solutions used can sometimes be hard to understand because of the creativity involved.
The assessment Casey provides of when Just In Time learning is appropriate, vs when something should be foundational knowledge, is deeply elucidating.
I am a physician and have coded as a hobby for quite a while; more recently I have been working with graphics APIs (WebGL/GPU). I will honestly say that understanding how a proper rendering pipeline works, how shaders work, how to transform data in 3D space, and how to communicate between the CPU and GPU has probably been more enlightening to my personal knowledge base than all of medical school. It has even further solidified my understanding of our neurophysiology: for instance, the way our retinas process data and the way those signals propagate through the brain to get the right image out of the data is similar to how you apply multiple matrix transforms to properly project an object in 3D space. Even just internalizing the concept of what exactly a "transform" is, and thinking about things so regularly in terms of matrices, has been quite the trip for me. I honestly think that everyone would benefit from understanding what it takes to get a triangle rendered on the screen. 3D rendering is exhilarating; nothing has ever gotten me to open my old physics textbooks outside of academic need, but thinking about new rendering techniques makes physics exciting. I hated linear algebra in undergrad; now I'm starting to think linear algebra is the most important math to properly understand to take any field forward. And although I am not an expert in coding, I can only imagine that if coders understood the sheer amount of data that is able to be rendered in milliseconds and re-rendered over and over again to produce 3D scenes, it would help them fundamentally transform how they think about approaching their domain of knowledge.
The story of my Python framework for pipelines… I am always thinking about how many iterations some types of computations will take and where to offload them to get the acceleration I know the hardware can provide. Ironically, I have not had to incorporate Cython yet (3 years later), because understanding the code-hardware interaction has made my program thousands of times faster than anything that had been running in my production environment for the decade before I showed up.
I'm not a developer. I'm not aspiring to be one and I certainly don't want to be one... but here's the thing. I work with ERP software that is slow af, for which the company I work for pays about 100k/year for support. I had to write my own program that interacts with the same database in order to be more productive. And before you judge me, I did create a ticket with the software company, but their reply was: "That is a limitation of the SQL Server engine." That was a bunch of malarkey. In just half a day I had set up a little program that worked 10x faster than their shitty software. It just had to update, delete or create some rows in the database.
How are you not a developer if you've literally written software? Knowledge of SQL isn't that simple or easy to acquire.
SQL takes an afternoon to learn, maybe a weekend to really get good. It's really not as hard as people think
I imagine you used a basic library like C#'s SqlDataReader and they used an ORM like Entity Framework. I've written a lexer for SQL and a partial ODBC driver. Even SQL is actually pretty complicated for what it does, and if you don't need its core features like fault tolerance, your application can be made additional orders of magnitude faster than just circumventing the wasteful ORMs. Let's say you're querying a Customer "row" with e.g. 4kB of data. That's less than the L1D cache on all modern server hardware (typically 32kiB+ per core) and most client hardware, so you can process it very quickly (sometimes even in a few cycles depending on the work), and fetching it cold from memory is only going to be ~300 cycles, which is roughly ~0.00007ms, compared to SQL, which can easily take a few ms to finish a simple 'SELECT * FROM Customers WHERE id = @CustomerId'.
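To put rough numbers on the in-memory side of that comparison, here's a minimal C sketch (the struct layout, field sizes, and names are made up for illustration, not taken from the comment above): a ~4kB record fits comfortably within a 32kiB L1D cache, so once the data is in memory even a naive linear scan is measured in nanoseconds, while a networked SELECT round trip is measured in milliseconds.

#include <stdint.h>
#include <stddef.h>

// Illustrative record of roughly 4kB - small enough to fit in a 32kiB L1D cache.
typedef struct {
    uint64_t id;
    char     name[256];
    char     notes[3800];
} Customer;

// Plain in-memory lookup over an array that's already loaded.
// Even a cold fetch from RAM costs on the order of hundreds of cycles,
// versus the milliseconds a networked 'SELECT ... WHERE id = ...' round trip takes.
const Customer *find_customer(const Customer *customers, size_t count, uint64_t id)
{
    for (size_t i = 0; i < count; ++i) {
        if (customers[i].id == id) {
            return &customers[i];
        }
    }
    return NULL; // not found
}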
ABAP crew here?
@@Muskar2 I used C++ with SQLAPI++. You're probably right and they might be using an ORM. Just to put things in perspective: their software updates 1 row with 3 fields in about 10 seconds. My solution is almost instant. And there aren't any additional queries being run so it must be the code (I ran a trace using MSSQL Server Profiler).
I asked a dev senior to me on my team once about why we don't focus more on performance and was told "Oh we don't need to worry about performance because we can always throw more resources at it."
compute is cheap and the bottleneck is always the network anyway
code that is fast but incorrect is worse than code that is slow but correct. code that is not written at all is the worst kind of all.
@@ChamplooMusashi no code is the best code, just not for your pay check. Zero bugs, less complex, easy to maintain, better for security, super performant.
"Honey, why are you pouring the milk out into the sink?" "Because it's cheap"
If it is running on an end users machine it's not even your compute you are throwing away.
Yeah, I'm sure all the data sent is necessary and time spent waiting for the network isn't wasted at all.
My first Visual Studio was VS6, back in 1998, and I thought it was slow. If you try Squeak Smalltalk, it loads instantly, and Smalltalk is at the top of dynamic languages next to Common Lisp; however, Common Lisp is a lot faster than Smalltalk.
This was a super interesting episode! I think that Casey is right in a lot of ways that the root of slow software is a lack of education / cultural issues. Something he didn't mention though, that falls under the category of culture I think, is the over-emphasis on avoiding "premature optimization" and the obsession with TDD. It's regularly preached that we should hack together the first thing that comes to mind to get our feature delivered and only consider improvement when stakeholders complain about something.
Premature optimization is a real issue. The problem is that they turn "don't prematurely optimize" into "never optimize".
You get told "the feature works? Ok move onto the next one" before you can go back and fix the performance issues.
You're supposed to hack together a prototype to prove the concept and then fix it so it's actually made properly. But instead, prototypes get pushed as finished products for the sake of maximizing short term profits.
For my computer engineering degree, I had a semester-long lab course where we each built a thermostatically controlled thermoelectric heater/cooler with an LCD screen and keypad, using a Freescale HC11 processor, and we had to do everything in assembly language. I do think it's a valuable experience.
Casey is always a delight.
Actually, the reason there is so much slow software is exactly that: being taught it isn't important. There is a single phrase that exemplifies this attitude: "Premature Optimization."
You can't go from a bubble sort to a quicksort by simply improving it. It has to be written that way from the start. The performance has to be considered at the start, and the aspects of what makes code fast and what is important have to be considered from the get-go.
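To make the bubble-sort-to-quicksort point concrete, here's a minimal C sketch (mine, not from the comment above): the two algorithms share no structure, so there's no incremental path from one to the other - you have to design for the faster one up front.

#include <stddef.h>

static void swap_ints(int *a, int *b) { int t = *a; *a = *b; *b = t; }

// Bubble sort: nested loops of adjacent swaps, O(n^2) comparisons.
void bubble_sort(int *v, size_t n)
{
    for (size_t i = 0; i + 1 < n; ++i)
        for (size_t j = 0; j + 1 < n - i; ++j)
            if (v[j] > v[j + 1])
                swap_ints(&v[j], &v[j + 1]);
}

// Quicksort (Lomuto partition): group elements smaller than a pivot, place the
// pivot, then recurse on both sides - a completely different shape of program.
void quick_sort(int *v, size_t n)
{
    if (n < 2) return;
    int pivot = v[n - 1];
    size_t store = 0;
    for (size_t i = 0; i + 1 < n; ++i)
        if (v[i] < pivot)
            swap_ints(&v[i], &v[store++]);
    swap_ints(&v[store], &v[n - 1]);            // pivot lands in its final position
    quick_sort(v, store);                       // elements smaller than the pivot
    quick_sort(v + store + 1, n - store - 1);   // elements greater than or equal to it
}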
33:30 out of curiosity what search terms do I need to use in order to find this demonstration? I'd 100% believe it, but I want to find the actual video because I feel like that's a pretty visceral example of the concept. (it's easy to say "no, seriously, computers are so fast most things should be happening instantly" but actually showing someone the difference tends to make it hit home just how bad a lot of modern code has gotten)
The channel name is Molly Rocket. The video is called Twitter and Visual Studio Rant, from April 6th 2020.
Video with ID: GC-0tCy4P1U
The comparison Casey makes about game designers is very similar to what you can make about programmers. You can say that the average programmer just repeats the same things over and over but you'd expect any programmer with a bit of experience to be able to come up with the architecture of the program, not just copy paste from stack overflow.
With game design it's the same. Any experienced game designer should be able to come up with original game ideas, sometimes very experimental, sometimes more grounded in what players already know but with an original twist. But then just like in programming you have a lot of specialization:
- You have game designers who can write gameplay code to some degree and focus more on prototyping, which is especially important in action games where you need to tweak controls a lot so the experience feels right
- You have game designers who can't code but are really good at coming up with complex systems with lots of stats and numerical interaction. This will be your excel guy tweaking formulas
- You have people who are not so much into mechanics or coding but know a lot about narrative. This will be the guy writing quests or branching stories.
Those are some examples and yeah, just like in programming, if you work on a multi-million project, unless you're one of the top guys then your contributions are very small and heavily structured beforehand.
one of the things I noticed a few years ago (but maybe is better now) is devs not caring about performance because 'computers are fast' and 'memory is free'
like no, these things all add up to make modern computers wayy slower than they could be when everyone has this mindset. It's even making its way into game dev.
Also apps wanting to be flashy rather than performant and easy to use
Casey makes me excited about code again!
The Spotify app is really bad; on my 165Hz monitor you can see it drop frames when scrolling. Something about whatever React UI or Chromium they're using is absolutely horrible, and this is on a Ryzen 7 3700x and RX 6800. Finally, after upgrading to a Ryzen 9 9950x, a $500 CPU, it doesn't drop frames when scrolling (mostly). I bought the CPU for other things of course, but I can't imagine how bad it is on the average laptop. And yes, it is running with hardware acceleration, and I tried EVERYTHING: enabling Vulkan, messing with a bunch of flags; it's just that slow. Seemingly the web app has the same problem as well, but in Firefox it seems to be a lot better, which I find to be an odd outlier as Chromium is usually faster. Should I care? Likely not, but it's mildly infuriating because I can easily tell.
Great episode! Thank you both for your time.
so glad you enjoyed it!!!
If I understood the point you were trying to make around the 50 minute mark, it's that having these 'all-singing, all-dancing' frameworks/languages that are very high level means that you end up using whatever it is they've given you, and that's often not really what best solves your problem and *only* your problem. This is where a huge number of performance problems come from, because you end up using two or more complicated functions to do something, because you needed some of the behaviour from each - say, finding the largest value while also removing zeros from a container of unsigned ints. Many programmers would write
sort(container) # sort ascending - the zeros end up at the front
slice = container.find_last(0) # finds the index of the last 0
container = container[slice + 1:] # copies the tail of the container, dropping the zeros
largest_value = container[-1] # take the last element of the container - we could have done this immediately after the sort
But if you're writing the code yourself, this is unnecessary. Create a new container of the same length, iterate over your initial container and copy over any non-zero number, while also keeping track of the largest number you've seen.
(You use more memory, but it's not complicated to implement at all. You can also just overwrite the previous values in your own container by some offset, incremented each time you encounter a zero, and do it in O(n) operations and O(1) memory, or a number of other solutions, but I picked a very simple one for this example).
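For reference, here's a minimal C sketch of that single-pass version (the function name and signature are mine, just to illustrate doing the filter and the max in one loop):

#include <stddef.h>

// Copy the non-zero values into 'out' while tracking the largest value seen.
// One pass, no sort, no slicing. Assumes 'out' has room for at least 'count' elements.
size_t drop_zeros_and_find_max(const unsigned int *in, size_t count,
                               unsigned int *out, unsigned int *largest)
{
    size_t kept = 0;
    unsigned int max_seen = 0;

    for (size_t i = 0; i < count; ++i) {
        if (in[i] != 0) {
            out[kept++] = in[i];   // keep the non-zero values
        }
        if (in[i] > max_seen) {
            max_seen = in[i];      // track the largest value in the same pass
        }
    }

    *largest = max_seen;
    return kept;                   // number of elements written to 'out'
}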
But this is not encouraged in higher-level languages. You're meant to use the built-in stuff, rather than roll your own. The problem is that, as your building blocks get more complicated, the chance that someone has built some construct that is *exactly* what you need goes down enormously. You can chain them together, as described above, but this is where slow code comes from.
In simpler languages, like ASM or C, you pretty much have to roll it yourself, so you'll write something that is precisely what you need - and in terms of productivity, I can write the above for loop in C faster than I can look up which functions to call on a Python list in the example code above (how do I find the last 0, anyone?).
FWIW, Python is probably one of the biggest culprits of this, because simple for loops in Python are so slow that you're massively encouraged to use the built-in 'special' functions that will execute C. These combinations of functions do far more work than you actually need, but using the interpreted for loop will be even slower, so you're basically screwed either way.
In my experience, those data science problems are actually an area where average devs perform decently, because they often care about Big O notation. But it's the wasteful abstraction boilerplate and bloated APIs that hide the vast overgeneralizations and busywork, take finicky inputs, are often poorly documented, etc.
E.g. OOP objects that are just/mostly functions/methods are a typical source of boilerplate in my experience. I'd strongly recommend trying to minimize lines of code in your codebase (i.e. don't do SRP and short functions). Particularly taking something with several layers of wrapper functions and seeing if it can be inlined for its callers (and keep repeating that until the code's logic prevents you from doing so). You'll quickly find that the code was doing the same preparation work multiple times (perhaps even overlapping database/network queries), and that code understanding drastically increases. But immutable "pure" functions that are reused can stay separate and small if they need to be (i.e. often those dubbed 'utility functions').
This talk got me to sign up for Computer Enhance. Great talk, and your advertising totally worked.
By far, my favorite episode. You should make a "Maybe Programmers are Just Bad" Part 2.
Or rename the podcast to it, lol
1:05:49 It's worth noting that, in Windows at least, most of the drivers are in user space. Only drivers which require (or want) performance live in kernel space; at this point video drivers are the only big ones.
I SO MUCH ENJOYED this podcast! What a chad Casey is. And really insightful questions. Huuge inspiration and lots of food for thought. Keep up the good work Backend Banter!
Very good point about the current state of affairs (hw, firmware, os bloat) precluding any innovation. Truly new General Purpose OSes are extremely unlikely to come to be, because of the baggage of useless stuff that needs to be reimplemented in order for it to be even minimally viable.
The altruistic part is so important on a much larger scale as well, and regarding waste in general. We show no care because there are so many resources. Right up until we've squandered it all and it's time to think about economy, because otherwise we're dead in the water. In software, people don't care because hardware is cheaper than people. It would take conscious effort to start making things efficient, because "it's the right thing to do", especially because it'll probably be more expensive in the short term. The market rewards immediate cost optimization though, so here we are.
I watch anything with Casey. Just hoping some of his greatness rubs off on me. Love to hear what and how he thinks.
Casey is the best!
He keeps mentioning how much easier it is to understand assembly and gives examples like, "This instruction adds two numbers together". Yea he's right and that is simple but he doesn't address the real difficulty of reading assembly and that is understanding or identifying the high level objective of the assembly code you're looking at.
You could look at a function that implements Fibonacci sequence in assembly and be able to pretty quickly identify what each instruction is doing, but how long would it take before you figure out that the function is doing the Fibonacci sequence? Compare that to looking at an implementation in javascript or some high level language and most people familiar w/ Fibonacci would probably identify it immediately. I'm sure someone well versed in assembly could probably identify it almost immediately as well but that's because as they quickly gloss over each line of assembly they are maintaining the mental model or structure of the block of code and when they reach the last line understand what it's doing as a whole. That part takes time and practice in the language.
You also need to know how programs are structured to better understand what's going on. When you see a register being added to, is it part of the logic of the function, or is it adjusting the stack pointer to create space for local variables?
However I think his point does stand. Even if all you can do is interpret assembly instructions line by line, it's still very valuable when you need to debug a crash or something, you can quickly see that some memory location was 0 indicating a bug in the code, or look at the generated assembly output of some calculations to see that it wasn't vectorized or something w/o having to interpret the entire block as a whole.
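As a small illustration of that point (my own example, not from the video): in C the intent of this function is obvious at a glance, but its compiled output is just a short loop of moves, adds and compares, and the "Fibonacci-ness" has to be reconstructed by the reader from the data flow between registers.

#include <stdint.h>

// Iterative Fibonacci: fibonacci(0) == 0, fibonacci(1) == 1, fibonacci(2) == 1, ...
uint64_t fibonacci(unsigned int n)
{
    uint64_t prev = 0, curr = 1;
    for (unsigned int i = 0; i < n; ++i) {
        uint64_t next = prev + curr; // each step is a single add at the machine level
        prev = curr;
        curr = next;
    }
    return prev;
}

Compiling it with something like 'gcc -O2 -S fib.c' and reading the output is a quick way to practice exactly the skill described above.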
Great episode, will def. be checking out his substack to help stay more informed myself :)
I don't think he's advocating trying to understand a program just by looking at the assembly. I think the scenario Casey is imagining is where you have the high-level code and you ask the question "what is this actually doing?" for some reason (like performance), so then you read the assembly to check. Like being able to select the "Go To Disassembly" option in Visual Studio to see what the C compiler actually output and understand it.
This is what assembly analysis tools like Godbolt are for. They tell you which source line lines up with which asm, so you don't have to figure any of that out yourself as a human. Some can even put comments in the asm to explain some of the more complex bits of it. The point of learning to read the asm is that, once the code is compiled, you can see why it's so much slower than it could be by spotting needless operations that could be compiled out with some tweaks to the source. And with these tools, you can make source tweaks and see how they impact the asm, making the speed better or worse, etc.
57:30 god... I'm deep in that struggle right now. I am in a deep refactoring session because another dev just piled sh*t on top of some other sh*t. I've just deleted an entire class that was useless, felt great. But also, do some cleaning from time to time....
You can tell Casey has been ready to vomit everything on his mind for a while now lol. I used to watch his game dev on twitch a while back and he's the first person that sort of showed me how in depth coding can get (this was before I was a programmer).
thank you very much for these low level talks! I love them so much, it's hard to express my feelings with words! Thank you very much again, awesome channel, amazing podcasts!
I have to say, very good questions from the interviewer! And very good answers of course :)
Casey -> IncludeOS was interesting and ties in with what you were saying around the 1:00 mark
1:04:00 Monolithic operating systems ARE a mistake. Why the f are we requiring hardware code to be in the kernel? Where does that even make sense?
I'm a Substack subscriber of Casey's and I'm very much looking forward to watching the pain he mentions about moving off of Substack. His pain is my joy, but also I get to learn.
That classification of Just In Time learning and where it does and doesn’t apply is so clutch. I’ve been doing this as a professional for almost 20 years and he absolutely nailed something that’s been bugging me for years - what should/shouldn’t I be spending my time learning? What’s okay to learn JIT style and what should be directed learning?
For the choice of assembly language I'd recommend RISC-V, for the simple reason of lack of banging your head against hysterical raisins. Also vector code is much more readable than SIMD but drives home the same important points.
Where things get complicated on that level is understanding how modern CPUs execute code, superscalar, pipelining, speculation, all that goodness. But for a basic understanding all that's necessary to know is "modern CPUs are complicated behind the veneer of pretending to be a microcontroller, that's why you're seeing compilers produce funny code sometimes", writing code that out-performs compiler output is a specialisation of its own nowadays.
Oh you got the GOAT on. Let's go!
Memory-mapped I/O and drivers in user land? Sounds like AmigaOS from the 90s! (Albeit with no kernel / user land separation at all (!))
I'm in a company that uses 10 different languages or frameworks, with SQLite, SSMS, Azure, etc., a little bit of Docker here and there, and now we're starting to use Dart to port over our apps, so there hasn't been time to really delve into the nitty gritty of each language. But it is definitely a nice feeling being able to code and understand what each one does, maybe not in the most proficient way, but it's enjoyable.
The "Day in the life of" video genre points to a trend that plenty of software developers are "just good enough to pass an interview and schmooze their superiors, but not good enough to consistently deliver good work". Unfortunately, a whole lot of these kinds of developers were laid off in mass by these FAANG companies at the cost of "accidentally" laying off so many good developers as a way of getting rid of their seniority status AND their really high salary to adjust to the reality of a non-zero interest rate environment that created this mess.
As a result, the barrier of entry is higher simply to have better developers than ever before to clean up code that is now messier than and yet AI will be the big scapegoat for the decades worth of bad decisions by higher ups and lack of care for writing software with hardware constraints. Nope, I'm not blaming programming languages for this stuff as bad logic can be written in even the best of languages like Jai. There's a huge problem with requirement analysis that makes it hard for anyone to properly solve problems without looking up what a rando said on Stackoverflow or the latestLLM hotness.
Well, also consider that the very nature of the industry these devs exist in encourages this type of behavior. Being a legitimately good programmer doesn't get you much unless you're a savant, in which case of course you do whatever you want. For the rest, it's deliverables, not a collection of improvements on an old code base that in aggregate make the app run 10x as fast, silky smooth, respond exactly as expected, and encounter no errors.
That's not a shiny new feature, that's just "polish"
So why spend time honing your craft when your work will be thrown out or shipped off to a new developer at the whim of the 5th new project manager in as many years?
Companies don't measure software projects on how good their software is, only on how many features they can cram into the marketing page. The problem isn't the programmers per se, the company culture forces them to work a certain way.
@@rumble1925 At that point, they may as well just use an AI to save potential software developers from their torment. Then again, maybe the corpos get off on the developers suffering from low pay and chronic stress since that's apparently worth more than stress saved from berating the developers 24/7.
I wish interviews were easier than the jobs. I just went through a process to measure how much BS they would throw at me. I did 6 steps, 5 of which were leetcode and live coding tests. I aced them until I reached the last step, where they forced me to solve an algorithm in C#, a language that is not even part of their stack according to conversations with the company developers. Well, I'm not an expert in C#, never really used it, 15 minutes wasn't enough for me to bypass the language specifics and the crappy online IDE these tests use lol, even though I could solve this problem easily with some other languages.
The industry is full of hacks taking decisions, from HR to C-suite.
I don't need a new job, but I sure feel bad for anyone trying to get one in the area. From the number of unemployed engineers I've seen on LinkedIn and job posting platforms in general, I could argue that all these barriers are artificial and the scarcity is not real.
Dear god, I work in the high arcanes of ERP consulting. I could talk for hours about how shitty HR is and managers and company leadership and hiring processes are. In fact, I was even fired from a company for refusing to hire based on DIE skin color and gender.
A vastly inferior tech person with good schmoozing skills is far more likely to get hired over a competent, but shy or socially not as skilled candidate. It's sad.
The problem is not only the individual programmers and the education they receive, but also how we are organized to collaborate on projects and how the whole software ecosystem works. A programmer may be tasked with using a specific library or framework to integrate with another team's or a third party's code, without any documentation or source code on the performance implications of using a certain sequence of APIs. Maybe fetching an element using a getter method is O(1) in memory, maybe it requires a network round trip; if the API is poorly documented it is difficult to know. It is "easier" if the team is in control of the whole stack, but this might not be the case.
Casey is spot on, but the sad thing is: there is nearly no way to explain it to the newer generations. In '87, you could start your 32bit multitasking machine in 0.5 sec. It went downhill from there. Why ?
In the '60s, X was a theoretical landscape discovered and mapped by geniuses.
In the '70s, X was a practical landscape explored by very intelligent craftsmen.
In the '80s, X was a new frontier explored by crazy adventurers with no rules.
In the '90s, X became a business opportunity plundered by moronic fools.
In the '00s, X was the "startup" era of consolidation of moronic behaviors.
In the '10s, X was an industry generating billions from the crappiest toxic products.
In the '20s, you cannot stop hearing about "best practices" and "agility" from the most ignorant fools, with no technical knowledge, no ethics, and an *actual* waterfall mindset.
Let ChatGPT take over, there is nothing to be salvaged there (especially intelligence).
Replace X with any discipline you want.
What drives me nuts is that we don’t seem to be waiting on anything. I have 20 cores sitting idle, any one of which will boost nearly double if a single-threaded workload happens. NVMe SSD, sitting idle, yet Everything. Takes. Seconds. To. Do. Anything.
I feel like the performance argument is such a trivial tech problem. Yes, there is complexity in things like React frameworks or Python, but you can still solve performance problems. It doesn't really matter which tech stack you use; you just need to know when it is a problem and when it isn't. The reason you might not know is not that you don't understand a very low-level tech (assembly, for example); it's that your coding doesn't lead you to test performance, and THAT's the problem. The problem is developers are not being taught how to test. I don't mean being a very thorough QA engineer testing a webpage. I mean understanding why you test, the history of testing, how it is involved in each and every piece of software you touch, and how there are a lot of implicit standards for what a tested library does.
I don't think you need to know all about software testing to do basic performance tests.
It just comes with general low-level programming knowledge. You should know not to waste memory and time creating unnecessary copies of the same object, for example. Unfortunately, modern high-level languages obfuscate this issue.
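A tiny C sketch of the kind of waste meant here (the struct and names are illustrative, not from the comment): passing a large struct by value copies the whole thing on every call, while passing a pointer reads it in place - exactly the sort of cost that high-level languages make easy to miss.

#include <stddef.h>

// Illustrative "large" record, e.g. 4 KiB of data.
typedef struct {
    unsigned char payload[4096];
} Record;

// Pass-by-value: every call copies all 4 KiB onto the stack.
size_t checksum_by_value(Record r)
{
    size_t sum = 0;
    for (size_t i = 0; i < sizeof r.payload; ++i) sum += r.payload[i];
    return sum;
}

// Pass-by-pointer: no copy; the function reads the caller's record where it is.
size_t checksum_by_pointer(const Record *r)
{
    size_t sum = 0;
    for (size_t i = 0; i < sizeof r->payload; ++i) sum += r->payload[i];
    return sum;
}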
I can't stand the current standard of what it means to be a "dev" or "software engineer". Linking APIs is not engineering or development; it's arts and crafts. No disrespect, but that is not the same as developing architectures and systems. Since I have gotten into programming, I have had the understanding that if I want to be better I need to know how everything works, both on a high level and a low level. It has made me so much more confident when programming, made switching code environments and scopes easier, and allows me to communicate better.
31:00 VS debugger slow? Wait until you try VS profiler.
32:55 Where is this video? Can't find it.
"Twitter and Visual Studio Rant" (Molly Rocket) 36:03
Is ARM's SystemReady initiative pointing somewhat in the direction of simplifying hardware and the lines of code needed to boot, etc.?
The ARM SystemReady initiative is not trying to simplify hardware; it's simply trying to incentivize the use of ARM CPUs in data centers instead of AMD/Intel. The rest of the system is the same: same buses, same units, same everything.
1:10:30 Have you heard about the computer-hardware-and-software-from-scratch project serenum? The creator started it because of your 30-million-line problem video. He doesn't have billions of dollars afaik :D
After knowing about his course for months, I think he just convinced me to do it. Unknown unknowns are tricky.
When Casey was talking about creating a far simpler operating system that doesn't require millions and millions of lines of code, it made me think of the ARM-based operating system called 'RISC OS'. It has a desktop environment etc., but only runs on a single core and is written in roughly 60% ARM assembly and 40% C code. It would probably be a very good base for coders such as Casey to look at if he was serious about using a smaller/simpler OS that did not use Linux/Unix/UEFI etc. as a base.
I never understand anything about anything when it comes to computers. How does electricity turn into hello? How does electricity turn into hello on a screen and then writing vector.2 = anim "jump" make a 3d sculpture make visual illusions and do a backflip at 9000 fps?
0 and 1
Really
Very nice.
omg pikuma hi
Oh, hi pikuma! 🙂
Nice that you are there too!
Managers will always take new slow features now over faster features eventually. People assume the app will need a rewrite in a new stack before anyone fixes it in the current one.
Man! I totally agree about the CSS! I can't handle it, it makes me frustrated, but C++ and Rust are just fine! I don't find them to be a problem.
Conclusion - Frontend is hard
If one wanted to learn x86 assembly language, should they choose GAS syntax or Intel syntax?
I personally like NASM syntax the most, but the syntax is largely irrelevant. GAS will be better imo, but Intel ain't bad.
Casey is gigachad
gigacringe
true
Casey is the Alpha of Alphas
Simp harder
@@jofla Yes, unironically still using "cringe" is indeed gigacringe.
Does he still leave comments disabled on his videos?
From about 25:00: this is one of the several reasons why I don't consider Python a programming language; it's more of a scripting language, a glue language. The best Python code does next to nothing in Python, other than making pieces of real code talk to one another in a heterogeneous setting, or when someone isn't really a developer.
I really enjoyed this episode & the points made! Love the term just-in-time learning!
However, I can't help but think that all that "knowing how what you're writing will translate after transpilation/compilation" should be tackled at the LSP level in your code editor, or even by static analysis, rather than by memorizing it. Even if you do understand all the layers, you can still miss something here and there because of some edge case with the engine, or just because you're plain tired. Whereas if it showed "hey, this chained .map().filter().sort() is probably not a good idea. Have you considered a .reduce() instead?" it'd be massively more helpful and teach the developer along the way... or maybe it's just me blabbering 😄
1:12:50 sounds like a good thing for Geohot's tinycorp to start chipping away at. Not billions, but their whole thing is simplicity
49:53 yeah it's simple, that simplicity in language requires offloading most of the complexity of the program to the programmer.
Which is necessary, because only the programmer(s) will know what kind of problem they're solving. Trying to solve all problems - including problems that wouldn't ever occur to your software - is exactly the kind of complexity you'd want to avoid. The benefit of high-level code is a productivity increase. But it doesn't need to be at the cost of including mountains of unnecessary failure points or severely degrading control flow.
Casey might not be a famed game developer, but as far as game programmers are concerned, he is definitely in the top 5 most recognizable, mostly due to his online presence.
I am completely self-taught and have a good understanding of C, Python, and Java, yet I have yet to meet another programmer who I can sit down with and just write and discuss code. I want to work in this field but I have no one to talk to.
If someone makes such a chip that is simpler, then you'll be back to square one, because complexity is added quickly as new features get built on top of the simplicity. Advertising is also a challenge: if something is so simple that it doesn't need certain features, then you can't use those buzzwords to promote it, and comparison charts will have to put an "X" in that row for your product. I think the real solution is to move towards a world where people are passionate about the projects they work on, beyond the incentive of money prioritizing fast-paced development over design.
For a number of reasons, I feel like shooting for the mobile phone market with a completely fresh OS would be easier than the desktop market. Simplicity of basic apps (calling, texting, photo capture), an (arguably) more competitive/open market (easier to gain traction, at least), lower expectations for immediate "productivity," etc. I'd imagine getting people to invest in something that isn't Microsoft/Apple for desktop hardware would be a relatively herculean task by comparison, given that most non-technical consumers couldn't care less about "bloat" or "security," no matter how much you stress it.
Google is trying that (again) with Fuchsia, but they already made the mistake of making Dart the main language you develop apps in.
If you can code CSS, you can understand ASM, which does very little. Add just adds.
This is disingenuous.
Various ASM opcodes will result in changes to CPU registers, might trigger interrupts, and have other effects that are hard to observe (outside of step-through debugging), all of which may be influenced by previously run code.
So, which and how many opcodes does my CSS... ah, forget it.
Millions
More features == everything is slower is a rather valid argument. The question is, how are those massively complex programs being developed? It's not like we take the previous version, which was fast, as a core and add features to it forever, or alter it without performance regressions in order to facilitate the development of more complex features. What happens in the wild is the adoption of increasingly complex and abstracted frameworks and libraries on top of managed modern languages in order to let large teams of developers quickly add those new features. Those frameworks and languages dictate a large part of the fundamental architecture of the software and also make an implicit trade-off between ease of development and performance that is opaque to the implementer most of the time. Worse still, the trade-off is not made in large chunks in a few spots; it is made silently almost everywhere, and each new feature that has a sliver of cross-cutting concern will invariably use some part of the framework's heavy (often global, in the main loop) machinery that will slow things down.
Let's look at the immediate window: If that particular widget is a complex WPF piece of crap in VS 20xx it is very hard for a developer that is tasked with a performance improvement to do a lot about it and get close to the machine performance since that person would have to scrap that entire control, nay the entire codebase!
Yep. Software is a complex organism shaped by many agents. Even if every developer was omniscient, that does not make them omnipotent, nor does it mean they have incentives to use their knowledge or power for the things others want them to.
How do you guys know that Microsoft isn't behind Sublime 3, or that Apple isn't behind Sublime 3, or that Google isn't behind Sublime 3?
Where is Microsoft visual 2004 speed video?
For any programmer wanting to learn the "low-level stuff", I recommend the book Computer Systems: A Programmer's Perspective, by Randal E. Bryant and David R. O'Hallaron.
As the name suggests, it teaches you how the computer and the operating system work from a programmer's perspective and also demonstrates various low-level optimizations. It's also pretty rigorous.
Yeah, I agree with the commentary on the 30 million line problem. However, I think this will mainly start getting fixed (though with great effort) once we stop getting sizable performance gains every hardware generation, which will probably happen soon given the physical limits we're running into with lithography. Because at that point, it won't be node size driving generational performance and covering up all the code inefficiency; it starts becoming a money and energy problem.
Isn't RISC-V addressing a bit of the proposal mentioned in this part: ua-cam.com/video/qqUgl6pFx8Q/v-deo.html
This has been one of the best discussions I've heard in a while. My wife is a vet. I've always found it interesting how continuing to learn is required and even paid for by her work, and then there are "programmers" who refuse to step out of the box they put themselves in. I find it interesting how so many programmers don't even bother to KNOW (not just think) their code is performant, much less understand the fundamentals.
I can reverse engineer executables but I still can’t center a div 🤷♂️
1:12:58 Software development and hardware development should be integrated for that to happen. You don't need billions of dollars; you only need a couple of million to pay excellent engineers to create that. I'm interested in doing that.
Most of what he said about operating system security and innovation/maintainability applies to web browsers as well, which are really also operating systems if you think about it (I'm pretty sure), except they're built on top of other operating systems. Everyone now is forking Chrome instead of making their own browser. But the complexity would be lower if someone wanted to start from scratch on browsers; I'm just not sure anyone would adopt it.
Really good podcast!
The trade-off of bad code is supposed to be speed of development (not extra features). It could also be that they didn't learn good practices before working on a project.
Modern applications can exist because we rely on previous abstractions. To throw away those abstractions is simply not feasible. It’s not just a money problem. It’s a time problem as well.
Could we roll back our civilization to the stone age and get back to where we are and in a better state? Sure. But it would take decades if not centuries to get back to where we are now.
Kool to have Casey on
JS compiles down to ASM via JIT? I certainly didn't think a hardware/raster/vector guru was going to make me feel better about learning JS as a first language... I hope that's not why so many people tried to re-write everything under the sun in Node over the last 10-15 years (note: something analogous has been happening with Rust for ~5-10 yrs; what's old is new).
1:11:00 No, let's not use USB. It is entirely software, right down through the drivers and into the kernel. To remove complexity, removing USB is a good place to start.
At 46:55 he basically implies that most of these libraries for frontend are basically bad as hardware isn’t even able to construct these things.
Complex != bad. One of the major reasons software exists is to do things that are too complicated to implement economically at a hardware level, and the hardware is designed around making that complexity possible.
His point is that assembly is basically the programmer's equivalent of Dr. Seuss compared to their usual reading of A Game of Thrones.
@@burger-se1er Complex != feature rich. He's basically using the word to mean 'wasteful and incomprehensible', in opposition to code where it's trivial to reason about what it does for the exact same functionality.
@@Muskar2 No disagreement with the "libraries for frontend are basically bad" part, just the "he basically implies" and the "as" (meaning "because") in the original comment.
Casey is very clear that web is generally bad at its job, but "hardware can't do it" is not his reason.
@@burger-se1er Missing the bigger point. These languages are just bloated APIs, not foundational tech. To put it simply, you are programming an API, not a language and a computer.
The critical issue missed is that, because the API's purpose is to block access to implementation information, when it is placed in clunky systems that introduce critical problems, the design of the tech inherently prevents anyone from solving those problems for the end users. The language doesn't even need to be assembly-level to be a well-designed language.
Even golang is a great example of a language that has simplicity and gives people room to solve a software issue without introducing the complicated API and dependency problems that most languages/frameworks introduce.
I remember I gave up on Python's C library interface (the one for calling C code from Python) after realizing how awkward it is.
If people have never written a simple program in assembly, I'd recommend it. A simple console app which takes in input and formats a string will work. It doesn't need to be complicated.
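If it helps to have a concrete target, here's the suggested exercise written out as a tiny C program (the prompt text and buffer sizes are arbitrary): read some input, format a string, print it. You can either port it to assembly by hand, or compile it with something like 'gcc -O1 -S greet.c' and study what the compiler emits.

#include <stdio.h>

int main(void)
{
    char name[64];

    printf("What's your name? ");
    if (scanf("%63s", name) != 1) {
        return 1;                      // no input available
    }

    char greeting[128];
    snprintf(greeting, sizeof greeting, "Hello, %s!", name);
    puts(greeting);
    return 0;
}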