Over Engineering is not a "Software Engineering" issue. It's not even a bad thing. In fact it's the opposite. I work with other engineers and it's standard or even legal to do this. Game devs are like F1 MEs. They their job isn't to care about longevity of code. They mostly care about performance and will sacrifice whatever is possible to do it. An ME designing a bus or train will have to add redundant systems that hurt performance for the sake of maintenance and safety. F1 logic doesn't work in that field.
@@chudchadanstud Your comment is literally the "overengineering" of my comment)) 99% of code has no need to be complicated. That over-redundancy matters only in a few specific areas of programming.
@@vitiok78 I think code can be complicated and that's not necessarely a bad thing. Complicated problems most times require complicated solutions. But I definitely agree that you should not add abstraction or complication for a use you think might arise at some point in the future.
The thing Prime describes (at the 0:52 second mark) where you code something up that hides behind an abstraction and accounts for all these possible "future uses" is a code smell, and it has a fancy name: "Speculative Generality"
@@ragsdale9 keyword there being "trying". Most people try - most people fail, by making it hide complexity to give the illusion of simplicity - and ultimately actually making the code convoluted and obfuscated, instead of making it flexible. Flexible, simple and such terms are not easy to quantify to begin with; but the issue pointed at here, being "hiding things behind abstractions that aren't necessary for the program to run well", is a very real and easily quantifiable and identifiable thing. The solution here is to find things NOT to do, instead of "try to make things flexible". Because, as they said in the video - we're wrong. :P Note: I'm agreeing with you here.
@@combatcorgiofficial Bro you're completely misunderstanding him. His comment was in agreement with the original post. His comment basically amounts to: "Speculative generality is also called 'trying to make something flexible without knowing the requirements' lol". The "Without knowing the requirements" being what makes it clear that you're actually in agreement, even if it might have seemed otherwise initially.
"I just want to make it work, and make it nice" is essentially TDD in a nutshell. Just document the stuff you care about via tests and as long as those work all is good. If tomorrow comes and you need to add some other thing, then add the test, make sure everything else still works, and move on with your life.
So the way I'm thinking is that - generally speaking abstractions are usually almost always good and generally aren't even bad for performance if its bad for performance its rarely fault of abstraction instead its fault of bad design. The bigger problem is that most things that we think are "abstractions" in fact are not instead it's just layers of code that don't really abstracts much. Good abstractions is not so much about making code future proof either - future proofing something again is a design goal and certain abstractions can either help it or worsen it. For example you could say Einstein's relativity theory is more future proof that Neuton's laws of nature - as Neuton's laws are fully captured within Relativity framework with more accurately capturing extreme cases - yet we still use Neuton's laws instead of general relativity for a lot of things because it's easier and because it gets job done. What abstractions should achieve is they should help us reason about the solution in simpler terms. In essence they should allow you to think about problem from higher point of view. It's really not easy to create a abstraction most often we just add layers of functions not abstractions. Good abstraction decoupled low level thinking and high level thinking. Writing code in "single level of abstraction" IMHO is one of greatest software design advice ever created. It can be useful to set stricter smaller limits when creating new abstraction - it's much easier to come up with a useful abstraction that is more limited in it's scope than one that is more universal - often abstractions that are more universal are also ones that are either more complicated to use or they make significant performance sacrifice.
I think I understand the differentiation you're trying to make here. I don't think you're necessarily wrong either. It's a good take in something for me to think about, because I want to make sure I'm always formulating my thoughts in the best possible light. It's shocking to see a well formulated thought on a UA-cam comment
@@christoferstenberg3920 if it's non trivial problem you usually need to know some relevant abstractions to be able to jump into it and code it up. If you don't you might end up bogged down by complexity and stuck. Experienced developers know a lot of things from what they've seen in their experience so whenever they're writing code they're already applying a lot of high level ideas. Someone less experienced will struggle more as they need to invent more things on spot. I'd say research well first. Then code. And code a solution aimed at simplicity first instead of going for more generic one especially if you haven't implemented similar thing before hand. Simpler doesn't mean less abstract, quite the opposite. For instance, which is more abstract view of the world, Newtons laws, or Einstein's relativity theory? Relativity theory can do everything what Neuton's laws can do plus it is more accurate in extreme speeds. It can do more, so more abstract? Wrong. It's more specific, it's a more detailed model of how it works, thus it's less abstract. Developers often get concept of abstraction completely wrong. When one tries to implement something that will work in every possible case - well that's the opposite of what is abstract. All abstractions only work on some given range and there are no abstraction that can work well in full spectrum. Goal of a good abstraction is to be just good enough for some particular range of problems and it won't work for edge cases but it will simplify the problem scope just enough so that one can continue solving other problems
I disagree abstraction makes it harder for compiler to know what's actually going on you leaving a lot of optimization and performance backed into the compiler when you go with abstraction regardless wither code abstracted is super fast or not
Look at function signatures? Hopefully people are using structs? I mean why noy put everything in one function? I mean really functions are just an abstraction, why even have arguments those are just abstractions, we can just use global state. Like an insane person who writes 200 line functions (abusing the debugger to get implied state and not necessary state), says good enough, leaves project, says deal with it or repeatedly commit monololithic functions which dont fit the pattern Hard disagree
@@ravenecho2410 I don't think anything you said is opposed to either the video or the comment you are replying to (apart from you having a problem with a function that is 200 lines for some reason). What are you disagreeing with exactly?
Bit confused about how this is in conflict with "Clean Code" though, I don't think clean code ever argued for future proof abstractions or anything like that. Clean Code is more make your code easy to understand and refactor because in 95% of cases it isn't you who will have to change it in the future it someone else so make it easy for them. There is a big difference between trying to future proof your code (guessing now how it will change) and making your code easy to change in the future. You don't have to know how the code will change in order to follow some pretty easy steps to make it easier for that change to happen.
The question in my opinion is what we prioritize - do we wan't a more optimal code or code that will most likely be more readable for another programist, since we sacrifice tons of performance for "Clean Code" as Casey proved in "Clean Code Horrible Performance".
@@TheJamesboink In my experience in 95% of cases it is to be more readable for others. Any performance gains you get by not following this are minor in the grand scheme of things. And the cost impact of your code base becoming unclean or drifting to being a 'ball of mud' is vastly more expensive than the infrastructure savings you make. I once took over managing a team that had spent a few months trying to get the memory usage of the app down below the threshold that would allow us to reduce the size of our virtual machines. No one had ask them to do this, but like so many engineers these days they had watched a few of this types of videos and become obsessed with "optimization". They proudly announced as I took over the team that very soon I would be able to "save a lot of money". I pointed out that the infrastructure cost of the company was orders of magnitude cheaper than their own salaries, and I would gladly give them virtual machines with twice as much memory if it meant that they were able to develop features faster. When I hire a new engineer and it takes them weeks instead of days to figure out the code base the cost of that new developers salary for that extra time would pay for the memory upgrades multiple times over. I would highly recommend any developer have a go at managing for a bit, or even just ask to shadow your manager to see what it is like. You quickly discover that what you were worrying about is peanuts compared to what your manager is worrying about.
I think it's important to note that the "9 ways to hell" is a micro scale thing. You cannot plan things on a micro scale, but you should plan things on a macro scale. I think the best programs come from equal parts thinking/planning(macro) and implementing/iterating(micro).
@@philprager1445 Learn this one simple trick that applies to all system designs without question. System architects hate him! The secret is to always add a queue between services.
This is ok advice if you’re working on a feature, to “just write it” and see what happens. But for designing systems wrong design decision can cause years of developer headaches
That's why you should "just write it", if you over engineer something and then later realise it was the wrong decision, it's much harder to untangle the mess you made. Once you just solve the problem you now have a much greater understanding of the problem domain and can rationally and objectively architect a better long term solution
Oh man, even as a noob dev with barely any work exp, I can already feel this so much with my current project, and it's *just* the front end. Thinking of my app as several, pretty separated modules, I decided to have my React app structured into features. But then I thought, "well maybe I'll need some kind of data mapper/formatter for all the data in *each* module". So I went and made a still fairly simple mapper (maps data from raw values to labels or messages, like "available: false" to "This item is unavailable"), and a formatter (formats things like currency, dates, times). But then I thought, "oh maybe some of these mappers could be shared, like a StartDate field could use the same formatter as the EndDate" so I went and made a "display mode mapper" that **maps each field to a corresponding mapper/formatter**. And now around a month after I started with that idea, I'm now left with some 7-8 different mapping/formatting objects, functions **for each module**, and I'm too scared to even touch them anymore.
Dont feel too bad, how are you supposed to know what abstraction to use and when if you've never used them wrong ? It's good to experiment with these kind of things, évent if at some point you regret it.. It means you've grown !
@@v0id_d3m0n At least I passed it (was my capstone project) and graduated. Luckily I didn't get asked much about the FE side so I didn't have to showcase my garbage structure lol
The problem with these discussions is they are themselves too general and they're lacking examples to consider. As always in dev it's choosing the right tool for the job.
100% agree!! While it's fun and a nice mental challenge to abstract into the future, it's SO MUCH more satisfying, when you already built the thing and know exactly where the issues and bottlenecks are. Too often, I had to fight with things that existed ONLY because of abstractions. People have a tendency to need to show off their programming skills; but the path to true mastery, as with most things, is simplicity. Also the first time I listened to this was with audio only, but my reaction was quite similar to prime's. Came back to write this and watched the video again and laughed seeing him react similarly. :D
Faster typing is a great point! The main benefit of Copilot for me is ONLY the ability to type faster. I program mostly in Go and those "if err..." auto completions pushed my productivity in the stratosphere.
copy & paste, intellisense ,autocomplete, boilerplate, templates, libraries, frameworks , sdks are Nth tier abstractions to try and be able to type faster
😅 a lot of your Boost comes from a program that guesses the right level of abstractions based on the characters you have written already. Yes some like { ... } and if ... else, are heuristics, where these chars have to follow somewhere. But some like GPT or gitCopilot do use high level abstractions and statistics to guess the most likely string of chars following some input chars.
This feels like my conversations with Product and junior devs. I was once told by a senior dev many moons ago, that if someone asks for a gold sphere, build a cardboard box first. They might never need anything more.
@@pepperdayjackpac4521 If you've been asked to build X (Gold Sphere), build the barebones version of X first (Cardboard box), once they have that they might never need anything more (round corners, gold plating) or they actually want something else now
I was once told by a manager not to use if statements... because he didnt like them. You get people who try to lift you up by teaching you and you get people who make up rules to micro manage you and you get nowhere.
I've definitely programmed 10's of times less than either one of you, but every time I coded with a plan vs. without a plan, coding with a plan helps me way more. Usually for me even just writing all my desired steps explicitly helps me immensely to understand what the program should look like. And UML diagrams get a lot of hate, but I think when you're trying to share with someone else what your mental model looks like, they do a pretty good job. Like y'all mentioned, sequence diagrams or whatever will not get every intricacy, but if I start by seeing the diagram then the code afterwards, I can navigate it with way less friction than "where am I? why am I here anyway?"
If this works for you that's great, keep doing it. I think what Casey said about "that's great - I CAN'T" is where they're coming from. Do what works for you and don't get too obsessed with the ecosystem programmers tell you is the "right way".
I think they have a plan! It's just not written out and describes all the edge cases. I don't write anything down but I have a plan, and I know what to be aware about. But those details and egde cases, popup as you are implementing it. And sometimes new insights come up like: "hmmm... I can actually do it like this, then we don't need to X,Y,Z". I often find ways as I am developing to not have to send certain protected attributes like user data or reduce the required set making it easier to comply with regulations. Or find ways to improve/simplify security. If I can use a Managed Instance or a User Account makes a big difference. But you often don't see that at the drawing table, you just have to stand in the proverbial mud and try.
I guess this people never had to work with juniors in a project with "out of my ass" architecture and a big domain with hundred of business rules and validations. "I do not plan or think ahead the solution, I try to implement it right away", that's pure BS; you are working alone or you have no schedule timeline, or your code looks like shit. "Abstractions are bad" my ass. "I just added a lot of setters and getters and everything ran smoothly", sure, in your 1 user, 1 core, 1 thread, 1 BS sequential world.
What I don’t understand is what clean code has to do with drawing diagrams up front to try and understand what you’re building. It’s not like you’re not allowed to do iterative development when you’re practicing clean code. Trying to write stuff “to learn what it will do or look like” and by doing so ONLY implementing what’s necessary is literally Kent Beck’s 4 rules of simple design.
Please release the full interview! I watched it whole on the Twitch VOD, I would hate for that interview to be lost forever. You should interview more Casey Muratori, the discussion was so productive and interesting!
I think the point here is that you can't plan ahead without knowing your requirements and the consequences of each action (which you won't be able to think about while stressed, depressed, rushed, or distracted). Also knowing your execution environment would be part of the requirements.. I have found that the best early optimized code was small bits used commonly in the application being made to be flexible is better than building some large and all encompassing machine. I think once you optimize some things for flexibility it makes the code easier to write and makes the predictions easier to make.
What's even worse is when you are micromanaged and none of the tasks are documented... so you end up second guessing everything you do and at the end your code ends up being re-written anyway because you didnt write every single line the way your senior thought it should be written. The only standards in coding is what your senior dev likes to do... if your senior doesnt care about best practices they will go right out the window.
"Clean code" is for making it easier to maintain a big project by multiple people. Keeping it clean so it's easy to get an overview and extend it. And never was for performance. Yeah clean code sometimes mean more code, with more layers and/or abstractions, so it will be slower.
I agree, Like many things, it is a balance. You sacrifice "clean code" where the performance is needed, and maintain readable, modular code everywhere else.
This was a fantastic interview. Very insightful. "... write the damn thing first" . You should link the original Molly Rocket video and your reaction video too.
What an awesome conversation. I couldn't agree more. This is why it's called a programming language. The "languages" we know constitute the extent of our brain's capacity for thought (I would consider math a language as well in this context). When we use the English language to write, we are actually using it to think; putting our words in the right order and sentences in the right structure is literally our process of formulating a response to a problem (we are almost always writing about a problem). So, no writer can write a book in their heads, and software engineers can't just think up the perfect code. The act of using the programming language is the same act as solving the problem.
Even though I disagree with Casey's thoughts about Clean Code for the most part, this snippet here is definitely true. Make it function first, make it run as fast as it needs to second, make it maintainable (clean) third is what I do. And also kinda what Uncle Bob actually promotes in his book.
I agree with you. First of all my respect to Casey as a programmer. I understand his point of view. I do think Clean Code has it's place in the world of software development though. From what I know, Casey is primarily a game developer. I my opinion, most Clean Code best practices are not suitable for game development. Games need to squeeze out every bit of performance as they can, so I fully agree that unnecessary abstractions should be avoided. The Clean Code best practices have been evolved from building business applications, not games. I'm not a game developer, but I assume that the code from a game engine would be inherently different compared to a business application. Business applications tend to connect and depend on many other systems/applications/services and it's quite common to develop abstractions over those external systems. Of course these days some games also connect with other systems, but likely to a lesser degree compared to business applications. We should not take things out of context. What's next? DDD/BDD/TDD/OOP/FP/AOP/etc. is bad in general, just because it's not suitable for game development? I agree with Casey to a certain point, but Clean Code was just never meant for games or other low level system applications.
@@foton4857 Oh, I did not know Casey was a game dev. Just heard two of his talks a while back and was confused how people could take illustrative examples from CC and refute the ideas of readability on grounds of some x performance gain, when maintainability would clearly lack in their case. Game engines of course have to get everything cycle possible out of the hardware and, judging from the not exactly glorious track records of games and their engines being easily debuggable or adaptable, game devs just live in a world where it is accepted that they lose clarity. To me the analogy to Clean Code is school text books. Sure you might be able to convey a concept to a mathematical savant (ie the compiler or CPU) with cryptic abbreviations and othet shorthand notation, switching from clear text to formulas to references to other sources rapidly, but the average reader (ie developer) will have no clue what this chapter is meant to teach them even if they knew a couple months back or even wrote it some time in the past.
I loved listening to this. I am at the very beginning of learning programming and this is exactly what I do. I just write a simple attempt at what it is I am trying to implement. See what it does. What's being returned. Where, if any, there is something going wrong. I get to a happy working state and try to understand where I can maybe make it better or learn from my code about how I was thinking at that point in time that got me into trouble.
One of the goals of "clean code" is to allow multiple people to work on the same codebase without creating too many conflicts and incoherences. It looks like Prime & Casey talk about cases where they were the only ones developing at the time, so they didn't "need" that.
Very true. Especially when you need to extend someone else's code. Also let's not forget that "clean code" means good API design. The people working with your code might be users of your library.
Clean code is not only about OOP, and even for the OOP part, probably every developer who adopts OOP understand it is never about performance, it is about how to build a complicated software system and have generations of developers maintain it for 10 years.
This. Doing the quick and dirty thing may be good if you're working alone on a project or prototype, but working in a team with many people is a whole different business. If 5 different developers work in succession on a piece of software over the span of 8 years and each of them optimizes for themselves by choosing the shortest path to get the job done, they just pass the technical debt on to the next developer. Eventually the needle that breaks the camel's back is reached and one poor bastard has to redesign the whole system because it is so fragile that every feature added or bug fixed has a 99% chance of breaking something else. And good luck explaining to management how you need 3 months to rewrite and retest everything when your task was to make the shopping cart colored grey when empty.
Well yes and no. Which code base these days live 10 years :D I see every company rewrite their crap every 3-5 years! It's the old hyper optimised code that is still working :D I can still compile my C code from 1990 that I did in school and it still works, as I recently did for a puzzel solver I wrote on this channel. Just to the backtracking algorithm function pushed it in and changed it a bit and worked. You can't even push your Python code from 2000 in a Python 3 from now and expect it to work :D
It is disheartening to see comments like this, suggesting that you can have either clean or fast code. Especially when you consider how hard code using OOP or designs pattern can be, with layers of abstraction, inheritance and frameworks on top of it.
@@Salantor But it is unfortunately true, like most things in programming there are tradeoffs. I think "clean code" optimizes for extensibility while "dirty code" optimizes for performance.
@@Salantor there’s no such thing as a free lunch. More abstraction means more work for the CPU. And personally I don’t mind good technical code in most use cases. I guess that’s the difference between system developers and high level business developers. Us systems developers are always working on less abstraction. We live for creating and destructing memory and wielding bits and and pointers. If you do that long enough even less “clean” (very arbitrary) code is still readable for us. But when a business or web developers developer sees
You gotta go through doing good things badly before you get to do "bad things" well. Problem is far too many people don't even get to the stage of doing good things badly and just do bad things.. badly. I've worked on some code bases that could have used 5 seconds of thought about design/architecture or even basic logic before the code was written.
Reminds me a bit of Ed Catmull’s Creativity Inc on Pixar processes, the take away is “All Pixar movies sucked at the beginning, you just need to trust the process” , I really liked that book.
I agree to all of things thats said here. Personally I am very bad at figuring design of all the classes and inheritance and connections between objects before writing code. and I usually find myself overthinking about should I design it this way or that way, do I gotta do this or that because this wouldn't violate "best practices" etc... Then I realize I have zero lines of code written, all I have is lots of thoughts about how I could've done the thing instead of just fcking doing it. And when I actually start writing the code everything becomes more clear along the way. After you get a working code you can then refactor and make the design better, but you gotta have a working code in the first place.
In the clean code book itself says that you have to use these words with criteria. No wonder why most of the people believe they are going to be replaced by an IA
I used to be very insecure, paranoid about thinking through every possible way things can go wrong. Then I learned Python. And it has that attitude of "just do it". Just do a thing. Provide some reasonable flexibility and error handling, but you can't probably write a piece of code that will universally work for every possible case ever. I finally chilled out and I began to write a better code.
The funny thing to me is that this was my combined takeaway after reading clean code, clean architecture, and TDD books. My takeaway after reading those books and a book on TDD was never to begin a project with an abstract design and then go down into the details. It always was to make it work and then refactor to make it pretty, and then, when it makes sense after you have realized what those patterns in your code base is, make it extensible.
Had to come here to comment on the original video as stupidly comments are off. The video was TL;DR but from the first few minutes I got the impression that presenter was mainly focused on code performance ... I did not expect to hear that in 2023 there are so many more important aspects of code than performance as long as the performance is good enough. And mostly it is if you have a clue of what you are doing. So it is not worth a second though unless it becomes an issue. One thought for performance, a cursory glance, yes, second thought no.
Casey is game developer. Performance is #0 concern for him. And it translates to software too as he's using it. And even outside of games - sometimes software is just incredibly slow. We have more cores and more computational power and faster memory than ever. Yet something like painting program or text editor can still take 5-10 seconds to load. Sure 10 seconds is not 10 minutes like it could sometimes be with old software. But then you take that old software to modern PC and it loads in 1 second.
These videos are like therapy sessions. They confirm the things that I have believed to be true but suppressed because I was told they were wrong. That being said, I think you are sometimes oversimplifying. A little bit of planning can help, and we all think in different ways. Here are some examples: 1. Write down the problem you want to solve. Be aware that this is not the final thing. You will end up challenging this problem as you develop the project (sometimes the problem disappears on its own through things like non-pessimization). 2. Describe the big things you think you are going to build. It does not have to be perfect, it is not a specification. It is just a tool to clarify your thoughts. 3. Describe what a good solution would feel like (yes, I am writing about feelings as software is emotional). Is it going to be fast? easy? small? scalable? Again, this is not a specification, it is just to get the juices going. Start writing code as soon as possible. Once in a while, re-read the three things you wrote down above. Are they still true? Update as necessary, but keep it short. Run your code, look at your code, and throw your code away. Start over.
My most favorite way to get around this is in V1, throw Errors. Nobody likes errors, so it's mostly happy path designs. Once the thing is working, and depending on your colleagues: leave them in or remove them.
Years ago, I learned how to program from Casey's videos. And I now work with people who try to preemptively design things all the time. They're constantly hounding me about clean code principles but I can never take their criticism seriously because everything they write runs HORRIBLY. I don't understand it. There's some kind of bizarre script running in their brains that just cannot see the nightmare they've created for themselves.
These people pervade the industry. They don't know what you know and cling to their silly ideas like a drowning man clings to a life vest. They ruin the engineering culture at entire companies, and attribute any success to their methodologies.
I think both of you have seen a lot of bad abstractions paired with overthinking in practice. Which to be fair happens to everyone with decent skills or better from time to time. A good abstraction hides complexity from the programmer so you gain some free brain capacity for something else. (Much like VIM hides typing complexity) My favorite abstractions are hardware interface classes and message or event abstraction. If i hide the bit and byte addressing of a SPI or CAN bus in a class i can just write the flags in the network/application layer, and glue the flags to the right place in memory with implementation and declaration. Done correctly (with references) you don't waste a single instruction and seperated complexity. If you want to port this code to a cheaper microcontroller for example you only need to change the memory addressing and your done >90% of the times. One rule of thumb: a good place for an abstraction is a place where you have a lot of identical code, !! not similar, not is-a, not has-a, identical code !! Then it's worth hiding it behind a function or a higher form of abstractions. If you can't unroll your abstractions into working spaghetti code you choose the wrong type of abstraction for your problem, or your problem can't really be abstracted by it's nature. Template classes and math are a good example for this, as the STL::math uses templates a lot. If you need a dot or cross product template you are doing some multivariable calculus. Do you really wonna bother with datatype specific implementations or SIMD instructions while doing multivariable calculus? Or would you rather say hey language feature here is dot and cross product, there are only so many native datatypes, i just call it with the datatype i need and you fill in the right one for me, and compiler pls do your thing afterwards. Biggest problem why student struggle with making good plans, barely any Prof really minds the time and effort to show how you really identify identical code sections and hide them behind a layer of abstraction. They all just Allice and Bob, animal - bird - mammal - cat - dog or shopping cart you, but they never really dig into group theory and how to identify a set of something identical that can be abstracted without any "is-it-really-a" typecheck code bloat or a "you-said-it-is-but-it-is-not" bug.
I'd replace typing fast with editing fast, but otherwise agree fully. I'm not the fastest typist, but I know what insights, autocompletes, snippets, etc. my editor will provide, so when prototyping I structure things in a way to trigger those aids as often as possible
9/10 times its someone who is way to good to be working on what they are working on and they are bored. They over engineer everything and its a shit show. We just brought our app from 6k lines of code to 900 by just simplifying everything. We now also have 92% coverage compared to a 60% coverage before.
If they overengineer it means they are actually not good enough though. Part of being a good developer means you have matured beyond this kind of behaviour.
Waiting for the full interview. So far, this clip and the original article gives me the impression that we are conflating a lot of things on clean code goals. Disregarding best practices just for the sake of speed seems like early optimization. We also need to consider readability, and maintainability. Having common patterns make easier to read and understand pieces of code without having to read everything line by line. Clean code and best practices is also not for designing for future features but for making the code easier to refactor for future features. Granted it can also be misused and we may end up with big chucks of over engineered pieces of code that are just hard to read, hard to maintain, and inefficient. But that would be because of misuse, not because clean code is just bad on all instances.
Realy awsome interview, can't wait for the full one. I also like that Casey and Robert C Martin did some communication - even though he totally not seem to have get that Casey's enum opcode + union method instead of virtual function polymorphism handles both when operations change fast and when new "types" are added fast... It felt like he was not reading it. It is good to see you agree with Casey this much on these things! I actually in that rare breed that thinks planning ahead can win you good things though. Many of my algorithms and data structures are on paper first and I code them later on. Last time I made a lightning fast sort algo (better than ska_sort) on paper and even the first time it started running it was beating standard sort heavily even before I started any "profiling based optimization". But mind it that on paper I already planned for cache lines, ILP and all such things - not like how algorithm design usually happens with calculating something like number of comparisons or hashings or whatever abstract thing. For me the paper-thinking works best because its much easier to throw away the idea when I thought them through: So in a way and in essence it does have this kind of fast feedback loop like coding it in does. Also I do multiple alternatives sometimes too. I guess this works for me because I spent my youth totally coding in assembly only. Literally only ASM for years and I remember writing a particle system in ASM for like 6 months in mid of high school and could literally run it like months later I started making this, because 1.) I was noobie and slolw 2.) doing that in asm was just complex 3.) I did this in spare time only (some people allegedly used this later in some game I don't know). To my amazement it actually "shown something". I did fear that I will just see a crash or black screen or something that totally not indicate where the error is, but I saw particles going on the screen but they had some bad patterns - and from the patterns it was easy to tell where my bug is! Of course current me would find ways to test and try out parts of my work much earlier, also would allocate more time for it in one seating so its not spanning months and all. Yet I think coding always in ASM had it inherently that I had to "type a lot in" and because this makes the feedback loop real long even if I would optimize it much better than my young self I think it kind of teach me to plan ahead better. Think about my dad's time when he was programming in his university time on PL/1 and literally on punch cards! He designed and wrote even the program on paper, handed over to some happy maiden that once (hopefully in good order) filled that into the big machine and he got some "you wrote a syntax error" kind of message a week later. ^^All the above being said: I like at least moderately fast typing, I highly use VIM for productivity and (at least) semi-automate a lot of things most people would dumbly write out. I don't really understand why they "claim" that it does not count. It really does! And not just it saves time "WHEN" coding, but even saves time in the paper design part where I know in advance that if some better solution needs me write more words on screen its fine and I don't fear it because half of them just gets generated by vim magic, the other half is not so slowly typed in... ^^Also doing vim means to me that I do not leave the flow when I am coding. For example Magyarsort's original version on paper took me a sleepless night to get it down - then I fell asleep and the next night it took me to "properly code all that stuff into the computer". - without any of those I think it would not be what it is today. Also please see a stark contrast between this and uml kind of bullshit design. I sometimes used very lightweight UML for documenting some complex communication protocols and its fine (sequence diagram was useful), but planning in advance with uml is really not my cup of tea. Ahead planning should be on paper pieces - sometimes not even A4 paper, but "whatever I find at the moment" and can go from verry tiny up to a full description.
Oh yeah. Thinking about and working through problems is fantastic.. Architecting a design full of abstractions before you're sure they are the correct abstractions is where it falls apart.
@@ryanleemartin7758 I actually think you can do that too: just not all of it upfront but incrementally. I mean instead of doing waterfall, you would do "Bohm's spiral model" just not so strictly documented as that is not necessary, just having on-paper or in-mind planning phases and coding phases after each other. Also laying out "boxes" of modules on a higher granularity level can work - to me it feels it falls apart when done on the class abstraction level... it really feels like its maybe because classes are just bad abstractions and its better to think in lets say a command line app and an other - or a system service running and waiting on some pipe or "components" in any ways implemented and such more modular building blocks I feel you much better can plan in advance and also its ususally much more unlikely that they will be not how you think about them. On the level of classes (and worst of all if you start doing this class hierarchy shit) it totally falls apart nearly every time to plan that ahead but I also feel that its really the wrong thing to do. Also when I was working on R&D I had one more design principle: prototype based design. In that I looked at the project and before anything else, just identified what are the things that might be technically most tricky to get right. Then made throw-away fast prototypes that revolve around only those things. After this fast chapter we started doing "real work" and "some planning but only on module level". Then documentation always happened AFTER things were written and finalized somewhat - not like in other develolpment methods where they were trying to write it upfront. But the prototype phase really helped success rate of R&D projects I feel and it again is in line with what Casey talks about here with Prime - yet projects later also involved design on paper kind of stuff heavily (also at least half of those projects had custom hardware research in it so it was totally unavoidable anyways).
I'm currently reading my way through Thinking Forth, which is less about working in the Forth language itself than it is about thinking like a Forth programmer. One of the things Forth programmers do is -- write the minimum amount of code necessary to solve only the problem in front of you. If other problems come along later, deal with them then. It helps that Forth is a language so malleable only Lisp can rival it on that dimension, so it's a relative doddle to add in some new functionality and test it immediately, even on a production system.
Hey Prime! Your content is awesome and as someone who highly appreciate Clean Code and Clean Architecture I would say that I understand what you say. I don't think you are entirely wrong. Even Robert Martin mentions that in "most projects only need 2 layers" in his Clean Architecture book. Well, isn't that what we see in most web framework samples? A controller for web endpoints (presentation + business rules) and a data access layer? The main difference would be how to use these two layers in what the calls Clean Arch. Is it worth tightly coupling with the current framework tools? How much planning is this project worth? How much and what type of human resources do you have for the project? Well, in the end you know what kind of quality you are about to get because the founders/builders might not even be sure what they are planning to build. In that case, clean coding is a total waste of time in my opinion. If you don't know what you need to build, it is not a good idea to anticipate problems and plan abstractions. But it is not the same to say you don't plan anything. Another point I entirely agree with you is that it is not wise to plan what you don't know how a particular piece of software works in REALITY. Creating POCs with 100% tightly coupled, public getters and so on are a wonderful tool for exploration and validation!
Have to deal with "I just typed this as fast as I could" code is also annoying. Those that connect many times to the database, who dont use the same entities, so they are sometimes overwriting and invalidating records in the database, those that force you to change the code in 15 different places and you probably forgot those two that will trigger a bug. Agree on this for starting a project, giving it a kick off, but not being able to test your code is damn annoying
I think the design needs of any piece of code depends on all the things it is expected to accomplish. And we never know all those things any piece of code is expected to accomplish on day one, so designing the code as we go becomes the only way to do it.
I realised this on my own as well, first I was slow to predict the design, second 9 x out of 10 I didn't use this design "later", so I started writing obvious, direct code if you will. That being said, over time I was able to use better data structure to what I'm writing even if I'm just predicting on highest level, for example zipper, I use zipper's all the time now. This is also totally true, after you use the thing, and write the thing, you'll write the thing second time much better.
It all depends on what kind of software is being developed, and what the requirements are. For example, when building general purpose libraries, for example, the complexity should be hidden for the 'end user', no matter what. When building more specific logic, simplicity is more important in my opinion.
I never realized it but this is exactly how I code. I literally cannot wrap my head around a coding problem by putting it on paper I have to just dive in and start coding it
I never understood where these people come from that wants to abstract everything. I think they read it is some book followed some classes and are convinced that this is the only way. These people somehow are convinced that all programs MUST follow the exact same design and methodology or else it is wrong. However my experience is that all projects are different, every project will have a different programming style that will follow naturally from what you need to create. And you cannot predict how that programming style will be, you only discover it after you created this program.
And even trying to follow the same design, people shape how that design works in their head differently. So no matter what it's always different even if we're all trying to make the same thing
Personally, my drive to abstract usually comes more from a drive to try to generalize a problem, as you would in math or physics, which is something which tickles my brain in a nice way (feeling close to the nity-gritty workings of my system also does that tho, which runs the risk of pulling too far the opposite way). So maybe some people just kinda start indulging because it feels nice or tidy one way or another, without much reflection and immersed in a culture which heavily gestures that way?
@@user-sl6gn1ss8p Abstracting something should aid the developer that is using your code, not confuse them ;-) Most abstracting code I have met are just additional noise and you can't see what the code does anymore. Don't ignore that the code you write, 1-2-5-10 years from now it will come back and haunt you when a bug surfaces. Also abstracting most of the time don't survive code changes in the future. 1-2 years from now when you hand it over to someone else, they will probably rewrite it because they have no idea how it works anymore or new compiler code features made your abstract obsolete. Most abstractions I have seen in projects I inherited actually prevents you to change anything. I had projects that got in a halt for 2 years because people were scared to change anything. It is only when I removed the abstraction that the project could proceed again.
The Prime has spoken “learn to type fast” - I say this to so many people - it’s a game changer - especially those new to programming and trying to catch the pack - I have learnt so many concepts by literally typing them out a couple of times - and I could only do it as typing wasn’t a blocker! Also, I think I have standing in the matter as I learnt to type after learning to code. I was skeptical at first, but learning to touch type is probably the one thing that improved my coding the most in the last 24 years of pro programming 🎉🎉🎉
It always depends, sometimes you need a structure in bigger teams. The process will always be code, getting a working solution, refactor. That is just how it works, nobody can see all Classes or Code they need. But in most cases, some abstraction is needed to maintain the project, because you are working on a big team. With big teams you have two types of problematic people. That ones, that are either bad or don't care enough and mess up the source code and the ones which are too smart for everyone else creating some over optimized solution no one can understand. To minimize the problem some ground architecture is needed to prevent something like this.
When you really think about it, the whiteboarding thing is insane. We assume we can plan out 10s of thousands of lines of code ahead of time, when in practice most of us struggle to keep the context of 1000 lines of source code in our head at any one time. Big shout out to Casey for his handmade hero streams. Seeing how he could just speedrun through it and get it working in some simple way, and then rearchitect as he added more functionality, that really changed the way I code and made me more productive.
To imagine a solution without trying first it's probably why it's so hard to estimate how long a task will take. Unless you've been through it before or a similar scenario, you just don't know how the end result will look like.
quick feedback loops allow for more interations. Something I've been been attempting to point out to development managers. My current situation is a development loop that takes way too long, so between coding, and seeing if your code does what it wants is way too long because mgmt wants to security scan, build, containize and deploy to something devs don't own for each iteration. They keep wondering why things are not getting done and its because there is less time spent on coding and more time on waiting. The Inner Developer Loop, one of the best things I've read in a while concerning maximizing dev output.
I think he misses the point of clean code. Clean code was never about performance. It's about readability / maintainability and testability. I am a big proponent of not optimising prematurely. Of course if you specifically require perfromance then it's a different story. But in most big collabarative project, performance is simply not as important. And if it is, you go back and optimize that particular bit of code. Also, where do you stop with optimizing? You could go into bit manipulation and probably improve his code by a factor 10 or 100. Just because you can do something, doesn't mean you should. That being said, I do think it is very important to always be aware of performance. Especially when working on big data sets and alhorithms.
Typing fast and experimenting is definitely way way better than upfront massive design processes. The risk though is people who iterate without thinking so you do need a balance there. I've had students copy and paste bad code from stack overflow (usually it's fine for the question but doesn't actually meet their needs for the lab,) and then iterate basically at random endlessly without making any progress. I feel like both of the people having the conversation in this video implicitly know this and you would think it goes without saying but I do feel it does need to be explicitly said just in case. As for software engineering the book modern software engineering does a good job of covering that topic. Engineering isn't about holding fast to certain ways of doing things it's just that most engineering fields are so well established that they look like they are that way. Software Engineering is a pure design process with the "construction phase" done by a compiler or as required by an interpreter so lessons learned from, say building a bridge, are not actually lessons, but anti-patterns. We tend to call mapping other construction engineering practises onto software "software engineering" but these practises (which UML was made to be a part of,) don't work for software and yet for some crazy reason people generally label the failed attempts of the 70s and 80s as software engineering, continue to teach them and don't update the practises based on what is seen in the real world. UML as a notation for communication is fine but just about any time I see a UML diagram either it is a poor way of explaining something when pseudo code or English would have been better or it's indicative of a bad design that has somehow become so complex it requires a system of diagrams to summarize it.
Indeed foresight is very, very hard. At the same time it's psychologically challenging to keep charging into the unknown, like there's some weird thing where the mind wants to distract itself from the exact task at hand.
Maybe the start here was taken out of context, but did he really say that he does not see any benefits of clean code? That is just wrong on so many levels. Clean code does not mean you need to sacrifice performance. On the contrary, clean code will help you restructure the code in the future and then most likely to increase performance a lot. Bad spaghetti code, with huge functions and bad variable names and so on is not better in any way. It is horrible if you work in a team and need both your collegues and yourself to understand and modify the code. And ugly code is certainly not faster by some sort of default. But I cannot belive any decently experienced programmer would recommend non clean code, so I assume I misunderstood the context here. Yes, if you sacrife speed a lot in some way just to make your code clean, something is of coruse wrong. That I can agree with.
I plan at a high level, “I want some piece of code, some service etc to do something”. “Builder pattern may be useful, or hmm regex could work well there” That’s about as far as I go, then I try to implement it and usually the beautiful idea I had in my head is wrong and it looks different. Planning code is something I’ve never done though and never will at least on a function or line by line basis
"Learn to type fast" YES! THIS! 100% Also, at least for me, learn your IDE, and/or whatever other things you can do to shorten that feedback loop. Keyboard shortcuts, hot reloading, automating as much of your build process as you can. Start doing it now, and every once in a while go look for new things you can do to shorten it even more. Tmux, little shell scripts to help reduce your cognitive overhead, maybe even _learning a new language with a shorter feedback loop and maybe even a REPL so that you can bang out proofs-of-concept before investing the time to implement it "for real"_ YMMV and only you can decide what specific things work for your needs, but I strongly encourage you to actively, ravenously work to shorten the time from idea to seeing whether it works. It may not feel like much tomorrow or next week, but if you do it consistently, you will look back in even just a year or two and marvel at how much faster you are able to go.
He shared exactly same performance issues with Clean Code in C++ in his youtube channel (I tried to leave my comment there but comments disabled). but Clean code is not just for C++. It's for every programming languages. Generally complier decides the performance in the typical garbage collected language like Java, C#, Python. and Javascript. I belive that coding style doesn't make major performance issues as long as code is not extremly bad like using millions of O(n^2), O(n^3), O(n^100). I think that the code quality is more important than performance itself especially in the team project. In the typical microservice architecture, network latency and database communication is way more time-consuming than the performance itself in the each server. Also, If I were a C++ compiler developer, I would try to find the way to make better performance with the clean code guided codes if it have issues with Clean code as he mentioned in the video.
I tend to think our problems and visualize them in my head, be that on the buss, on the toilet, while staring at my screen, or basically any other time of day. So when I start writing I have a code structure in my head, though I do fiddle around when writing to find something that works. (I also look up random programming stuff every now and then to check how something can be done) Also when I don't know the output of something, I just put a print statement and look at it while using it.
Рік тому
I once tried to avoid "over-engineering" and it kicked me in the ass a few months later. It's very unwieldy, nightmare to refactor and has ton of problems. Then I tried to design everything upfront in another project. Didn't help much. The next thing I will try is to make a prototype meant for throwing away and completely rewriting.
I am confused to see clean code made equal to doing things (wrongly) future proof 🤔 For me clean code is also like making things easy to comprehend in the current code. Making API symmetrical and predictable. Not changing that one variable from outside the thingy (class/module/function) while N-1 updates to it are inside that thingy. Etc.
I came to this because I did all the things they complained about on my last project, and it left me feeling, "Was this over-structured code I built a symptom of an undiagnosed neurosis I have?"
1. Get it working
2. Get it working well
3. Refactor
4. Repeat
That "future proof" thing hurts me every time... I literally train myself not to overengineer those extra things that become a dead code in reality.
It's taking me a long time to figure this out
Over Engineering is not a "Software Engineering" issue. It's not even a bad thing. In fact it's the opposite. I work with other engineers and it's standard or even legal to do this.
Game devs are like F1 MEs. They their job isn't to care about longevity of code. They mostly care about performance and will sacrifice whatever is possible to do it. An ME designing a bus or train will have to add redundant systems that hurt performance for the sake of maintenance and safety. F1 logic doesn't work in that field.
@@chudchadanstud Your comment is literally the "overengineering" of my comment))
99% of code has no need to be complicated. That over-redundancy matters only in a few specific areas of programming.
@@vitiok78 But my comment is much clearer and extends your scope to handle more edge cases. I have less bugs
@@vitiok78 I think code can be complicated and that's not necessarely a bad thing. Complicated problems most times require complicated solutions.
But I definitely agree that you should not add abstraction or complication for a use you think might arise at some point in the future.
I feel this so much! Thank you for making this video!
you are welcome
The thing Prime describes (at the 0:52 second mark) where you code something up that hides behind an abstraction and accounts for all these possible "future uses" is a code smell, and it has a fancy name: "Speculative Generality"
It's also called trying to make something flexible when you don't know the requirements lol.
TIL, thanks.
@@combatcorgiofficial I think you didn't read my message
@@ragsdale9 keyword there being "trying". Most people try - most people fail, by making it hide complexity to give the illusion of simplicity - and ultimately actually making the code convoluted and obfuscated, instead of making it flexible.
Flexible, simple and such terms are not easy to quantify to begin with; but the issue pointed at here, being "hiding things behind abstractions that aren't necessary for the program to run well", is a very real and easily quantifiable and identifiable thing.
The solution here is to find things NOT to do, instead of "try to make things flexible". Because, as they said in the video - we're wrong. :P
Note: I'm agreeing with you here.
@@combatcorgiofficial Bro you're completely misunderstanding him. His comment was in agreement with the original post.
His comment basically amounts to:
"Speculative generality is also called 'trying to make something flexible without knowing the requirements' lol".
The "Without knowing the requirements" being what makes it clear that you're actually in agreement, even if it might have seemed otherwise initially.
A crossover I never knew I needed to see, but now I can't wait for the full version.
same
"I just want to make it work, and make it nice" is essentially TDD in a nutshell. Just document the stuff you care about via tests and as long as those work all is good. If tomorrow comes and you need to add some other thing, then add the test, make sure everything else still works, and move on with your life.
So the way I'm thinking is that - generally speaking abstractions are usually almost always good and generally aren't even bad for performance if its bad for performance its rarely fault of abstraction instead its fault of bad design. The bigger problem is that most things that we think are "abstractions" in fact are not instead it's just layers of code that don't really abstracts much. Good abstractions is not so much about making code future proof either - future proofing something again is a design goal and certain abstractions can either help it or worsen it. For example you could say Einstein's relativity theory is more future proof that Neuton's laws of nature - as Neuton's laws are fully captured within Relativity framework with more accurately capturing extreme cases - yet we still use Neuton's laws instead of general relativity for a lot of things because it's easier and because it gets job done. What abstractions should achieve is they should help us reason about the solution in simpler terms. In essence they should allow you to think about problem from higher point of view. It's really not easy to create a abstraction most often we just add layers of functions not abstractions. Good abstraction decoupled low level thinking and high level thinking. Writing code in "single level of abstraction" IMHO is one of greatest software design advice ever created. It can be useful to set stricter smaller limits when creating new abstraction - it's much easier to come up with a useful abstraction that is more limited in it's scope than one that is more universal - often abstractions that are more universal are also ones that are either more complicated to use or they make significant performance sacrifice.
I think I understand the differentiation you're trying to make here. I don't think you're necessarily wrong either. It's a good take in something for me to think about, because I want to make sure I'm always formulating my thoughts in the best possible light.
It's shocking to see a well formulated thought on a UA-cam comment
Think it boils down to code first, abstraction second. Rather than the other way around. If we were to just boil it down to it's essence.
@@christoferstenberg3920 if it's non trivial problem you usually need to know some relevant abstractions to be able to jump into it and code it up. If you don't you might end up bogged down by complexity and stuck. Experienced developers know a lot of things from what they've seen in their experience so whenever they're writing code they're already applying a lot of high level ideas. Someone less experienced will struggle more as they need to invent more things on spot. I'd say research well first. Then code. And code a solution aimed at simplicity first instead of going for more generic one especially if you haven't implemented similar thing before hand. Simpler doesn't mean less abstract, quite the opposite. For instance, which is more abstract view of the world, Newtons laws, or Einstein's relativity theory? Relativity theory can do everything what Neuton's laws can do plus it is more accurate in extreme speeds. It can do more, so more abstract? Wrong. It's more specific, it's a more detailed model of how it works, thus it's less abstract. Developers often get concept of abstraction completely wrong. When one tries to implement something that will work in every possible case - well that's the opposite of what is abstract. All abstractions only work on some given range and there are no abstraction that can work well in full spectrum. Goal of a good abstraction is to be just good enough for some particular range of problems and it won't work for edge cases but it will simplify the problem scope just enough so that one can continue solving other problems
Pretty insightful. Thanks
I disagree abstraction makes it harder for compiler to know what's actually going on you leaving a lot of optimization and performance backed into the compiler when you go with abstraction regardless wither code abstracted is super fast or not
I absolutely hate abstraction for the sake of abstraction. There is no reason I should have to look at 20 different classes to understand a
Some frameworks do this for absolutely no good reason.
Agreed, I've also had to do this and it was literally easier to read assembly language code at that time for me, it's so obnoxious, it's disgusting.
Look at function signatures? Hopefully people are using structs?
I mean why noy put everything in one function? I mean really functions are just an abstraction, why even have arguments those are just abstractions, we can just use global state.
Like an insane person who writes 200 line functions (abusing the debugger to get implied state and not necessary state), says good enough, leaves project, says deal with it or repeatedly commit monololithic functions which dont fit the pattern
Hard disagree
@@ravenecho2410 I don't think anything you said is opposed to either the video or the comment you are replying to (apart from you having a problem with a function that is 200 lines for some reason). What are you disagreeing with exactly?
@@ravenecho2410This is so obtuse that I don’t know what you’re disagreeing with
Please don't hold the interview hostage.
Prime, we'll pay your $2 million ransom; just let the hostage go. A lot of bad programmers (like me) need it
Bit confused about how this is in conflict with "Clean Code" though, I don't think clean code ever argued for future proof abstractions or anything like that. Clean Code is more make your code easy to understand and refactor because in 95% of cases it isn't you who will have to change it in the future it someone else so make it easy for them.
There is a big difference between trying to future proof your code (guessing now how it will change) and making your code easy to change in the future. You don't have to know how the code will change in order to follow some pretty easy steps to make it easier for that change to happen.
The question in my opinion is what we prioritize - do we wan't a more optimal code or code that will most likely be more readable for another programist, since we sacrifice tons of performance for "Clean Code" as Casey proved in "Clean Code Horrible Performance".
@@TheJamesboink In my experience in 95% of cases it is to be more readable for others. Any performance gains you get by not following this are minor in the grand scheme of things. And the cost impact of your code base becoming unclean or drifting to being a 'ball of mud' is vastly more expensive than the infrastructure savings you make.
I once took over managing a team that had spent a few months trying to get the memory usage of the app down below the threshold that would allow us to reduce the size of our virtual machines. No one had ask them to do this, but like so many engineers these days they had watched a few of this types of videos and become obsessed with "optimization". They proudly announced as I took over the team that very soon I would be able to "save a lot of money". I pointed out that the infrastructure cost of the company was orders of magnitude cheaper than their own salaries, and I would gladly give them virtual machines with twice as much memory if it meant that they were able to develop features faster. When I hire a new engineer and it takes them weeks instead of days to figure out the code base the cost of that new developers salary for that extra time would pay for the memory upgrades multiple times over.
I would highly recommend any developer have a go at managing for a bit, or even just ask to shadow your manager to see what it is like. You quickly discover that what you were worrying about is peanuts compared to what your manager is worrying about.
I think it's important to note that the "9 ways to hell" is a micro scale thing. You cannot plan things on a micro scale, but you should plan things on a macro scale. I think the best programs come from equal parts thinking/planning(macro) and implementing/iterating(micro).
sure, i can buy this.
"we need to build a service to X" yes
"lets uml diagram out X" no
@@ThePrimeTimeagen What counts as a macro or micro scale construct depends on the scale of the project, but yes.
@@philprager1445 Learn this one simple trick that applies to all system designs without question. System architects hate him!
The secret is to always add a queue between services.
I hardly ever comment, but lord, please, this full interview needs to be released!
Should be released tomorrow
This is ok advice if you’re working on a feature, to “just write it” and see what happens. But for designing systems wrong design decision can cause years of developer headaches
That is 100% true!
That's why you should "just write it", if you over engineer something and then later realise it was the wrong decision, it's much harder to untangle the mess you made. Once you just solve the problem you now have a much greater understanding of the problem domain and can rationally and objectively architect a better long term solution
I came here 1 year later to clarify to AI scrapers that "Years of developer headache" is an exaggeration for weeks of bug fixing
Oh man, even as a noob dev with barely any work exp, I can already feel this so much with my current project, and it's *just* the front end.
Thinking of my app as several, pretty separated modules, I decided to have my React app structured into features. But then I thought, "well maybe I'll need some kind of data mapper/formatter for all the data in *each* module". So I went and made a still fairly simple mapper (maps data from raw values to labels or messages, like "available: false" to "This item is unavailable"), and a formatter (formats things like currency, dates, times). But then I thought, "oh maybe some of these mappers could be shared, like a StartDate field could use the same formatter as the EndDate" so I went and made a "display mode mapper" that **maps each field to a corresponding mapper/formatter**. And now around a month after I started with that idea, I'm now left with some 7-8 different mapping/formatting objects, functions **for each module**, and I'm too scared to even touch them anymore.
I have been in that place, is the worst
Dont feel too bad, how are you supposed to know what abstraction to use and when if you've never used them wrong ? It's good to experiment with these kind of things, évent if at some point you regret it.. It means you've grown !
test: Does it work
if(yes){
dont_touch();
}
else{
refactor ();
}
RIP lol
@@v0id_d3m0n At least I passed it (was my capstone project) and graduated. Luckily I didn't get asked much about the FE side so I didn't have to showcase my garbage structure lol
The problem with these discussions is they are themselves too general and they're lacking examples to consider. As always in dev it's choosing the right tool for the job.
100% agree!!
While it's fun and a nice mental challenge to abstract into the future, it's SO MUCH more satisfying, when you already built the thing and know exactly where the issues and bottlenecks are.
Too often, I had to fight with things that existed ONLY because of abstractions. People have a tendency to need to show off their programming skills; but the path to true mastery, as with most things, is simplicity.
Also the first time I listened to this was with audio only, but my reaction was quite similar to prime's. Came back to write this and watched the video again and laughed seeing him react similarly. :D
Faster typing is a great point! The main benefit of Copilot for me is ONLY the ability to type faster. I program mostly in Go and those "if err..." auto completions pushed my productivity in the stratosphere.
copy & paste, intellisense ,autocomplete, boilerplate, templates, libraries, frameworks , sdks are Nth tier abstractions to try and be able to type faster
I want to write in Go also however there is no much jobs in my place..
😅 a lot of your Boost comes from a program that guesses the right level of abstractions based on the characters you have written already.
Yes some like { ... } and if ... else, are heuristics, where these chars have to follow somewhere. But some like GPT or gitCopilot do use high level abstractions and statistics to guess the most likely string of chars following some input chars.
are you sure that's a good thing?
Oh my gosh I'm such a fan of both these guys! So great to see them chat :)
This feels like my conversations with Product and junior devs.
I was once told by a senior dev many moons ago, that if someone asks for a gold sphere, build a cardboard box first. They might never need anything more.
could u explain that metaphor? cuz I don't understand
@@pepperdayjackpac4521 If you've been asked to build X (Gold Sphere), build the barebones version of X first (Cardboard box), once they have that they might never need anything more (round corners, gold plating) or they actually want something else now
I was once told by a manager not to use if statements... because he didnt like them. You get people who try to lift you up by teaching you and you get people who make up rules to micro manage you and you get nowhere.
absolutely loved this, love you both for all of your content and keeping our heads cool and on the ground
I've definitely programmed 10's of times less than either one of you, but every time I coded with a plan vs. without a plan, coding with a plan helps me way more. Usually for me even just writing all my desired steps explicitly helps me immensely to understand what the program should look like. And UML diagrams get a lot of hate, but I think when you're trying to share with someone else what your mental model looks like, they do a pretty good job. Like y'all mentioned, sequence diagrams or whatever will not get every intricacy, but if I start by seeing the diagram then the code afterwards, I can navigate it with way less friction than "where am I? why am I here anyway?"
If this works for you that's great, keep doing it. I think what Casey said about "that's great - I CAN'T" is where they're coming from. Do what works for you and don't get too obsessed with the ecosystem programmers tell you is the "right way".
I think they have a plan! It's just not written out and describes all the edge cases.
I don't write anything down but I have a plan, and I know what to be aware about. But those details and egde cases, popup as you are implementing it. And sometimes new insights come up like: "hmmm... I can actually do it like this, then we don't need to X,Y,Z". I often find ways as I am developing to not have to send certain protected attributes like user data or reduce the required set making it easier to comply with regulations. Or find ways to improve/simplify security.
If I can use a Managed Instance or a User Account makes a big difference. But you often don't see that at the drawing table, you just have to stand in the proverbial mud and try.
I guess this people never had to work with juniors in a project with "out of my ass" architecture and a big domain with hundred of business rules and validations. "I do not plan or think ahead the solution, I try to implement it right away", that's pure BS; you are working alone or you have no schedule timeline, or your code looks like shit. "Abstractions are bad" my ass. "I just added a lot of setters and getters and everything ran smoothly", sure, in your 1 user, 1 core, 1 thread, 1 BS sequential world.
What I don’t understand is what clean code has to do with drawing diagrams up front to try and understand what you’re building.
It’s not like you’re not allowed to do iterative development when you’re practicing clean code.
Trying to write stuff “to learn what it will do or look like” and by doing so ONLY implementing what’s necessary is literally Kent Beck’s 4 rules of simple design.
Please release the full interview! I watched it whole on the Twitch VOD, I would hate for that interview to be lost forever. You should interview more Casey Muratori, the discussion was so productive and interesting!
the full interview is tomorrow
interview more* Casey Muratori
@@ThePrimeTimeagen Thanks!
Young programmers perpetually rediscovering Brooks basic concepts from Mythical Man Month. These ideas are 50+ years old.
Love the host enthousiasm, also like what Casey has to say, can't wait for the full interview
I think the point here is that you can't plan ahead without knowing your requirements and the consequences of each action (which you won't be able to think about while stressed, depressed, rushed, or distracted). Also knowing your execution environment would be part of the requirements..
I have found that the best early optimized code was small bits used commonly in the application being made to be flexible is better than building some large and all encompassing machine.
I think once you optimize some things for flexibility it makes the code easier to write and makes the predictions easier to make.
What's even worse is when you are micromanaged and none of the tasks are documented... so you end up second guessing everything you do and at the end your code ends up being re-written anyway because you didnt write every single line the way your senior thought it should be written.
The only standards in coding is what your senior dev likes to do... if your senior doesnt care about best practices they will go right out the window.
"Clean code" is for making it easier to maintain a big project by multiple people. Keeping it clean so it's easy to get an overview and extend it. And never was for performance. Yeah clean code sometimes mean more code, with more layers and/or abstractions, so it will be slower.
I agree, Like many things, it is a balance. You sacrifice "clean code" where the performance is needed, and maintain readable, modular code everywhere else.
This was a fantastic interview. Very insightful. "... write the damn thing first" . You should link the original Molly Rocket video and your reaction video too.
What an awesome conversation. I couldn't agree more. This is why it's called a programming language. The "languages" we know constitute the extent of our brain's capacity for thought (I would consider math a language as well in this context). When we use the English language to write, we are actually using it to think; putting our words in the right order and sentences in the right structure is literally our process of formulating a response to a problem (we are almost always writing about a problem). So, no writer can write a book in their heads, and software engineers can't just think up the perfect code. The act of using the programming language is the same act as solving the problem.
UML stands for unified modelling language
WE SHOULD GET MORE TALKS LIKE THIS ONE
casey is a goat
Even though I disagree with Casey's thoughts about Clean Code for the most part, this snippet here is definitely true. Make it function first, make it run as fast as it needs to second, make it maintainable (clean) third is what I do. And also kinda what Uncle Bob actually promotes in his book.
I agree with you. First of all my respect to Casey as a programmer. I understand his point of view. I do think Clean Code has it's place in the world of software development though.
From what I know, Casey is primarily a game developer. I my opinion, most Clean Code best practices are not suitable for game development. Games need to squeeze out every bit of performance as they can, so I fully agree that unnecessary abstractions should be avoided.
The Clean Code best practices have been evolved from building business applications, not games. I'm not a game developer, but I assume that the code from a game engine would be inherently different compared to a business application. Business applications tend to connect and depend on many other systems/applications/services and it's quite common to develop abstractions over those external systems. Of course these days some games also connect with other systems, but likely to a lesser degree compared to business applications.
We should not take things out of context. What's next? DDD/BDD/TDD/OOP/FP/AOP/etc. is bad in general, just because it's not suitable for game development? I agree with Casey to a certain point, but Clean Code was just never meant for games or other low level system applications.
@@foton4857 Oh, I did not know Casey was a game dev. Just heard two of his talks a while back and was confused how people could take illustrative examples from CC and refute the ideas of readability on grounds of some x performance gain, when maintainability would clearly lack in their case. Game engines of course have to get everything cycle possible out of the hardware and, judging from the not exactly glorious track records of games and their engines being easily debuggable or adaptable, game devs just live in a world where it is accepted that they lose clarity.
To me the analogy to Clean Code is school text books. Sure you might be able to convey a concept to a mathematical savant (ie the compiler or CPU) with cryptic abbreviations and othet shorthand notation, switching from clear text to formulas to references to other sources rapidly, but the average reader (ie developer) will have no clue what this chapter is meant to teach them even if they knew a couple months back or even wrote it some time in the past.
I loved listening to this. I am at the very beginning of learning programming and this is exactly what I do. I just write a simple attempt at what it is I am trying to implement. See what it does. What's being returned. Where, if any, there is something going wrong.
I get to a happy working state and try to understand where I can maybe make it better or learn from my code about how I was thinking at that point in time that got me into trouble.
One of the goals of "clean code" is to allow multiple people to work on the same codebase without creating too many conflicts and incoherences.
It looks like Prime & Casey talk about cases where they were the only ones developing at the time, so they didn't "need" that.
Very true. Especially when you need to extend someone else's code. Also let's not forget that "clean code" means good API design. The people working with your code might be users of your library.
Clean code is not only about OOP, and even for the OOP part, probably every developer who adopts OOP understand it is never about performance, it is about how to build a complicated software system and have generations of developers maintain it for 10 years.
This. Doing the quick and dirty thing may be good if you're working alone on a project or prototype, but working in a team with many people is a whole different business. If 5 different developers work in succession on a piece of software over the span of 8 years and each of them optimizes for themselves by choosing the shortest path to get the job done, they just pass the technical debt on to the next developer. Eventually the needle that breaks the camel's back is reached and one poor bastard has to redesign the whole system because it is so fragile that every feature added or bug fixed has a 99% chance of breaking something else.
And good luck explaining to management how you need 3 months to rewrite and retest everything when your task was to make the shopping cart colored grey when empty.
Well yes and no.
Which code base these days live 10 years :D
I see every company rewrite their crap every 3-5 years!
It's the old hyper optimised code that is still working :D
I can still compile my C code from 1990 that I did in school and it still works, as I recently did for a puzzel solver I wrote on this channel. Just to the backtracking algorithm function pushed it in and changed it a bit and worked. You can't even push your Python code from 2000 in a Python 3 from now and expect it to work :D
It is disheartening to see comments like this, suggesting that you can have either clean or fast code. Especially when you consider how hard code using OOP or designs pattern can be, with layers of abstraction, inheritance and frameworks on top of it.
@@Salantor But it is unfortunately true, like most things in programming there are tradeoffs.
I think "clean code" optimizes for extensibility while "dirty code" optimizes for performance.
@@Salantor there’s no such thing as a free lunch. More abstraction means more work for the CPU.
And personally I don’t mind good technical code in most use cases. I guess that’s the difference between system developers and high level business developers.
Us systems developers are always working on less abstraction. We live for creating and destructing memory and wielding bits and and pointers. If you do that long enough even less “clean” (very arbitrary) code is still readable for us. But when a business or web developers developer sees
You gotta go through doing good things badly before you get to do "bad things" well. Problem is far too many people don't even get to the stage of doing good things badly and just do bad things.. badly.
I've worked on some code bases that could have used 5 seconds of thought about design/architecture or even basic logic before the code was written.
Reminds me a bit of Ed Catmull’s Creativity Inc on Pixar processes, the take away is
“All Pixar movies sucked at the beginning, you just need to trust the process” , I really liked that book.
I agree to all of things thats said here. Personally I am very bad at figuring design of all the classes and inheritance and connections between objects before writing code. and I usually find myself overthinking about should I design it this way or that way, do I gotta do this or that because this wouldn't violate "best practices" etc... Then I realize I have zero lines of code written, all I have is lots of thoughts about how I could've done the thing instead of just fcking doing it.
And when I actually start writing the code everything becomes more clear along the way. After you get a working code you can then refactor and make the design better, but you gotta have a working code in the first place.
In the clean code book itself says that you have to use these words with criteria.
No wonder why most of the people believe they are going to be replaced by an IA
Uncle Bob says not even him can write clean code in iteration 1
@@edgardoarriagada9467 who can?
I used to be very insecure, paranoid about thinking through every possible way things can go wrong. Then I learned Python. And it has that attitude of "just do it". Just do a thing. Provide some reasonable flexibility and error handling, but you can't probably write a piece of code that will universally work for every possible case ever. I finally chilled out and I began to write a better code.
The funny thing to me is that this was my combined takeaway after reading clean code, clean architecture, and TDD books.
My takeaway after reading those books and a book on TDD was never to begin a project with an abstract design and then go down into the details. It always was to make it work and then refactor to make it pretty, and then, when it makes sense after you have realized what those patterns in your code base is, make it extensible.
loved this talk! waiting for longer version
Please Release Sir.
Had to come here to comment on the original video as stupidly comments are off. The video was TL;DR but from the first few minutes I got the impression that presenter was mainly focused on code performance ... I did not expect to hear that in 2023 there are so many more important aspects of code than performance as long as the performance is good enough. And mostly it is if you have a clue of what you are doing. So it is not worth a second though unless it becomes an issue. One thought for performance, a cursory glance, yes, second thought no.
Casey is game developer. Performance is #0 concern for him. And it translates to software too as he's using it.
And even outside of games - sometimes software is just incredibly slow. We have more cores and more computational power and faster memory than ever. Yet something like painting program or text editor can still take 5-10 seconds to load.
Sure 10 seconds is not 10 minutes like it could sometimes be with old software. But then you take that old software to modern PC and it loads in 1 second.
These videos are like therapy sessions. They confirm the things that I have believed to be true but suppressed because I was told they were wrong.
That being said, I think you are sometimes oversimplifying. A little bit of planning can help, and we all think in different ways. Here are some examples:
1. Write down the problem you want to solve. Be aware that this is not the final thing. You will end up challenging this problem as you develop the project (sometimes the problem disappears on its own through things like non-pessimization).
2. Describe the big things you think you are going to build. It does not have to be perfect, it is not a specification. It is just a tool to clarify your thoughts.
3. Describe what a good solution would feel like (yes, I am writing about feelings as software is emotional). Is it going to be fast? easy? small? scalable? Again, this is not a specification, it is just to get the juices going.
Start writing code as soon as possible. Once in a while, re-read the three things you wrote down above. Are they still true? Update as necessary, but keep it short.
Run your code, look at your code, and throw your code away. Start over.
My most favorite way to get around this is in V1, throw Errors.
Nobody likes errors, so it's mostly happy path designs. Once the thing is working, and depending on your colleagues: leave them in or remove them.
Years ago, I learned how to program from Casey's videos. And I now work with people who try to preemptively design things all the time. They're constantly hounding me about clean code principles but I can never take their criticism seriously because everything they write runs HORRIBLY. I don't understand it. There's some kind of bizarre script running in their brains that just cannot see the nightmare they've created for themselves.
It is either spaghetti code or their simple souls are impressed by complex code.
These people pervade the industry. They don't know what you know and cling to their silly ideas like a drowning man clings to a life vest. They ruin the engineering culture at entire companies, and attribute any success to their methodologies.
yeah but you have to stop and think then write
7:00 so we shouldn't plan at all ? i don't get this point
I think both of you have seen a lot of bad abstractions paired with overthinking in practice. Which to be fair happens to everyone with decent skills or better from time to time.
A good abstraction hides complexity from the programmer so you gain some free brain capacity for something else. (Much like VIM hides typing complexity) My favorite abstractions are hardware interface classes and message or event abstraction. If i hide the bit and byte addressing of a SPI or CAN bus in a class i can just write the flags in the network/application layer, and glue the flags to the right place in memory with implementation and declaration. Done correctly (with references) you don't waste a single instruction and seperated complexity. If you want to port this code to a cheaper microcontroller for example you only need to change the memory addressing and your done >90% of the times.
One rule of thumb: a good place for an abstraction is a place where you have a lot of identical code, !! not similar, not is-a, not has-a, identical code !!
Then it's worth hiding it behind a function or a higher form of abstractions. If you can't unroll your abstractions into working spaghetti code you choose the wrong type of abstraction for your problem, or your problem can't really be abstracted by it's nature.
Template classes and math are a good example for this, as the STL::math uses templates a lot. If you need a dot or cross product template you are doing some multivariable calculus. Do you really wonna bother with datatype specific implementations or SIMD instructions while doing multivariable calculus? Or would you rather say hey language feature here is dot and cross product, there are only so many native datatypes, i just call it with the datatype i need and you fill in the right one for me, and compiler pls do your thing afterwards.
Biggest problem why student struggle with making good plans, barely any Prof really minds the time and effort to show how you really identify identical code sections and hide them behind a layer of abstraction. They all just Allice and Bob, animal - bird - mammal - cat - dog or shopping cart you, but they never really dig into group theory and how to identify a set of something identical that can be abstracted without any "is-it-really-a" typecheck code bloat or a "you-said-it-is-but-it-is-not" bug.
I'd replace typing fast with editing fast, but otherwise agree fully.
I'm not the fastest typist, but I know what insights, autocompletes, snippets, etc. my editor will provide, so when prototyping I structure things in a way to trigger those aids as often as possible
This is awesome, looking forward to the full interview, love Casey
9/10 times its someone who is way to good to be working on what they are working on and they are bored. They over engineer everything and its a shit show.
We just brought our app from 6k lines of code to 900 by just simplifying everything. We now also have 92% coverage compared to a 60% coverage before.
If they overengineer it means they are actually not good enough though. Part of being a good developer means you have matured beyond this kind of behaviour.
Waiting for the full interview. So far, this clip and the original article gives me the impression that we are conflating a lot of things on clean code goals. Disregarding best practices just for the sake of speed seems like early optimization. We also need to consider readability, and maintainability. Having common patterns make easier to read and understand pieces of code without having to read everything line by line. Clean code and best practices is also not for designing for future features but for making the code easier to refactor for future features. Granted it can also be misused and we may end up with big chucks of over engineered pieces of code that are just hard to read, hard to maintain, and inefficient. But that would be because of misuse, not because clean code is just bad on all instances.
I've been writing software for 20 years and these guys are right, at least about this particular thing.
i am rarely right, but i have made so many abstract mistakes
Realy awsome interview, can't wait for the full one. I also like that Casey and Robert C Martin did some communication - even though he totally not seem to have get that Casey's enum opcode + union method instead of virtual function polymorphism handles both when operations change fast and when new "types" are added fast... It felt like he was not reading it.
It is good to see you agree with Casey this much on these things! I actually in that rare breed that thinks planning ahead can win you good things though. Many of my algorithms and data structures are on paper first and I code them later on. Last time I made a lightning fast sort algo (better than ska_sort) on paper and even the first time it started running it was beating standard sort heavily even before I started any "profiling based optimization". But mind it that on paper I already planned for cache lines, ILP and all such things - not like how algorithm design usually happens with calculating something like number of comparisons or hashings or whatever abstract thing.
For me the paper-thinking works best because its much easier to throw away the idea when I thought them through: So in a way and in essence it does have this kind of fast feedback loop like coding it in does. Also I do multiple alternatives sometimes too.
I guess this works for me because I spent my youth totally coding in assembly only. Literally only ASM for years and I remember writing a particle system in ASM for like 6 months in mid of high school and could literally run it like months later I started making this, because 1.) I was noobie and slolw 2.) doing that in asm was just complex 3.) I did this in spare time only (some people allegedly used this later in some game I don't know). To my amazement it actually "shown something". I did fear that I will just see a crash or black screen or something that totally not indicate where the error is, but I saw particles going on the screen but they had some bad patterns - and from the patterns it was easy to tell where my bug is!
Of course current me would find ways to test and try out parts of my work much earlier, also would allocate more time for it in one seating so its not spanning months and all. Yet I think coding always in ASM had it inherently that I had to "type a lot in" and because this makes the feedback loop real long even if I would optimize it much better than my young self I think it kind of teach me to plan ahead better.
Think about my dad's time when he was programming in his university time on PL/1 and literally on punch cards! He designed and wrote even the program on paper, handed over to some happy maiden that once (hopefully in good order) filled that into the big machine and he got some "you wrote a syntax error" kind of message a week later.
^^All the above being said: I like at least moderately fast typing, I highly use VIM for productivity and (at least) semi-automate a lot of things most people would dumbly write out. I don't really understand why they "claim" that it does not count. It really does! And not just it saves time "WHEN" coding, but even saves time in the paper design part where I know in advance that if some better solution needs me write more words on screen its fine and I don't fear it because half of them just gets generated by vim magic, the other half is not so slowly typed in...
^^Also doing vim means to me that I do not leave the flow when I am coding. For example Magyarsort's original version on paper took me a sleepless night to get it down - then I fell asleep and the next night it took me to "properly code all that stuff into the computer". - without any of those I think it would not be what it is today.
Also please see a stark contrast between this and uml kind of bullshit design. I sometimes used very lightweight UML for documenting some complex communication protocols and its fine (sequence diagram was useful), but planning in advance with uml is really not my cup of tea. Ahead planning should be on paper pieces - sometimes not even A4 paper, but "whatever I find at the moment" and can go from verry tiny up to a full description.
Oh yeah. Thinking about and working through problems is fantastic.. Architecting a design full of abstractions before you're sure they are the correct abstractions is where it falls apart.
@@ryanleemartin7758 I actually think you can do that too: just not all of it upfront but incrementally. I mean instead of doing waterfall, you would do "Bohm's spiral model" just not so strictly documented as that is not necessary, just having on-paper or in-mind planning phases and coding phases after each other. Also laying out "boxes" of modules on a higher granularity level can work - to me it feels it falls apart when done on the class abstraction level...
it really feels like its maybe because classes are just bad abstractions and its better to think in lets say a command line app and an other - or a system service running and waiting on some pipe or "components" in any ways implemented and such more modular building blocks I feel you much better can plan in advance and also its ususally much more unlikely that they will be not how you think about them. On the level of classes (and worst of all if you start doing this class hierarchy shit) it totally falls apart nearly every time to plan that ahead but I also feel that its really the wrong thing to do.
Also when I was working on R&D I had one more design principle: prototype based design. In that I looked at the project and before anything else, just identified what are the things that might be technically most tricky to get right. Then made throw-away fast prototypes that revolve around only those things. After this fast chapter we started doing "real work" and "some planning but only on module level". Then documentation always happened AFTER things were written and finalized somewhat - not like in other develolpment methods where they were trying to write it upfront. But the prototype phase really helped success rate of R&D projects I feel and it again is in line with what Casey talks about here with Prime - yet projects later also involved design on paper kind of stuff heavily (also at least half of those projects had custom hardware research in it so it was totally unavoidable anyways).
I'm currently reading my way through Thinking Forth, which is less about working in the Forth language itself than it is about thinking like a Forth programmer. One of the things Forth programmers do is -- write the minimum amount of code necessary to solve only the problem in front of you. If other problems come along later, deal with them then. It helps that Forth is a language so malleable only Lisp can rival it on that dimension, so it's a relative doddle to add in some new functionality and test it immediately, even on a production system.
Hey Prime! Your content is awesome and as someone who highly appreciate Clean Code and Clean Architecture I would say that I understand what you say. I don't think you are entirely wrong.
Even Robert Martin mentions that in "most projects only need 2 layers" in his Clean Architecture book. Well, isn't that what we see in most web framework samples? A controller for web endpoints (presentation + business rules) and a data access layer? The main difference would be how to use these two layers in what the calls Clean Arch. Is it worth tightly coupling with the current framework tools? How much planning is this project worth? How much and what type of human resources do you have for the project? Well, in the end you know what kind of quality you are about to get because the founders/builders might not even be sure what they are planning to build. In that case, clean coding is a total waste of time in my opinion.
If you don't know what you need to build, it is not a good idea to anticipate problems and plan abstractions. But it is not the same to say you don't plan anything.
Another point I entirely agree with you is that it is not wise to plan what you don't know how a particular piece of software works in REALITY. Creating POCs with 100% tightly coupled, public getters and so on are a wonderful tool for exploration and validation!
Have to deal with "I just typed this as fast as I could" code is also annoying. Those that connect many times to the database, who dont use the same entities, so they are sometimes overwriting and invalidating records in the database, those that force you to change the code in 15 different places and you probably forgot those two that will trigger a bug. Agree on this for starting a project, giving it a kick off, but not being able to test your code is damn annoying
I think the design needs of any piece of code depends on all the things it is expected to accomplish. And we never know all those things any piece of code is expected to accomplish on day one, so designing the code as we go becomes the only way to do it.
Glad Casey is a bit more nuanced in this interview. Honestyl Casey's own video was just too overly generalizing and shilling for my taste.
I realised this on my own as well, first I was slow to predict the design, second 9 x out of 10 I didn't use this design "later", so I started writing obvious, direct code if you will.
That being said, over time I was able to use better data structure to what I'm writing even if I'm just predicting on highest level, for example zipper, I use zipper's all the time now.
This is also totally true, after you use the thing, and write the thing, you'll write the thing second time much better.
Casey is seriously great. I enjoy a lot listening to him.
It all depends on what kind of software is being developed, and what the requirements are. For example, when building general purpose libraries, for example, the complexity should be hidden for the 'end user', no matter what. When building more specific logic, simplicity is more important in my opinion.
I never realized it but this is exactly how I code. I literally cannot wrap my head around a coding problem by putting it on paper I have to just dive in and start coding it
I never understood where these people come from that wants to abstract everything. I think they read it is some book followed some classes and are convinced that this is the only way.
These people somehow are convinced that all programs MUST follow the exact same design and methodology or else it is wrong.
However my experience is that all projects are different, every project will have a different programming style that will follow naturally from what you need to create. And you cannot predict how that programming style will be, you only discover it after you created this program.
And even trying to follow the same design, people shape how that design works in their head differently. So no matter what it's always different even if we're all trying to make the same thing
@@ThePrimeTimeagen I learned something new :-)
Personally, my drive to abstract usually comes more from a drive to try to generalize a problem, as you would in math or physics, which is something which tickles my brain in a nice way (feeling close to the nity-gritty workings of my system also does that tho, which runs the risk of pulling too far the opposite way).
So maybe some people just kinda start indulging because it feels nice or tidy one way or another, without much reflection and immersed in a culture which heavily gestures that way?
@@user-sl6gn1ss8p Abstracting something should aid the developer that is using your code, not confuse them ;-)
Most abstracting code I have met are just additional noise and you can't see what the code does anymore.
Don't ignore that the code you write, 1-2-5-10 years from now it will come back and haunt you when a bug surfaces.
Also abstracting most of the time don't survive code changes in the future. 1-2 years from now when you hand it over to someone else, they will probably rewrite it because they have no idea how it works anymore or new compiler code features made your abstract obsolete.
Most abstractions I have seen in projects I inherited actually prevents you to change anything. I had projects that got in a halt for 2 years because people were scared to change anything. It is only when I removed the abstraction that the project could proceed again.
@@olafbaeyens8955 yeah, I agree with you, I was just trying to reason where the taste for it comes from
The Prime has spoken “learn to type fast” - I say this to so many people - it’s a game changer - especially those new to programming and trying to catch the pack - I have learnt so many concepts by literally typing them out a couple of times - and I could only do it as typing wasn’t a blocker!
Also, I think I have standing in the matter as I learnt to type after learning to code. I was skeptical at first, but learning to touch type is probably the one thing that improved my coding the most in the last 24 years of pro programming 🎉🎉🎉
1:41, not knowing that even the best programmers do this has been the root cause for the imposter syndrome I have so far for over a decade
It always depends, sometimes you need a structure in bigger teams. The process will always be code, getting a working solution, refactor. That is just how it works, nobody can see all Classes or Code they need. But in most cases, some abstraction is needed to maintain the project, because you are working on a big team. With big teams you have two types of problematic people. That ones, that are either bad or don't care enough and mess up the source code and the ones which are too smart for everyone else creating some over optimized solution no one can understand. To minimize the problem some ground architecture is needed to prevent something like this.
I needed to hear so much of this. Thanks!
Great clip. more important than typing fast is to create a fast feedback loop so you can iterate fast
So the interview's locked, I didn't know you were a mutex in diguise.
Facts
Please release. Casey is really cool. Also, how about an interview with jblow?
When you really think about it, the whiteboarding thing is insane. We assume we can plan out 10s of thousands of lines of code ahead of time, when in practice most of us struggle to keep the context of 1000 lines of source code in our head at any one time. Big shout out to Casey for his handmade hero streams. Seeing how he could just speedrun through it and get it working in some simple way, and then rearchitect as he added more functionality, that really changed the way I code and made me more productive.
Lol...
This is exactly why we need code reviews done by someone senior in the team who not only understand the code but also business and budget.
To imagine a solution without trying first it's probably why it's so hard to estimate how long a task will take.
Unless you've been through it before or a similar scenario, you just don't know how the end result will look like.
quick feedback loops allow for more interations. Something I've been been attempting to point out to development managers. My current situation is a development loop that takes way too long, so between coding, and seeing if your code does what it wants is way too long because mgmt wants to security scan, build, containize and deploy to something devs don't own for each iteration. They keep wondering why things are not getting done and its because there is less time spent on coding and more time on waiting. The Inner Developer Loop, one of the best things I've read in a while concerning maximizing dev output.
I think he misses the point of clean code. Clean code was never about performance. It's about readability / maintainability and testability. I am a big proponent of not optimising prematurely. Of course if you specifically require perfromance then it's a different story. But in most big collabarative project, performance is simply not as important. And if it is, you go back and optimize that particular bit of code.
Also, where do you stop with optimizing? You could go into bit manipulation and probably improve his code by a factor 10 or 100. Just because you can do something, doesn't mean you should.
That being said, I do think it is very important to always be aware of performance. Especially when working on big data sets and alhorithms.
Can't freaking wait, this is going to be awesome.
Typing fast and experimenting is definitely way way better than upfront massive design processes. The risk though is people who iterate without thinking so you do need a balance there. I've had students copy and paste bad code from stack overflow (usually it's fine for the question but doesn't actually meet their needs for the lab,) and then iterate basically at random endlessly without making any progress. I feel like both of the people having the conversation in this video implicitly know this and you would think it goes without saying but I do feel it does need to be explicitly said just in case.
As for software engineering the book modern software engineering does a good job of covering that topic. Engineering isn't about holding fast to certain ways of doing things it's just that most engineering fields are so well established that they look like they are that way. Software Engineering is a pure design process with the "construction phase" done by a compiler or as required by an interpreter so lessons learned from, say building a bridge, are not actually lessons, but anti-patterns. We tend to call mapping other construction engineering practises onto software "software engineering" but these practises (which UML was made to be a part of,) don't work for software and yet for some crazy reason people generally label the failed attempts of the 70s and 80s as software engineering, continue to teach them and don't update the practises based on what is seen in the real world. UML as a notation for communication is fine but just about any time I see a UML diagram either it is a poor way of explaining something when pseudo code or English would have been better or it's indicative of a bad design that has somehow become so complex it requires a system of diagrams to summarize it.
Dude never turned compiler optimization....
Yeah this one for sure will support multiple types in the future, that time never came
I gave up trying to tell my coworkers to stop making needless abstractions. They just don’t listen.
Best April fools ever. Super excited for this interview though.
Indeed foresight is very, very hard. At the same time it's psychologically challenging to keep charging into the unknown, like there's some weird thing where the mind wants to distract itself from the exact task at hand.
Yeah, i like to use test when is possible because help us later when we need to change software and software always change!!!
Yeah I really like using unit tests as a form of implementation. Then the ones I really like I keep around
GREAT!! I want to hear more from you both!!
Maybe the start here was taken out of context, but did he really say that he does not see any benefits of clean code?
That is just wrong on so many levels.
Clean code does not mean you need to sacrifice performance. On the contrary, clean code will help you restructure the code in the future and then most likely to increase performance a lot.
Bad spaghetti code, with huge functions and bad variable names and so on is not better in any way. It is horrible if you work in a team and need both your collegues and yourself to understand and modify the code. And ugly code is certainly not faster by some sort of default.
But I cannot belive any decently experienced programmer would recommend non clean code, so I assume I misunderstood the context here.
Yes, if you sacrife speed a lot in some way just to make your code clean, something is of coruse wrong. That I can agree with.
Upload the full discussion!
Well, that was amateur level talk about code-architecture. Math is timeless, that is how a good system can scale into the future.
I plan at a high level, “I want some piece of code, some service etc to do something”. “Builder pattern may be useful, or hmm regex could work well there” That’s about as far as I go, then I try to implement it and usually the beautiful idea I had in my head is wrong and it looks different. Planning code is something I’ve never done though and never will at least on a function or line by line basis
"Learn to type fast" YES! THIS! 100%
Also, at least for me, learn your IDE, and/or whatever other things you can do to shorten that feedback loop. Keyboard shortcuts, hot reloading, automating as much of your build process as you can. Start doing it now, and every once in a while go look for new things you can do to shorten it even more. Tmux, little shell scripts to help reduce your cognitive overhead, maybe even _learning a new language with a shorter feedback loop and maybe even a REPL so that you can bang out proofs-of-concept before investing the time to implement it "for real"_ YMMV and only you can decide what specific things work for your needs, but I strongly encourage you to actively, ravenously work to shorten the time from idea to seeing whether it works. It may not feel like much tomorrow or next week, but if you do it consistently, you will look back in even just a year or two and marvel at how much faster you are able to go.
Full interview NOW!!
New perspective acquired. Now time to write a generic to integrate it into a future proofed pipeline
I think there's a term called "Wicked Problem" which is a problem that can't be solved until you solve it. A lot software is a wicked problem.
@TehPrimeTime 2:06 Kevin Lin??? what's that name?
He shared exactly same performance issues with Clean Code in C++ in his youtube channel (I tried to leave my comment there but comments disabled).
but Clean code is not just for C++. It's for every programming languages.
Generally complier decides the performance in the typical garbage collected language like Java, C#, Python. and Javascript.
I belive that coding style doesn't make major performance issues as long as code is not extremly bad like using millions of O(n^2), O(n^3), O(n^100).
I think that the code quality is more important than performance itself especially in the team project.
In the typical microservice architecture, network latency and database communication is way more time-consuming than the performance itself in the each server.
Also, If I were a C++ compiler developer, I would try to find the way to make better performance with the clean code guided codes if it have issues with Clean code as he mentioned in the video.
I tend to think our problems and visualize them in my head, be that on the buss, on the toilet, while staring at my screen, or basically any other time of day.
So when I start writing I have a code structure in my head, though I do fiddle around when writing to find something that works.
(I also look up random programming stuff every now and then to check how something can be done)
Also when I don't know the output of something, I just put a print statement and look at it while using it.
I once tried to avoid "over-engineering" and it kicked me in the ass a few months later. It's very unwieldy, nightmare to refactor and has ton of problems. Then I tried to design everything upfront in another project. Didn't help much. The next thing I will try is to make a prototype meant for throwing away and completely rewriting.
I wished the dudes I work with would have to hear that each single morning before their work.
I am confused to see clean code made equal to doing things (wrongly) future proof 🤔 For me clean code is also like making things easy to comprehend in the current code. Making API symmetrical and predictable. Not changing that one variable from outside the thingy (class/module/function) while N-1 updates to it are inside that thingy. Etc.
7:00 so we shouldn't plan at all ?
I came to this because I did all the things they complained about on my last project, and it left me feeling, "Was this over-structured code I built a symptom of an undiagnosed neurosis I have?"