The Serious CTO
Canada
Joined Sep 27, 2017
Dive into the intricate world of technology, where the concepts are as vast as the internet itself, with a unique approach that blends humor, plain language, and insightful comparisons. Imagine learning from a Serious CTO - so serious that even the zeros and ones stand at attention - who doesn't just share tips but narrates them through captivating stories infused with wit, making the most complex ideas not only accessible but ridiculously engaging. How might these tales, so humorous they could make even a robot chuckle, revolutionize your tech journey? Whether you're a seasoned tech professional or someone who thought 'Python' was just a snake, discover how laughter, clarity, and practical insights converge in a spectacular fusion to transform your understanding of technology.
Meet the CTO Who's Changing the Game!
🎯 Welcome to The Serious CTO - Where Technical Leadership Meets Practical Wisdom
A CTO since 1999 (that's 20+ years in the trenches), I share battle-tested insights for developers and tech leaders. No fluff, just actionable technical knowledge packed into 4-6 minute videos.
🔍 What You'll Find Here:
• Code smell detection and solutions
• Software architecture best practices
• Tech leadership insights
• Project failure analysis and prevention
• Real-world developer career guidance
• And yes, some tech dad jokes 😉
⏱️ New Videos: Twice Weekly
Perfect for busy developers who want to level up their technical and leadership skills.
🎯 Who This Channel is For:
• Developers looking to advance their careers
• Aspiring technical leaders
• Current CTOs seeking fresh perspectives
• Anyone interested in software development best practices
💡 Featured Playlists:
• Software Project Failures Explained
• CTO Leadership Lessons
• Code Architecture Deep Dives
• Tech Career Development
📚 Experience & Background:
• CTO since 1999
• Extensive enterprise software development experience
• Focus on practical, implementable solutions
• Real-world case studies and examples
#SoftwareDevelopment #TechLeadership #CTO #CodeQuality #SoftwareArchitecture #DeveloperCareer
Views: 145
Videos
The Code Smell Scam That Misled An Entire Generation Of Developers
2.4K views · 9 hours ago
Don't fall for the code smell scam! Learn about speculative generality and how it's fooling most developers in this CTO insights video. 00:00 - Start 00:52 - What is it? 01:19 - The Reality 01:33 - Real World Examples 04:32 - Outro
Is PostgreSQL Hiding THESE Amazing Features From You?
150 views · 14 hours ago
As a seasoned CTO, I've come to realize that not learning PostgreSQL sooner is one of my biggest regrets. In this video, I'm sharing my insights on why I think PostgreSQL is an essential tool for any serious CTO, and how it can take your database management to the next level. From its ability to handle large volumes of data to its reliability and scalability, I'll be diving into the benefits of...
I Found the Worst Coding Mistakes in History
1.1K views · 1 day ago
SILENCE is Costing You More Than You Think!
677 views · 1 day ago
Why Your Code is Jealous (and How to Fix It)
208 views · 14 days ago
Mastering Inversion of Control: Simplify Your Code Today
883 views · 14 days ago
LONG METHOD FIXES You Never Knew You Needed!
549 views · 21 days ago
Avoid a Tangled Codebase - A Cautionary Tale
620 views · 21 days ago
Transform Your Code With These Techniques
173 views · 1 month ago
Unraveling the Secrets of the EVENT LOOP!
229 views · 1 month ago
Is Your Code Stuck in the Primitive Stone Age?
1.3K views · 1 month ago
Unlock Your Database Superpowers with Mariadb!
126 views · 1 month ago
Trouble Ahead! How Divergent Change Threatens Your Code
130 views · 1 month ago
Understanding the SOLID Principles in 10 Minutes
242 views · 1 month ago
Uncover the HIDDEN Danger in Your SWITCH Statement!
1.1K views · 2 months ago
Code Smell: Shotgun Surgery or Just Bad Aim?
883 views · 2 months ago
Why do software projects fail? CTO reveals class diagram insights
4.8K views · 2 months ago
Programmers: Masters of Mystery or Just Data Clump Enthusiasts?
690 views · 2 months ago
Why Your Company Needs a Chief Digital Officer (and Fast)
104 views · 2 months ago
The BEST Way to Harness the Power of Large Language Models
84 views · 3 months ago
What Top CTOs Know About DevOps That You Don't!
366 views · 3 months ago
Data Mesh: The Future of Data Engineering Explained
152 views · 3 months ago
3 Keys to Building a Strong Team Culture
65 views · 3 months ago
MongoDB: The Database Revolution You Didn't See Coming
104 views · 3 months ago
Why You Shouldn't Become a Data Analyst
59 views · 4 months ago
Theory of Constraints Finally Explained (Breakthrough Case Study)
73 views · 6 months ago
System Usability Scale: The Best User Test You’ve Never Heard Of
60 views · 6 months ago
Nightmare on Code Street: Eventual or Strong Consistency?
61 views · 6 months ago
Why Kotlin is Still the Best Choice for Android Development
78 views · 6 months ago
One example of this I see all too often in the game modding community is the "library mod": a mod that only contains common code for all of that modder's other mods. Of which there will be ONE. >.>
I'm guilty as charged 😂 May I suggest lowering the volume of the background music? (or perhaps even removing it completely) Less is more ;)
I sentence you to… listening to background music that’s too loud! Good suggestion. The next two videos will be the same, but the ones after that will be less loud.
I witnessed an AbstractWorkflowInstanceFacadeFactory. It had 100 methods on it that did wildly different things. Any time you needed a new workflow "step", you needed to add a new method to the interface and abstract class. Oh! And don't forget to add the method to the three additional copies of the same abstract class. That said, I have seen cases that benefit from generality. Web request filters are a good example of good generalization. We use generalization all the time. But that stuff belongs in libraries as dependencies. And if you're writing it, you probably don't need to. There's probably an existing library for it.
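As a concrete illustration of the "good generalization" point about request filters, here is a minimal, hand-rolled sketch; the `Request`, `Filter` and `FilterChain` names are invented for illustration rather than taken from any real framework.

```java
import java.util.List;

// A one-method request abstraction, small enough to be a lambda.
interface Request { String path(); }

// The generalization that earns its keep: cross-cutting steps plug in as filters.
interface Filter {
    void apply(Request request, FilterChain chain);
}

class FilterChain {
    private final List<Filter> filters;
    private final Runnable endpoint;
    private int index = 0;

    FilterChain(List<Filter> filters, Runnable endpoint) {
        this.filters = filters;
        this.endpoint = endpoint;
    }

    void proceed(Request request) {
        if (index < filters.size()) {
            filters.get(index++).apply(request, this); // hand off to the next filter
        } else {
            endpoint.run(); // every filter passed, run the actual handler
        }
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        Filter logging = (request, chain) -> {
            System.out.println("-> " + request.path());
            chain.proceed(request);
        };
        FilterChain chain = new FilterChain(List.of(logging),
                () -> System.out.println("handler ran"));
        chain.proceed(() -> "/orders/42"); // Request has one method, so a lambda works
    }
}
```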
How on Earth is acknowledging that changes eat up the most time and are the most difficult part of the design process not a good thing? How can you say that anticipating this, and designing so that the design is flexible enough that changes won't be difficult or annoying, is bad? The next person who comes in is going to thank their fucking lucky stars that the scaffolding they need to start making those changes is already there. That's what future proofing is. You're not predicting the future. You're using your experience to know with absolute certainty that people are going to want to change things, that a lot of the changes are going to be stupid, and which aspects of the design need to be most flexible. That's what experience gives you: the ability to know what the next person working on this thing is going to be asked to mess around with first, so they can come in without essentially starting from scratch, and without having to decode something that's impossible to figure out. It's already been made flexible and object-oriented enough that if they don't like something, it's easy to remove it altogether and replace it, because you've made sure everything is modular. Even if they don't keep anything, they're still going to work within that modular framework, so the more people who work on it, the less has to be figured out all over again. And by the time it makes its way back around to you it's not really an issue; it's already following the flexibility and workflow you provided, because you knew from experience that this is how it works. Maybe design is different from code, but I can't imagine that coding to anticipate change is a bad thing. I just can't imagine that.
Thanks for commenting, I love being challenged. Let's take an analogy: driving a car. What I'm basically saying is that future-proofing slows you down and that you should avoid it when you are speculating, so that you can go faster. That doesn't mean I'm saying drive at 180 km/h or mph. I'm saying that if you do it when you don't need to, it will take you longer to get to your destination (ship the product). If you know what the future holds then of course you should plan and build for it, but only when you are absolutely sure. And even then, for me, that's a bit of a grey zone. What if the product flops? Ships and then is abandoned? What does all this future-proofing give you? The product is dead...
@@theseriouscto Have you never gone back into a dead project for that one amazing thing that absolutely fits the issue you are currently having? Basically what I mean is: future-proofing is ensuring that not only you, but also people you may never actually meet, can feel more assured that they will not have to solve the problem from scratch a second time.
What you’re explaining is writing clear and clean code, not implementing a whole scaffolding of abstractions or interfaces that are only used in one straight line to a single class. Different things, no?
Can long parameter lists happen when too many non-coding managers start making requests the design team isn't allowed to refuse? If so, what's the solution?
Love this question, but you're probably not going to like my answer: it doesn't matter. Non-coding managers should not tell you how to code. Therefore, you should be in complete control over the parameter list and how long it can get. Refactoring is your friend. If non-coding managers want to code, hand them the keyboard for a few months and turn off your cell phone. Tell them you'll go do their non-coding job (accounting, marketing, counting paperclips, etc...). Now, telling you what features are needed is another story altogether, so much so that it has nothing to do with coding. They describe the feature, you figure out how to code it.
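On the refactoring side, the usual cure for a runaway parameter list is the Introduce Parameter Object refactoring. A minimal sketch with made-up names (ReportRequest, ReportService), assuming Java 16+ for records:

```java
import java.time.LocalDate;

// Before: every new request from the business added one more parameter.
// void createReport(String title, LocalDate from, LocalDate to,
//                   String format, boolean includeCharts, String locale) { ... }

// After: the values that always travel together become one parameter object.
record ReportRequest(String title, LocalDate from, LocalDate to,
                     String format, boolean includeCharts, String locale) {}

class ReportService {
    void createReport(ReportRequest request) {
        // New options become fields on ReportRequest; this signature stays stable.
        System.out.printf("Generating '%s' (%s) from %s to %s%n",
                request.title(), request.format(), request.from(), request.to());
    }
}

public class ReportDemo {
    public static void main(String[] args) {
        new ReportService().createReport(new ReportRequest(
                "Q3 sales", LocalDate.of(2024, 7, 1), LocalDate.of(2024, 9, 30),
                "pdf", true, "en-US"));
    }
}
```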
Simplicity is king. I estimate about 2/3 of code in most codebases isn’t actually needed. I’ve done plenty of refactors where a small amount of code is added and 3-4x the amount of equivalent code is removed.
That's exactly the point-we're often overbuilding solutions to problems that don't exist yet! We all know that one day that special person in our lives may need bigger clothes... But let's not buy those clothes until they are needed ;-)
It never happens until it happens. Before avoiding it you should take the real project context into consideration. The problem that I see with these ideas is that no one suggests a realistic alternative: yes, you can add noise and unnecessary complexity to the system, but not doing it can also add complexity and bottlenecks, and the "you can refactor later" is in most cases a very big lie. Before taking any decision you should first analyse the context.
The real context will draw a BIG LINE between speculation and the real project context. Given that most software projects never get finished, I think it's important to always leave the fluff for later. Refactor later is not a lie, it's work. Get it working, get it working well, enhance it, repeat.
Awww great video - the only thing that could make it better would be if Noah was a cat ^^
Yeah, Nora is a keeper. Fun fact: I now don't record when she's around since she tries to climb on me at the worst moments. Gotta love them ;-) More fun facts? Nooby, who is much older, is more used to me recording and just lies at my feet.
@@theseriouscto A... Nora... not noah, my bad. Such a cutie ^^
Why are you attacking me ^^ Seriously, one thing I want to add: sometimes this overengineering already happens at the conception level, with bloated requirements that never really get used - because we might need it later.
I think I'm attacking everyone ;-) Couldn't agree with you more, hence the power of saying no - OR - priorities. Suits want everything, but they have to prioritize and you start with the most important thing. Would a suit get married and fix issues with the bride later? Well, we don't write software like that either.
Thanks 👍. Btw, take a look at surrealdb. I was amazed about joins without joins 😮.
This is a fantastic philosophy that I'm trying to encourage in our teams. It seems many engineers write complex abstractions for no other reason than enjoying writing code. 😂
Absolutely! It’s like some engineers think they’re composing Shakespeare when they’re really just writing a grocery list. Let’s aim for clarity over complexity!
Or a REST call has a dozen queryString params as it is a GET and does not (should not) accept body objects.
Ah yes, the classic 'queryString of doom'-where a simple GET call looks more like a ransom note. 😂 While it’s true GET shouldn’t accept a body, cramming a dozen parameters into the query string often signals that the API might need some rethinking. Sometimes, it’s worth considering whether those parameters could be grouped into a resource or if the endpoint should be restructured to better fit the use case. After all, readability and maintainability are just as important as sticking to HTTP conventions.
@@theseriouscto - some cases, like my last API, are constrained by decisions lost to the annals of time. And to top it off, the "data store" is an old mainframe with an old XML based "API" that required 4 fields just to get past security checks. Some days you just have to package that ransom note into an object then use that one object everywhere (GET/POST/PUT/DELETE/etc). GETs are a querystring that is turned into this "channel" object. All the others that take a body have this same object as a child object. I really appreciate this series as param passing is where I see a lot of "huh?" moments! 😛
I'll be pedantic and say that you're really complaining about premature abstraction. Premature abstraction is bad, future proofing is good. Premature abstraction is just future proofing gone wrong.
We are 100% on the same page, and that's the definition of Speculative Generality. I think the word Speculative gives it away ;-)
Yes, excellent video. I definitely went through a phase like this in my career. In the last two-three years I have noticed myself finally stepping away from this, and it's much better. However, I think because I wrote (stupid) abstractions, I have learnt how to do it. Many people I know never learned to do it, and now they can't do it. Maybe there is some value in this, although I still feel bad that someone needs to maintain my bad code.
Maintaining bad code is like cleaning up after a party you didn't throw! At least now you know how to throw a better one next time!
That spoiler warning came too late; I couldn't click off the video. :(
So Sorry about that. Out of curiosity what content would you suggest?
I instead like to fully investigate the possible futures, and then just ensure that we're not abjectly working against those futures and preventing them. I'm happy to not build the future until we get there. But ignoring known futures is the road to hell. We're just coming out of a TWO YEAR refactor that was 90% downtime, and it was all from ignoring known futures while writing the code. Totally avoidable. We refused to look far enough into the future, and it cost us 1-2 years of full team downtime.
Wow, two years of refactoring sounds like a tough lesson-I'm sure that’s not an experience you want to repeat! You make a great point about known futures. Ignoring clear signals of where the code might need to go can create massive technical debt and costly downtime, as you've experienced. I also think it’s important to distinguish between speculating on features and knowing about them. Speculation often leads to over-engineering for hypothetical needs, while ignoring concrete, foreseeable changes can have consequences like the ones you described. Your approach of investigating possible futures without overcommitting to them strikes a great balance. It’s about acknowledging likely changes and ensuring you’re not actively working against them while still focusing on solving today’s problems. Thanks for sharing-it’s a valuable perspective and a reminder for everyone to keep their eyes open for those 'known futures' without falling into the trap of speculation.
I would say I’m probably the type of developer who tries to implement abstraction and modularity whenever possible. And I can see how this might increase complexity and waste time for both me and my teammates. However, I’ve also experienced difficulties with code that doesn’t have enough structure. I’m talking about hardcoded spaghetti with functions containing 1000+ lines. It was a nightmare to add features when there was no underlying framework whatsoever. Any advice on finding a healthy middle ground?
Great point! Striking the right balance between abstraction and simplicity is like walking a tightrope-lean too far either way, and things get messy. On one hand, too much abstraction can lead to unnecessary complexity that bogs down the whole team. On the other, too little structure turns the codebase into a plate of spaghetti no one wants to touch. My advice? Start simple. Build just enough structure to solve the problem at hand while keeping things flexible for future changes. Use refactoring as your safety net-let the code evolve as the requirements become clearer. And don’t be afraid to involve your teammates in these decisions; shared understanding is the key to maintainable code. Oh, and if a function starts approaching 1000 lines, that’s probably your code tapping you on the shoulder saying, 'Help me, I’m drowning!' 😂 Is getting a refactoring tool an option for you? I'd be curious to see if a good one spots patterns.
@theseriouscto I don’t have any experience with refactoring tools, but if you have any recommendations let me know! I finished my internship at the company I was working at, and I’m not sure if it would be an option anyways, but would be nice to know.
@@owenm3112 What tech stack do you use?
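Coming back to the 1000-line-function problem raised above: the usual incremental escape hatch is Extract Method, peeling off one cohesive chunk at a time behind a descriptive name. A rough sketch with invented names, not anything from the video:

```java
import java.util.List;

// After extraction: process() reads like a table of contents, and each step has a name.
class InvoiceProcessor {
    double process(List<Double> lineAmounts, double taxRate) {
        validate(lineAmounts, taxRate);
        double total = totalWithTax(lineAmounts, taxRate);
        print(total);
        return total;
    }

    private void validate(List<Double> lineAmounts, double taxRate) {
        if (lineAmounts.isEmpty() || taxRate < 0) {
            throw new IllegalArgumentException("invalid invoice data");
        }
    }

    private double totalWithTax(List<Double> lineAmounts, double taxRate) {
        double subtotal = lineAmounts.stream().mapToDouble(Double::doubleValue).sum();
        return subtotal * (1 + taxRate);
    }

    private void print(double total) {
        System.out.printf("Invoice total: %.2f%n", total);
    }
}
```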
I was working on an amateur project for a guy that had already built out a foundation, and his database had like 48 tables in it. MAYBE 5 had data, the rest were for "just in case" and "I need this later." It was a PITA to work in, and that is before getting to the mess in his website that interacted with it. That project made me stop "building ahead" when the Good Idea Fairy would whisper "You might need this later."
Ah, the classic case of "just in case" tables! Built on the ideology of Inspector Gadget ;-)
Yup - you'll be preparing for anticipated changes that will never be there, while the real changes will anyway require a refactoring
Happy we're on the same page. Out of curiosity, what content would you suggest?
"The best code is the code that doesn't exist", are you paraphrasing Genrich Altshuller?
Not intentionally, but I’ll take it if it makes me sound smarter! 😄 The idea does align with Altshuller’s TRIZ philosophy: the best solution is one that eliminates the problem entirely. In coding, that often means solving the problem so elegantly-or questioning whether it needs solving at all-that you don’t end up writing unnecessary code. Truly a case of less is more!
@theseriouscto It's in my email signature, "the ideal system is when there is no system", along with Churchill's exhortation "no idea is so outlandish..." etc.
The only future proofing code needs is comments. Lots and lots of comments. Ideally funny ones.
Funny comments? Absolutely-future devs deserve to laugh while crying over the code! Just remember, comments are like seasoning: too few, and it’s bland; too many, and you’ve turned your code into a novel. Aim for the sweet spot where they’re useful, hilarious, and don’t double as a stand-up routine.
@@theseriouscto I can only write comments and no code so my opinion doesn't really matter. But as a problem solver, my view is that in the future I'm much more likely to go back to notes to see HOW I was thinking than WHAT I was thinking. In other words, I learn from the process more than from the results. Put another way: the way to future proof code is make it so you're better at coding after every project.
@ I have a great video coming up on a comment smell, scheduled for Jan 5th. I think you will enjoy the value it will bring.
I'm glad this has been said - I was a programmer for a while. Originally, I wanted to make games, but abstraction honestly killed it for me. I'd get so in my own head after being taught what was wrong to do, how much future proofing etc. that I'd spend multiple days on a game system that should be super simple... only to find that I needed a value from that game system that felt too specific, and so I'd try to add another layer of abstraction to future proof even further before getting burnt out thinking about all the ways it could go wrong / all the things I can't replace. Bounced around a few companies, eventually went to a non-gaming one and found a team with my mindset. They worked on small projects around the company, which fit me well. Company wanted a countdown timer? Great! We'll build the most abstract countdown clock ever. It took multiple days and was super awkward to use... I didn't work on it - but was super interested in it. They went for a component structure - each component was an interface. Interface for Minutes, Seconds, Hours, Milliseconds, all within the interface for a Clock, which was part of an interface for a component called Timer. Separate from Timer was an interface for getting the current time. There was an interface for getting the deadline time. They'd been set up in case we needed multiple timers, or if the timer would be using nanoseconds or days, or years, or a different clock setup in case, like, we stopped using hours and minutes... They wanted to make sure the errors would be futureproofed, so they plugged in an error handler that reports errors and sends an email out. This was designed to be replaceable and allowed for multiple error handlers to run in case there were other ways to report it. There was an interface for displaying the clock. Interestingly, they separated out these components onto multiple processes that all spoke to each other - that way you could stop a process and replace it with something else. ... It was... never extended... it was just a timer... they actually ran out of time making the thing, and they didn't add any UI for changing the deadline... I also found out - once I was given it - that the code constantly broke when running it - you needed to run it multiple times, as the components would sometimes reference something that hadn't loaded yet. Additionally, the error handler had become so ingrained that it was required to run and actually didn't work - it would replace the error message with a custom error message saying 'Error - error handler has failed to report error'. It was... future proofed to death... I think another team came over and remade it in less than a day with just 'Look at time, look at deadline, show the difference on this clock...' Fun :) I quit programming - tried getting back into it, but abstraction and such are so ingrained that I no longer find it fun to make games.
Hello, I hope to learn something new from you!
Welcome aboard! I’m excited to have you here. Stick around, and I’ll do my best to bring you something new, insightful, and maybe even a little fun along the way. Let’s make learning awesome together!
I guess this isn't as much the case in game development. Say you're making an RPG: it would be good to "future proof" the components/character classes, so all new players, enemies, NPCs etc. are easy to implement in the future and act consistently.
True, in game dev, future-proofing is like setting the rules for a board game before anyone starts playing-it keeps things consistent and makes adding new pieces a breeze. But even then, the trick is to future-proof just enough. Build for expansion, not for every possible DLC idea that might pop into your head at 3 a.m. Otherwise, you’re just creating a boss-level challenge for yourself!
Another thing that can happen is, one developer will be comfortable with a certain level of complexity or abstractness of a design, so it doesn’t feel over engineered to them, so they build a complex solution that handles problems just fine as long as they are maintaining it, but then you bring in more junior developers who need the code to be super easy to comprehend, and they either make a mess or they just can’t touch the “over engineered” code. What makes it over engineered in some cases is the mismatch between the complexity of the solution and the actual developers who will be maintaining it.
Or maybe developers are just used to a different set of design patterns. I think it's key to have a common understanding of how code is developed in a dev team. This might change with time but should not change overnight.
Ah, the classic 'over-engineered code handoff'-where one dev’s masterpiece is another dev’s nightmare. Complexity is relative, isn’t it? What feels like elegant design to one person might look like a crime scene to someone else. 😂 That’s why shared team standards are so critical. A consistent approach gives everyone a common language to work with, minimizing that ‘what fresh hell is this?’ moment for whoever inherits the code. And yeah, evolving those standards is fine, but doing it overnight? That’s how you give your team whiplash!
There’s a Venn diagram between “smart enough to anticipate possible future changes in requirements” and “dumb or inexperienced enough to believe that you can mitigate this proactively”. That said, some futures are worth planning for, if you know they’re coming (or at least very likely to).
That Venn diagram is where optimism goes to die! 😂 But you're spot on-there’s a difference between guessing wildly and making an informed bet on a likely future. Planning for the inevitable? Smart. Trying to outsmart the unknown? That’s how you end up with a Rube Goldberg machine in your codebase.
"Premature optimization is the root of all evil" indeed.
Absolutely! Premature optimization is like packing for a trip to Mars when you're just planning a weekend road trip-you'll probably end up hauling a lot of unnecessary baggage and still forget your toothbrush.
@@theseriouscto Abso-fkn-lutely.
I think there is a place for future proofing: knowing you will actually need these abstractions. E.g. you wrote similar code for different clients before and thus have some experience of where it will end up. But if you solve a new problem, you don't know what your code has to solve in the future. In this case abstraction is just hubris. You think you know how to describe the world in abstractions. And then the exceptions hit you like a truck.
It's not speculative then
If you’re basing your abstractions on known patterns or prior experience, it’s not speculative-it’s informed design. The distinction lies in whether you have concrete evidence or are just guessing about future needs. Speculative generality becomes a problem when developers try to anticipate future requirements without sufficient context or data to justify those decisions. This often leads to over-engineered solutions that solve problems no one actually has, making the codebase harder to work with. On the other hand, leveraging prior experience to create abstractions that address recurring patterns is a smart way to future-proof in a controlled, meaningful way. The key is knowing when you’re dealing with a recurring problem versus stepping into the unknown.
There are sometimes good performance reasons not to split your data into discrete objects. Identical operations on, say, 2 trillion vertices can be slowed by several orders of magnitude. Solution? I guess encapsulation of the entire dataset, with getters and setters for every variable, and pointers to the internal arrays. Tl;dr: objects aren't the only way to do these things.
You’re absolutely right-performance can make splitting data impractical, especially at 'trillion vertices' levels. Encapsulating the entire dataset with getters, setters, and pointers to internal arrays can be the right call in those cases. Objects are a tool, not a religion! That said, I don’t propose silver bullets. I propose solutions to actual problems-problems that exist. If your current approach works and doesn’t cause issues, there’s no need to go hunting for problems to solve, especially if they don’t exist yet. Overengineering is just speculative generality in a shiny new outfit.
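As a rough illustration of that approach (all names invented, not from the video): the dataset stays in flat parallel arrays for cache-friendly bulk operations, and one class encapsulates access instead of allocating an object per vertex.

```java
// Struct-of-arrays style sketch: no per-vertex objects, just flat arrays behind one API.
class VertexBuffer {
    private final float[] xs;
    private final float[] ys;
    private final float[] zs;

    VertexBuffer(int count) {
        xs = new float[count];
        ys = new float[count];
        zs = new float[count];
    }

    int size() { return xs.length; }

    void set(int i, float x, float y, float z) {
        xs[i] = x; ys[i] = y; zs[i] = z;
    }

    float x(int i) { return xs[i]; }

    // Bulk operation over the whole dataset; no object-per-vertex allocation or indirection.
    void translate(float dx, float dy, float dz) {
        for (int i = 0; i < xs.length; i++) {
            xs[i] += dx; ys[i] += dy; zs[i] += dz;
        }
    }
}
```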
I was, and I still somewhat am, guilty of this due to having to work in a context without good internal standards. I realize that my intent, simplifying my future self's life, is at odds with the strategy. However, I think that many devs do this because they do *recognize* a problem, and they solve said problem with the tool they know best: code. Whereas said problem is best approached from a different angle: getting everybody on board with a style, adding tools to reinforce that style (I don't care where brackets are written, as long as it's consistent across the whole codebase). Likewise, refactoring and tests should be embraced, but in many companies management is pushing for more features because that's where the money comes from, ignoring the economic aspects of unmaintained code. Teaching non-technical people that *not* refactoring is *expensive* is a challenge; some don't know and can learn. Some know but don't care because they are already polishing their CV after they get their 'unexpected productivity' bonus.
Thank you for the thoughtful comment! You bring up an excellent point about how developers often default to solving problems with code because that's the toolset they know best. I agree-sometimes, these issues are better addressed through team standards and tools, but that requires organizational buy-in, which isn't always easy to get. In my experience, most products don't even make it to the shelf, so the priority is often building a minimal viable product as quickly as possible. While I don't advocate writing bad code, there are so many great refactoring tools available now that cleaning up the code later isn't as intimidating as it once was. So, for early-stage projects, I recommend focusing on getting the product out the door without worrying about future-proofing. After all, if the product doesn’t succeed, all that extra effort was wasted. That said, explaining this approach to management can be a challenge. I’ve found that construction analogies work well for this. For example, you can compare bad code to poorly placed electrical wiring: it works initially, but if you later need to add plumbing in the same spot, you’ll have to redo the wiring first. This helps non-technical stakeholders understand why cleanups like refactoring are necessary. Finally, when you have these conversations, it’s important to establish your role as the technical expert. A little humor helps: I like to tell management, 'I won’t mess with accounting or marketing if you don’t tell me how to architect software or write code!' It’s lighthearted but gets the point across.
I'm sub #953🎉
Sub #953? You're officially part of the cool kids club! Welcome!
100% true! Abstract base class with interface for data access for, you know, when you might change the DB. Because that happens all the time...
I know right? I can usually debunk that one just by looking at the queries and seeing if they use DB-specific functions ;-)
@@theseriouscto You have to be prepared. You might get a call at 3am and have to change the DB immediately :)
I've actually done that. One of our products used to sync through Dropbox, and we moved it to sync via S3. Because of the abstract base class, it was relatively non-invasive. That said, it's not a sure thing that the base class helped. I could just have changed all references from Class A to Class B, and implemented functions until it compiled again. And it was a one-time deal.
@@ivanmaglica264 Definitely, my DevOps team got those calls from clients all the time
@@HollywoodCameraWork If the base class didn't really help then it wasn't done right and didn't need to be done. Sounds like you deal with some strange issues.
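For readers following along, here is a rough sketch of the kind of seam this thread is describing: a tiny sync-backend interface so that swapping Dropbox for S3 touches one class. The names (SyncBackend, InMemoryBackend, SyncService) are invented, and the real provider SDK calls are deliberately omitted.

```java
import java.util.HashMap;
import java.util.Map;

// The seam: the rest of the app only sees this interface.
interface SyncBackend {
    void upload(String path, byte[] data);
    byte[] download(String path);
}

// Stand-in implementation; a DropboxBackend or S3Backend would wrap the provider SDK
// behind the same two methods (provider calls omitted in this sketch).
class InMemoryBackend implements SyncBackend {
    private final Map<String, byte[]> store = new HashMap<>();

    @Override public void upload(String path, byte[] data) { store.put(path, data); }
    @Override public byte[] download(String path) { return store.get(path); }
}

class SyncService {
    private final SyncBackend backend; // swapping Dropbox for S3 means swapping this dependency only

    SyncService(SyncBackend backend) { this.backend = backend; }

    void backup(String path, byte[] data) { backend.upload(path, data); }
}
```

Whether the abstraction pays for itself is exactly the judgment call discussed above; if the switch only ever happens once, a direct rename-and-recompile can be just as cheap.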
I think there is a very distinct difference between preparing for a future and keeping the door open for change. The experience of refactoring creates the ability to estimate which things will be a pain in the ass to change later, so while creating you can avoid these pain points. I've been at my company for a really long time, so I had the experience of changing what I had done years later. I had to dogfood my own smart ideas, and that transforms what you see as "smart", imo. I know that in other places of the world it is normal/recommended to change employers a lot (the avg. Google engineer stays for 1.5 years, I read), so it likely won't be you who makes the change. So I would assume that in those situations, the reaction to a very complex system prepared for a future is not "wow, there is so much there already" but rather "wow, that's complex. Burn it down, make it new."
very clear and concise explanation 🙏
Clear and concise? Looks like I’m on a roll! Next up: making spaghetti code look like gourmet cuisine!
This is the right level of inappropriateness to use when explaining 'inappropriate intimacy' ❤😂
Inappropriate intimacy? Sounds like a coding class I’d sign up for! Just remember, keep your code clean and your jokes cleaner!
thank you! very interesting and helpful. regards from a system engineering student
You are very welcome. Regards from a grey beard 😄
That's a nice video about responsibility and ownership. Being a programmer, I think communication in a lot of cases is even more important than code. Edit: Subscribed :)
Thanks for the positive feedback and welcome aboard. Let me know if there’s anything specific you’d like to hear about.
Another example of how being a SE has little to do with how you code, and more to do with how you manage appearances to everyone else. A poor coder could do what you say; he just needs to be chatting all the time about what he and everybody else is doing, using the right jargon about trust and communication.
Totally agree, but it's equally true for many other professions. I think a simple Slack channel for these kinds of announcements is best, as it will minimize the chatting and distractions. The higher-ups can monitor it and push out the message when necessary.
As a CTO, you'd be responsible for the architecture..., so whilst a rewrite is even more laborious, implementing additional functionality thereafter should be more economical. Could outsource once.
99% agree, the 1% would be the details and scope of the rewrite
If only common sense were common.
I know right? Common sense, as it turns out, is not that common, it's rare. Should we call it rare sense?
A - how do you have only 864 subs? b - you need a white board :)
A) What a coincidence, I was asking myself that very question! B) Thought about it, I think you're the 2nd person to suggest this to me through comments - Problem is my recording studio prevents me from having a large one... but I've been thinking about alternatives like an iPad and a Pen... You think that would fit the bill?
@@theseriouscto I think it removed my comment from earlier... either way, any visual aid like a diagram or iPad will work just as well if not better.
@@niftylius Ok, let me see what I can do - thanks!
Cool video, keep it up! But please show code snippets for a longer time :)
Been thinking about that, and also about making them available through GitHub - what would be your personal preference?
Yes, class diagrams. Pretty pictures of a wonderful system that doesn't exist. In reality, diagrams like these aren't referenced, become stale, and are only something an architect who's completely removed from delivery responsibilities might care about. Design your classes with TDD; humans are notoriously bad at modelling software with boxes.
Thanks for the comment! I appreciate your perspective-UML and class diagrams certainly have their critics, and there’s no one-size-fits-all approach to software design. You’re right that some diagrams can become stale or irrelevant if they’re not updated. That’s why, in my experience, they work best as lightweight planning tools for aligning teams early on, especially in larger projects. They’re not meant to replace iterative practices like TDD but to complement them when mapping out complex systems. As for the 'wonderful system that doesn’t exist,' I can see where you’re coming from. But in real-world scenarios, like when I sold software to a major automotive manufacturer, UML diagrams were a necessary part of the process (big-company vibes, right?). We only used them for critical areas, and they added just enough clarity to get everyone on the same page without over-engineering. Ultimately, whether it’s UML, TDD, or any other methodology, the key is flexibility-using the right tool for the right job. Dogma rarely works in software, and I’d never recommend a one-size-fits-all solution. What works for one team or project might not work for another, and that’s perfectly okay.
Nice reply, probably better than my sarcasm warranted 😂 agreed completely!
Yes, sir. You've got my sub
Awesome, let me know if there’s any specific subjects you would like me to pick
@@theseriouscto Tech zoo - having too many languages and methodologies in the same project.
@@niftylius Love it, definitely putting on my list
I love how the software industry just keeps inventing names for old concepts. Function pointers, virtual tables, and setter functions are now called "Inversion of Control". Just like the Internet (dumb terminal mainframe/server) is now called "Cloud", lol!
Haha, I get where you’re coming from! The software industry does have a knack for rebranding concepts, often to make them easier to communicate or to highlight their broader applications in new contexts. You’re absolutely right that the core idea behind Inversion of Control has been around for a while-function pointers, virtual tables, and callback functions are great examples of similar concepts. What IoC does differently, though, is take these principles and organize them into a systematic approach that scales well in large applications. It’s not just about function calls but about managing object lifecycles and dependencies at an architectural level. As for the “Cloud” comparison-spot on! It’s the same infrastructure idea rebranded with a buzzword. 😂 But hey, if these new names help more people understand and adopt useful ideas, maybe it’s not such a bad thing! Glad you’re seeing through the jargon, though-it’s always good to remember the roots of these concepts. Thanks for sharing your perspective! 😊
This video has too many repetitive metaphors. Explain it, one metaphor, then demonstrate! Show us a bad example and then refactor it! Cuz now this is like a programming-for-managers video.
Thanks for the feedback! 🎉 You’re absolutely right-sometimes too many metaphors can water down the technical depth, especially if you’re here for hands-on coding insights. The goal was to make IoC accessible to a wide range of viewers, but I see how it leaned more toward explanation than demonstration.

Here’s what I’ll do for future videos (and maybe even update this one!):
1. **Trim the Metaphors:** Stick to *one clear analogy* to explain the concept, then dive into a real-world coding example.
2. **Bad-to-Good Workflow:** Show a messy, tightly coupled code example (bad), explain its issues, and then refactor it using IoC principles (good).
3. **Balance the Audience:** I’ll make sure the video stays technical enough for developers but still approachable for non-programmers.

In fact, since you mentioned it, here’s a quick **example** I’d include in an update:

**Bad Example (Tightly Coupled Code):**
```java
public class UserService {
    private UserRepository userRepository;

    public UserService() {
        this.userRepository = new UserRepository(); // Hardcoded dependency
    }

    public void saveUser(User user) {
        userRepository.save(user);
    }
}
```
**Why It’s Bad:** The `UserService` depends on the specific implementation of `UserRepository`. If `UserRepository` changes (e.g., switching to a caching layer), you’d have to modify `UserService`, breaking modularity.

**Refactored with IoC (Dependency Injection):**
```java
public class UserService {
    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository; // Dependency injected
    }

    public void saveUser(User user) {
        userRepository.save(user);
    }
}
```
**What Changed:** Now the `UserService` doesn’t care how the `UserRepository` is implemented-it just needs the interface. This makes the code more testable and flexible.

Let me know if this kind of example would resonate better. Thanks for helping me improve the channel-your input is gold! 💡
@@theseriouscto Hey! Nice to see that you take the feedback seriously! I think the example muddies the distinction between IoC and DI. For the latter you can just point people to the Code Aesthetic video. That's the gold-standard coding video, so I'd recommend not competing there but making a complementary video about different examples of IoC, and then just pointing to CA's video for those interested. Best of luck!
I wish I understood this better. Lots of tantalizing metaphors, but I'm just not seeing it.
Thanks for your honest feedback-it’s super helpful! 😊 It sounds like the metaphors were intriguing but didn’t quite connect the dots for you. Let me break it down a bit more plainly: Inversion of Control (IoC) is about shifting the responsibility for creating and managing dependencies away from your code and into a central system, like a framework or container. Instead of your class saying, “I need this tool, let me go build it,” it says, “I need this tool-someone else will hand it to me.” For example, imagine you're building a house. Normally, you’d hire a plumber, an electrician, and a carpenter yourself (this is like your class creating its own dependencies). IoC is like hiring a project manager who brings in the right specialists for you. You just focus on designing the house (your app logic). If you’re curious, I’d recommend checking out our videos on **SOLID Principles** and **Shotgun Surgery**-they explore related ideas in a more step-by-step way that might make IoC click for you! Let me know if you’d like a deeper dive or more examples! I'm here to help. 🙌
@@theseriouscto Yeah, another metaphor isn't helping here. You're bringing coals to Newcastle. What I don't understand is how this differs from what I do already in, say, JS and Java by listing out imports at the top of the file (which is what it sounds like IoC would change), and what difference it would make for programming in those languages.
Great question-let’s break this down!

IoC isn’t about changing how you handle imports at the top of the file. It’s about shifting the responsibility of creating and managing dependencies away from your code and into something external, like an IoC container. Here’s what this looks like in practice:

### **Manual Wiring vs. IoC**

**Without IoC (Manual Setup):**
```java
public class UserService {
    private UserRepository userRepository;

    public UserService() {
        this.userRepository = new UserRepository(); // You create it manually
    }
}
```
Here, `UserService` determines *how* `UserRepository` is created and is tied to it. If you need to switch to a `MockUserRepository` for testing, you’d have to change this code.

**With IoC (Dependency Injection):**
```java
public class UserService {
    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) { // IoC container injects it
        this.userRepository = userRepository;
    }
}
```
Now, the IoC container handles creating `UserRepository` and injecting it into `UserService`. Want to swap `UserRepository` for a mock? Update the configuration, not the code.

### **Why This Matters**
IoC isn’t just about avoiding manual wiring-it’s about making dependency management **external and flexible**. Instead of hardcoding object creation, IoC lets you decide via configuration, annotations, or environment settings.

**Example with Configuration:**
```java
@Configuration
public class AppConfig {
    @Bean
    public UserRepository userRepository() {
        return new SqlUserRepository();
    }
}
```
This way, your code doesn’t care *which* implementation of `UserRepository` it’s using. Testing? Use `InMemoryUserRepository`. Production? Use `SqlUserRepository`. The system is modular, scalable, and easier to test.

### **What About JS?**
In JavaScript, you might use frameworks like Angular or NestJS to leverage IoC. Even in React, hooks like `useContext` mimic some IoC principles by centralizing dependency management.

The real benefit is **decoupling your app’s logic from its plumbing.** Your objects stop worrying about *how* to get their tools and focus only on using them.

Let me know if this clears things up or if you’d like a deeper dive into any part! 🚀
If you are the one who has to write all the code in the end, then no matter what coding strategy you use, you just move code from one place to another, but in the end you write the code and you write the same amount of code. Sure, some concepts are conceptually cleaner than others, but whether I implement the code to use a payment service in my order class, or whether my order class defines a payment interface and some payment service wrapper then implements that interface, I write the same amount of code in both cases: just once in the order class and once in a payment wrapper implementation. And when I need to switch to a different payment service, it also makes no difference whether I have to rewrite the code that uses the payment interface in my order class or whether I have to re-implement the payment interface in a new payment service wrapper for the new payment service; it's again the same code and the same amount of code. These strategies only make you write less code if you can push that work to someone else, like I define the interface but someone else must implement it. Of course, the strategy will make a difference if I need to support more payment services at the very same time in the future, but if that is a requirement, developers would naturally not use a single service directly in the order class and would use an abstraction layer to begin with.
I agree with you on the quantity-of-code part, however... Ever consider how computers work? Say, with all those different types of printers? Ever hear of drivers? Flexibility, scalability and maintainability are the 3 key advantages. I wouldn't use this approach to press fewer keystrokes on the keyboard, but to make my system flexible, extensible and/or configurable without needing to recompile. No?
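To put the "configurable without needing to recompile" point into code (a rough sketch with invented class names, not anything from the video): the concrete PaymentService is chosen by name at runtime, driver-style.

```java
// The abstraction the rest of the code depends on.
interface PaymentService {
    void charge(String orderId, double amount);
}

// One of possibly many implementations; others could be added without touching callers.
class ConsolePaymentService implements PaymentService {
    @Override public void charge(String orderId, double amount) {
        System.out.println("charging " + amount + " for " + orderId);
    }
}

public class PaymentDemo {
    public static void main(String[] args) throws Exception {
        // In real life the class name would come from a properties file or environment
        // variable; switching providers becomes a config change, not a recompile.
        String implName = System.getProperty("payment.impl", "ConsolePaymentService");
        PaymentService service = (PaymentService) Class.forName(implName)
                .getDeclaredConstructor()
                .newInstance();
        service.charge("order-42", 19.99);
    }
}
```

(The sketch assumes the classes live in the default package so the simple name resolves; a real setup would use fully qualified names.)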
Inversion of control is not dependency injection. And dependency injection is not an example of inversion of control. If you want an example of inversion of control that USES dependency injection, it could be this:

```kotlin
interface Service {
    fun doAction()
}

class SomeManager(private val injectedServices: List<Service>) {
    fun doActionForAllServices() {
        injectedServices.forEach { it.doAction() }
    }
}
```

Here the `SomeManager` class does not know about implementations of the `Service` interface. It does not CONTROL which implementations are going to be called when `doActionForAllServices` is executed. Instead, an independent `MyServiceImpl` can decide that it will change how `SomeManager` works by implementing the `Service` interface.
IoC is the what, not the how. It’s the principle. One popular way to implement IoC is through Dependency Injection (DI).
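To underline that distinction with a sketch (invented class names, no framework, and no DI container at all): inversion of control via a Template Method style base class. The framework-owned handle() decides when your code runs; dependency injection would just be one other way of achieving the same flip in who calls whom.

```java
// The "framework" class owns the control flow and calls back into user code;
// that call-direction flip is the inversion of control.
abstract class RequestHandler {
    public final void handle(String request) {
        authenticate(request);
        process(request);   // user-supplied step, invoked by the framework
        log(request);
    }

    private void authenticate(String request) { /* framework-owned step */ }

    private void log(String request) { System.out.println("handled: " + request); }

    // The only thing application code supplies:
    protected abstract void process(String request);
}

class OrderHandler extends RequestHandler {
    @Override
    protected void process(String request) {
        System.out.println("processing " + request);
    }
}

public class IocDemo {
    public static void main(String[] args) {
        new OrderHandler().handle("order-42"); // the framework calls your code, not the other way around
    }
}
```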