The problem with this is that these guys are system-level programmers (compilers & kernels). They don't build on top of large frameworks built by others that may not behave as expected or may contain bugs. That's probably why you think John Carmack has a different view, and rightly so. The problem with things like printf and alert is that you have to change your code (and not forget to change it back!) and you have to already know in advance what you want printed. Sometimes, especially with framework objects that have so many fields, you just want to look around at a specific breakpoint (again, not necessarily something these guys need). I don't think people step through all their code that often. We tend to set only 1 or 2 breakpoints.
True. When I saw how tsoding logged his code, he always made sure to mention that reducing the redundancies is a must, or else you will update some part of your code and end up forgetting to update another.
Well, developers who build on top of vast third-party "frameworks" are just doing it wrong. In the vast majority of cases, using a third-party dependency is a liability rather than a help, and can hardly be justified. The programming culture went down the gutter in recent decades, with developers thinking it's acceptable to drag in dozens or even hundreds of third-party dependencies for even the tiniest bit of functionality. It's all utterly disgusting. So, yes, attitude towards debuggers is a good marker of which culture a programmer is aligned with, and yes, I'm using this as an interview question to filter out undesirables.
@@vitalyl1327 Vast frameworks and dozens of third-party dependencies are clearly not the same thing. I agree with the liability that comes with each third-party dependency (JavaScript is a good example), but I'm missing all nuance in your reply, such as reinventing the wheel, or code from other employees being similar to third-party code. Limiting the use of third-party libraries seems more suitable as a company policy than as interview criteria. There is some irony in all this, with how these system-level programmers expect us to use their stuff. How do you feel about candidates who use things like printf and don't know how to use a debugger? I think it's important to use the right tool for the job. If you know more tools, then that's better. Sometimes it's printf and sometimes the right tool happens to be the debugger.
Agree, I can't imagine anyone who would risk hiring a developer who doesn't use a debugger. He would be the cause of an enormous number of bugs (a liability), which will definitely lead to bad consequences for the team and the company.
There's one niche of software development which really *needs* debuggers, and that's embedded. The whole point of a lot of embedded systems is to interact with the real world and external peripherals, so being able to set a breakpoint and see if the serial transmission on a board you just designed works or not is invaluable. Oftentimes, the debugging isn't even to verify your software, but just to see if the software plays along with the hardware. Combine this with the fact that you might not have a printf implemented, you might not have enough memory for printf, or printf would severely impact your program execution (say, if you use semihosting) makes debuggers way more attractive. There are solutions like SEGGER RTT out there that work very well for giving you tools you're just 'used to have' on general purpose computer debugging - and it achieves this over a debug link. Saving logs directly to the memory of your chip, reading said memory while the processor does other shit through the debug interface.
I would generally disagree with this. I have just started a new job working with a transportation management system which monitors and controls the highways in massive states like Texas and Florida. The project has been active for almost 25 years. I don’t even know what I would do trying to keep track of what is going on without using a debugger with breakpoints and being able to see the callstack or see how certain methods change values. Trying to just put a bunch of print statements would be a nightmare. I could be misinterpreting what the author is saying, but I also think “don’t debug, just write better code” is not a good point? Like yeah, no shit, I would want to write better code. Too bad that the job I arrived at already had 3 million lines of code written.
Don't put a bunch of print statements. Instrument your code automatically (trivial to do with LLVM), use static analysis to extract call graphs, use profilers to see where most of the time is spent, etc. Interactive debugging is such a barbaric and unjustified approach when there's a lot of fine tools available.
@@xybersurfer where they don't exist, they can be created. I wrote a Fortran-77 static code analysis tool once, spent like 4-6 hours on it, and it was enough to build a call graph of a huge code base, and also to extract BNF syntax from a hand-written parser in this Fortran code. I would have spent months stepping through all of these in a debugger.
@@vitalyl1327 I don't think you can expect most programmers to also be compiler writers. But I like your attitude, and I would definitely be down for that personally, as I do have experience with BNF and writing parsers. It would actually have to pay off, though. I could see how some very specific bugs can be found using a profiler, but I have doubts about finding bugs using static analysis. How would that be different from theorem proving, which is extremely time-consuming?
Graphics debuggers (RenderDoc / NVIDIA Nsight) are incredibly complex and take about as much time to learn as Rust. However, fixing issues with zero error codes, zero logging possibilities, and thousands of threads with only color values to debug is just plain impossible. The debuggers are this complex because of how complex GPUs are. There's no way almost any modern video game or game engine would be made without using these.
I never had any good experience with Nsight. And it's not really a debugger, it's just a way to access performance counters, something akin to gperf for CPUs. My most fulfilling experience with debugging GPU code was when I had a complete RTL model of said GPU running in Verilator, and I could instrument the hardware as much as I liked to monitor code performance for my specific use cases. Obviously, if you're not the GPU vendor, this option is not for you...
I'm not a "always run your code in a debugger" person like Carmack, but debuggers are so important. If you're working in a lower level language and running in a debugger, you are going to catch your mistakes faster. In this way, you won't need to do in depth debugging later. Most of the time a debugger is gonna catch a silly mistake that saves you hours later. Sometimes stepping through line by line is necessary. I have had to do some deep debugging before and had to stare at disassembly and reference manuals about the instructions and registers trying to nail down a bug that was in the end caused by a compiler optimization. These tools are indispensable.
If you're working with a low-level language and rely on debugger to catch your mistakes, I don't want you anywhere near any mission-critical code. Not in automotive, not in medical robotics, not in aerospace, not in industrial automation. Your way is guaranteed to introduce a lot more bugs than it's possible to tolerate. Firstly, debuggers do not show you where the problem is. They let you see where the problem manifested. Especially if it's a memory-related problem, you can see its effects far away from the real cause. Then you'll find some hackish workaround and trot away happily, thinking you fixed the bug. The real developers never rely on debuggers. We'd rather stick to MISRA-C harsh rules, use all possible static analysis tools we can find, build extensive testing infrastructure, build zero-overhead logging wherever it's possible. Debuggers will never replace any of this, and will never be of any real added value when you should do all the above anyway.
@@vitalyl1327 It sounds like you're making the same assumption as this article that you can't both use a debugger AND follow best practices. Static analysis doesn't catch everything and can often yield false positives. The very example I gave was not a memory bug. It was a bug caused by compiler optimization as GCC can optimize out NULL checks if it thinks that it's safe to do so. I've come across other compiler bugs before too. Apart from that, the bottom line is that you're going to make mistakes. Your testing and static analysis won't always catch them. Test cases are often the result of previous failures. Knowing how to use a debugger is really important. The reality is that most people are not working on the type of mission critical code that you have outlined. When I was first learning to code, I figured things out with print statements and had no idea that I could set breakpoints and step through with a debugger. A crash was just a crash and I needed to figure out what I did wrong. A lot of new programmers go through that. Knowing how to use a debugger then would've saved me so much time. A lot of the best practices and static analysis tools that exist today, simply did not exist then either. When you're working on old codebases with millions of lines of existing code that you can't just throw away, knowing how to use a debugger is even more important.
@@Spirrwell Use a certified compiler (like CompCert) to make sure it does what it says on the tin. And, yes, I routinely find bugs in compilers. Not once did I have to use a debugger for this; it would not have been useful at all. And often you cannot even debug the compilers, e.g., I found a couple of bugs in CUDA - had to treat it as a black box and did not even attempt disassembling the compiler itself. Logging, instrumentation and fuzzing help. Learning to jump on a debugger as a first knee-jerk reaction to any bug is really a bad thing. I wish most programmers never knew debuggers exist. Learning the hard way is better. You did not waste time, you gained experience that early debugger users would never have had a chance to earn. As for the older code - I did not have to use debuggers when working on an ancient (even back then already ancient) Fortran 77 code base, with barely any static analysis and instrumentation tools available, so I'm not sure why I would need a debugger for modern MISRA-C loaded with tons of very powerful tooling. Better discipline trumps debuggers massively.
@@vitalyl1327 If you don't use debuggers you live in a fantastic unicorn land. As if static analysis could possibly catch all real-life run-time permutations. People like you mistake code readability and aesthetics for actual running software that actually works. The fact that you correlate the usage of a debugger with finding solutions that only fix the manifestation is proof of how dumb your argument is.
In my work we develop large monolithic applications. Debuggers are an invaluable tool, they are effectively a print statement of the entire state of the program at the execution of a certain line. I rarely step through programs but I regularly use breakpoints to understand what's really wrong with the current state.
I think anyone who has worked across multiple paradigms knows the reality: the utility of a debugger highly depends on the language and environment you’re writing code for. If you’re writing a self contained C++ executable, debuggers are incredibly useful and much faster than using printf. If you’re writing webapps in JS, then you’re probably better off with logging (since flying by the logs and metrics should be your bread and butter as a web dev regardless). Basically if there are multiple machines involved in code execution (Client-server systems, distributed systems, etc) your only practical option is to lean on logging and metrics, due to the constraints of the environment. Otherwise, learning how to use a debugger will pay dividends.
In the web dev world I've found that the easiest way to not have to resort to the debugger is to write your code in a more functional style where you don't bury side effects like database calls, rest calls or file readers deep down the stack in the context you're working in. If you're doing a rest service for example, don't bury a database write call 3 methods deep behind a conditional statement in a service called by another service. Keep all adapters as high up in the stack as possible and bury the functional code as deep as you can where it can be unit tested and ignored and keep the side effects up high where they can be easily observed and logged. It should never be difficult to detect when an external service or configuration item is not set or performing as expected.
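To make that concrete, here's a minimal sketch of the "side effects high, pure logic deep" idea. It's a generic Python illustration under assumed names (fetch_order, save_invoice, build_invoice are all made up), not anyone's actual service:

```python
# Hypothetical handler: every side effect (DB read/write, logging) sits at the
# top where it is easy to observe; the logic below it is pure and unit-testable.

def build_invoice(order: dict, tax_rate: float) -> dict:
    """Pure function: same input, same output, no I/O. Easy to test, no debugger needed."""
    subtotal = sum(item["price"] * item["qty"] for item in order["items"])
    return {"order_id": order["id"], "total": round(subtotal * (1 + tax_rate), 2)}

def handle_create_invoice(order_id: int, db, log) -> dict:
    """Imperative shell: the only place side effects happen, so it's trivial to log them."""
    order = db.fetch_order(order_id)               # side effect, visible and logged
    log.info("loaded order %s", order_id)
    invoice = build_invoice(order, tax_rate=0.20)  # pure core
    db.save_invoice(invoice)                       # side effect, visible and logged
    log.info("saved invoice for order %s", order_id)
    return invoice

# The pure part can be exercised with no infrastructure at all:
print(build_invoice({"id": 1, "items": [{"price": 10.0, "qty": 3}]}, tax_rate=0.20))
```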
I think people who are against debuggers never really understood the point of debuggers. And I don't blame them, the name is stupid. It doesn't debug your code, it just slows it down. Having robust debug level logging will likely remove the need for a lot of "debugger" work, almost all. However there are times you need to get into the code line by line to see the unexpected logic. But if you are writing multiple console.log or print statements each time you encounter a bug, setting up a debugger will be better in the long term. (You should have good debug logging too ofc)
I find debuggers great for memory issues: no need for breakpoints, just wait for the segfault and walk the stack upwards. But generally I try everything else first, which means I have to keep gdb's documentation handy, because I never learn it by heart.
04:56 completely agree. I work in embedded software development, and you do sometimes need to look at what the state of the peripheral registers is at a certain point of the program execution, and printf debugging just isn't good enough for that, because printfs _will_ affect your program's execution flow, especially when working with a project that uses an operating system.
For me, a debugger is great when you're in a part of a code base that is legacy or maintained exclusively by the CTO who only force pushes to master and hasn't written a test since starting his own company. Ain't nobody got time to write all of the print statements necessary to understand whatever the CTO writes.
I have tried to use printf debugging/logging on embedded systems. It worked about as well as you'd expect: when a segfault happens, the UART peripheral clock stops, so the print statements would give misleading information about the control flow. I literally could not live without a debugger these days.
I don't find myself needing to use the stepping or breakpoint function of the debugger very often, but whenever I do end up reaching for it I always curse myself for spending the last 15-30 minutes using something else lol. The ability to inspect local state when the debugger stops on an uncaught error is super nice whenever I get one of those; I'm always appreciative of that.
I research malware classification. Everything I write is in Python, mostly prepping data and training DNNs. My general approach is #1) writing simple, reliable, and modular code, #2) pylint to catch type errors and other dumb shit, #3) logging, #4) debugger. 95% of my bugs can be caught/solved with the first three techniques, but every once in a while, something happens that I SWEAR I would never be able to solve without a debugger.
The value of debuggers highly depends on the types of projects you're working on, how experienced you are with the code base, and how much testing is in place. If you're working on a web API, you have tracing everywhere because you're logging basically everything for security, auditing, and metrics. If you've spent the majority of your career working on one product's code base, you can often intuit the bug source and systemic design flaws from the ticket alone. If your app is microservice-based or otherwise highly decentralized, there's no debugger that is going to support all of your programming environments as well as the intercommunication protocols.
Debuggers are also helpful when you are fixing bugs in a new/unfamiliar codebase. You identify the problem by checking the last log before things broke. Look for that log in the codebase. Start a debugger and run it locally, check the values and fix. Doesn't fix concurrency issues or deployment config issues. Logs work best there.
Learning gdb scripting to iterate a linked list in C was magical. At first I saw debuggers as a simple tool to watch programs execute step by step, but with simple scripting you can learn much more about the state of your program at any moment.
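For anyone curious what that kind of scripting can look like, here's a small sketch using gdb's Python API (one of several ways to script gdb). The struct layout and the `head` variable are assumptions for the example, not taken from the comment above:

```python
# Load inside gdb with `source walklist.py`, then run `walklist` (or `walklist mylist`).
# Assumes a C list roughly like: struct node { int value; struct node *next; };
import gdb

class WalkList(gdb.Command):
    """walklist [EXPR]: print every value of the linked list starting at EXPR (default: head)."""

    def __init__(self):
        super().__init__("walklist", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        node = gdb.parse_and_eval(arg or "head")   # gdb.Value of type `struct node *`
        i = 0
        while int(node) != 0:                      # stop at NULL
            n = node.dereference()
            print(f"[{i}] value = {int(n['value'])}")
            node = n["next"]
            i += 1

WalkList()  # instantiating the class registers the command with gdb
```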
A good debugger is a complex tool, and you do have to invest time in getting to really know how to use one. Doubly so for your more complex concurrent/multi-threaded applications: how to get a breakpoint to only trigger on the thread you want, etc. And just like printfing, you can easily wind up needing to write code to control the debugger, just like you write code to parse the printf statements. I'm not saying one is better than the other; I use both depending on uh. err. DX. With that being said, a simple visual debugger is still a good tool for people learning to code. Are there any that really show people how C code* executes, builds up the stack, and uses the heap? *Doesn't have to be C.
Yeah, it's a skill like anything else. I've spent much time teaching people how to use a debugger to help track down things, people just don't know how to use it, even if they programmed a lot. Of course you could do without, but sometimes it can help a lot.
I love to use debuggers when encountering errors, especially when it comes to null pointer exceptions. When a function takes multiple arguments it’s nice to immediately see which variable was null and then find quickly where the “nullification” happens.
For frontend development, I absolutely use the JavaScript debugger, especially when cleaning up spaghetti. Writing stuff from scratch, or places where I reasonably know the codebase, a sort of pseudo TDD, where you write unit tests till they fail works pretty well for me, plus more tests are always good, if writing them doesn't take a lot of time from you.
Whenever there are 3 tools in a toolbox for a purpose, tool #1 will not be the most used for someones out there and someone of those someones will write a blogpost stating that they NEVER use tool #1 including a segment about how they only /rarely/ use tool #1. Then someone else who also works in a place where tool #1 is not an absolute requirement ( ...or never spent the time learning to use tool #1 ) agrees and spreads that you should NEVER use tool #1 and then we got a new entry of common knowledge in the "real programmers do/don't do X" -list on the internet.
Another thing I like to use the debugger for in the not-so-high-performance space is when I suspect unwanted race conditions, or on async stuff with unexpected side effects. It's kinda fun to set breakpoints in a way that artificially slows down execution at specific moments and see how that changes behavior. I rarely start out using a debugger; it's more for experimenting, when I'm not sure what I'm dealing with. Logging is much faster and more reliable when I already have a good idea where the problem is.
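A toy illustration of why that works: pausing one thread mid-update (which is what sitting on a breakpoint does) widens the race window until the bug reproduces almost every run. This is just a generic Python sketch of the effect, not anything from the comment above:

```python
import threading
import time

counter = 0

def add_many(n: int, pause: float = 0.0) -> None:
    global counter
    for _ in range(n):
        value = counter        # read
        if pause:
            time.sleep(pause)  # artificial stall, like stopping on a breakpoint here
        counter = value + 1    # write back -> lost updates when threads interleave

threads = [threading.Thread(target=add_many, args=(1000, 0.0001)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"expected 2000, got {counter}")  # with the pause, usually far less than 2000
```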
I don't think "never use a debugger" is a very defensible position across all domains. Obviously there are problem spaces where debuggers won't work... but blanket forsaking a tool, especially one as powerful as a modern debugger, is a great way to waste hours of time. It's really hard to beat the ability to jump into the process at a point in time and examine the state of literally everything. Learn your tools, kids... even if you don't use them often, you'll want to know them.
A debugger is a specific tool. You don't need a debugger when you're writing simple code or integrations, but you need one when you want to dig into type system stuff or do reverse engineering.
I am Team Carmack; debuggers all the way. Kernighan's take is the hottest: "the most effective debugging tool is still careful thought,..." I don't care how careful your thought is. The debugger will tell you how it is, not how you assume it is.
And maybe the above is my juniority speaking, but only time will tell. At least I'm not falling for any "oh, but these well known people don't use it, therefore it's bad to use one" nonsense. If you want to use one, do so. If you don't, you've tried it, and found it to not work for you, that's completely fine by me.
the thing I use the debugger the most for is because by hitting breakpoints, I can navigate the call stack very fast. it can help to quickly understand relationships & concepts in complex projects.
Debugger, tests, strong typing, logging etc. are all different tools for different purposes for different situation. Comparing them to ask "Which is better?" is like comparing a race car and a tractor and ask "Which is better?".
I don't believe this article. When you find a bug, the very first goal should be to fix the bug, not to imagine how the program can be better with some form of refactoring (which would take a ton of man-hours because the program isn't owned by just an individual, but a team.). Also, identifying the cause of the bug should give much more insight on how the program can be refactored. Regarding how a debugger doesn't scale, I'm not sure if I'm following. When a bug occurs in a production software and you can't immediately tell why, I think the very first step is to reproduce the bug locally. If it's web, spin up a local server, etc. etc.. If you are able to reproduce the bug locally, that means there is some process with the bug in your local machine. You can start setting breakpoints with the debugger.
Lol. Good luck "setting breakpoints" if a few milliseconds disruption breaks the logic of the code completely. Guess debuggers are only useful for toy code like web and such...
@@metaltyphoon more like you don't have any idea. Let me guess, you're some kind of a web code monkey? Think of a case when your code runs on multiple MCUs, all communicate via various media, like RS485, I2C, CAN, etc. Protocols are time-sensitive. All MCUs workload is hard real-time. All logic depend on properly processing the data external to your system, that you have no control over. Now, good luck debugging any of this with your puny toy approaches, like stop and step through.
I use a debugger probably twice per week, for embedded development. Printing is out of the question for anything at all frequent, because the UART serial console is slow. We do logging at every function entry and exit, which is very helpful, but limited. I had to debug a bug in the driver we got from the manufacturer and that would’ve been brutal without a proper debugger.
"Debugger" is such a bad name for what it actually is. It just lets you see your bugs happen in slo-mo; it doesn't actually remove them. And sometimes bugs should just stay there to be admired, you know? Software is an art.
Just recently, I needed a debugger for some Java code because I was helping someone who was getting a null exception. The messages didn't make clear to me what was actually null and made no sense until I had stepped through the code and also ran evaluations. Another instance where I needed it was when I was doing upgrades and libraries died in strange ways. I can't print in libraries.... Like... I just think the writer writes code that doesn't necessarily need the debugger. Edit: Modern debuggers do handle multiple threads, by the way....
There are definitely scenarios, when working with extremely large datasets that are being passed through extremely large, messy code bases, where you aren't going to be able to rely on a debugger to find a problem that is occurring. A debugger definitely has its usefulness and its place though.
I've been using debuggers more and more lately... from fixing data race conditions in python, to porting complex intertwined grpc projects. It's definitely an irreplaceable tool in some scenarios.
I use debuggers in a way that scales: Every component is small, and has well-defined input and output, and every component knows how to test itself. This is a situation where all the great minds mentioned agree that debuggers can work.
I use debugger all the time to do reverse engineering. It becomes wholesome when I am reversing some malware that was written in C++ or some other OOP language and I am reading Assembly for hours and hours. I feel like looking for a shortcut between the roof and the ground without taking the stairs. Debugger is useful, for malware.
If it took them a decade to learn how to produce reliable software and they don't use a debugger... frankly, 47 seconds in and I know I have no interest in what this person has to say.
Debuggers, used with breakpoints, are great in web dev for inspecting an API or an unknown data source to see what the internal representation is. It can save hours or even days trying to figure out someone else's output which is an input to your work
I can't imagine development without debuggers; you would have to keep track of so many things, both inside a program flow and the program logic in general, by yourself. It is like trying to write complex software using asm...
With a debugger I like to view the values of several stack frames, rather than just for the method I'm in. Granted, you could just add more logging, but sometimes it goes into third party libraries/frameworks.
To me, the most amazing thing about this article is how little people know about "modern IDEs" and their origin. ALL of them have their origin in Smalltalk-80 and the Lisp Machines of the 1980s. Then everyone else copied those environments and called them IDEs, and they haven't changed a bit since the 1980s, so there is really no distinction between old and modern IDEs.
I hardly ever use a debugger; I also mainly print. But there are cases, especially in assembly, where I use them more frequently, because it's not trivial to print something there, and often a little mistake is harder to trace because you forgot to push a register onto the stack and it gets clobbered. Or an off-by-one in loops. But also in reverse engineering (cracking) a debugger is invaluable, to set a trap on an address to see when something is called.
Debuggers are meant to be used in two ways: 1. online => by attaching to the PID of a process and, when the program crashes, inspecting the frames/stack. 2. offline => by dumping memory and process state into a core dump file. The OBVIOUS use case is not that useful in enterprise/large scale/more-than-one-thread/video game applications.
I don't really understand what their argument against it is, but my projects are usually small to medium C programs (15 files max, maybe a few thousand lines). Well-written print statements do unironically work, but I like the ability to change values and peek at memory that a good debugger (like CLion's) provides. For example, it makes it easy to check if a pointer got filled with valid data. I use an awesome logging macro that auto-fills the function name and color codes it before printing the message. It makes errors (red) and warnings (yellow) stand out in the output. Since my function names include the name of the source file, it's always easy to track down where the error happened. For a simple example, these might be printed with the function names in red: texture_load(): Image width is an invalid value 0 texture_load(): Failed to parse DDS header for file "door.dds" entity_create(): Failed to load textures for the new entity. Freeing the other assets from the entity pool. I know the error happened in "texture.c" in the texture_load() function and I can simply search for these messages. Then maybe texture_load() could add an extra line in blue that displays the values of the rest of the DDS header struct.
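The comment above describes a C macro, but the same idea translates to most ecosystems. As a rough Python analogue (not the commenter's macro), a logging formatter can inject the function name and color errors red and warnings yellow via ANSI escapes (assuming a terminal that supports them):

```python
import logging

COLORS = {logging.ERROR: "\033[31m", logging.WARNING: "\033[33m"}  # red, yellow
RESET = "\033[0m"

class ColorFormatter(logging.Formatter):
    def format(self, record):
        # Prefix the originating function name and wrap the message in a level color.
        color = COLORS.get(record.levelno, "")
        record.msg = f"{color}{record.funcName}(): {record.msg}{RESET}"
        return super().format(record)

handler = logging.StreamHandler()
handler.setFormatter(ColorFormatter("%(message)s"))
logging.basicConfig(level=logging.DEBUG, handlers=[handler])
log = logging.getLogger(__name__)

def texture_load(path: str) -> None:
    log.error('Failed to parse DDS header for file "%s"', path)  # prints in red

texture_load("door.dds")
```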
I'm a fullstack web developer and a debugger is one of my favorite tools I know. I'd say I use it 2-3 times a week. I'll use the tool that is fastest, sometimes a print statement is fastest, sometimes the debugger is fastest. Sometimes I don't know going in. I'll throw in a print statement and I go "huh, how on Earth did we possibly get here" then I'll switch gears to a debugger.
I totally agree with debuggers being a bit flaky and not working sometimes. The most annoying problems have usually been that I left optimizations on. The best debugging experience I have had is in assembly; that just works.
6:20 Currently, I work at a company where the two projects that we have rely on databases with some tables exceeding billions or even tens of billions of records. The trillions, though, sound very hard to reach, and can only exist on very specific platforms that have gained enough traction, like Discord, for example, which has well surpassed trillions of sent messages.
I use logs to home in on where the bug is, then write a test that replicates the bug. Then I debug the test to see what happens, and that generally solves things for me really quickly.
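As a sketch of that workflow (hypothetical names, assuming pytest): the logs point at a function, the bug becomes a failing test, and the test gets debugged directly.

```python
def parse_price(raw: str) -> int:
    """Convert a price string like '1,299' to cents. Deliberately buggy for the example."""
    return int(raw) * 100  # blows up on the thousands separator


def test_parse_price_with_thousands_separator():
    # Reproduces the bug the logs pointed at; stays around afterwards as a regression test.
    assert parse_price("1,299") == 129_900
```

Running `pytest --pdb` drops into the debugger right where the test fails, and the test keeps its value after the fix.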
8:35 : Absolutely! It feels like a hot take. Guys like this are the first to cry "We must rewrite everything." I hate that the author mixes up using a "debugger" with "debugging" code. The debugger is not the thing that fixes the code or forces you to solve an issue locally. It's just a tool that helps you understand what is happening at runtime. Does he expect his electrician to not use a multimeter? Are car workshops not allowed to use analyzers to find out what is wrong with a car's board computer? Maybe he prefers to pay for the extra hours that service workers would need without these tools. Or he gets paid by the hour himself; therefore writing all those print statements and cleaning them up afterward may provide more value to him. 10:45 : JavaScript with all its APIs and all the asynchronous services is not a good fit for a debugger, but even there, it has its uses.
When I write C++ I use debugger so that when the program crashes I can see what the wrong value was that caused the crash. I can't know in advance that the program would crash there so logging only helps once I know roughly what the problem is. Otherwise I would have to log literally everything just in case I need the info. But I rarely use breakpoints and never step line by line (I don't think anyone does that). And I've also worked on large projects that had networking and lots of timing calculations that can't account for the program freezing. In which case the debugger is useless.
@@Женя-и1л3е in your case I would rather check my math separately from the code. So then if there is a bug I would know the problem is that I didn't make my code do the same as my math. Which is a much easier thing to debug.
Fun fact: you can have a debugger run until it hits breakpoint(s), or until a variable changes; you don't need to "go line by line checking every line". Better yet, if you are already logging, you might have an idea about where the code is "misbehaving", and can jump to that part as mentioned in the last sentence (or by some other means), and get the best of both worlds. Another fun fact: if you think you know where the problem is, and how to fix it, you can do that without debugging; nobody is sitting behind you forcing you to use a debugger. Sidenote, but hackers use crash dumps and logs to assess how programs function and determine if they can gain access to your device/system/service. If you really log *everything*, it just makes things easier for them. Staunchly refusing to use a potentially useful tool because "in your opinion" it makes worse coders is simply being elitist for the sake of feeling superior to others.
I guess most of the time when I am debugging, I am speculating on a few things that could have gone wrong and going through and trying to confirm one. This involves "testing" for various conditions, so it might make sense to kill two birds with one stone and just write these as tests that you keep around. That said, tests just catch something going wrong, not necessarily why it went wrong. But if you're debugging the sort of issue that caused a crash, the crash already told you something went wrong, so I'm not sure what you gain. I don't know, it's probably worth keeping in mind: some ad hoc test you end up coming up with may generalise well and may be worth immortalising, and if you already wrote it while debugging, what's the harm in keeping it... Sometimes you just have no idea what could have gone wrong, and I guess this is when you inspect state with a debugger, but that probably indicates a lack of familiarity with what is going on, though maybe that can't be helped when you're working with other people's code and there's bad readability/documentation.
It really depends on the project you are working on and how familiar you are with it. If I gave these "geniuses" the projects I am working on, I would like to see what they could do without a debugger.
Here's my thing with debuggers. I normally don't use them in my own code. However, when I'm at work, I'm (VERY) often working in areas of code that I have no clue how they work. I typically never step through line-by-line or anything though. I'll find the file that seems to be the victim, set a breakpoint where I think the bug is occurring, run the code, and I can immediately see every value in the current context, calling contexts, etc. I can not just see "oh, the value is null so it's going to take the following paths", but I can work my way up the stack trace and see WHY the value is null. And if it's on the frontend, I don't even need to wait to build my code before doing this, I can just do it in the browser.
I definitely prefer a debugger. I did logging for the first 5 years of my career, then learned to use a debugger. When I'm going through unfamiliar code, I really like stepping through and seeing where things are called and what values they hold. I've found I understand the code better and I'm able to find solutions faster.
I'm pretty sure Carmack has talked somewhere about the value of "stepping through a frame" of a game, but I do think it was a while ago, maybe doom 3 days, so it would be interesting to see if the increase in complexity and in the use of multi-threads would have changed his views on that
Debuggers are useful to me when the bug seems like it might be state-related and I'm having a hard time tracking down where the bug actually starts and/or where the bad state is introduced.
Earlier in my career I used to write bad code and hence I had to debug it a lot. It was necessary but very time consuming. If you write good code, you end up spending less time debugging it.
When someone says they don't like debuggers, it's one of two things: 1. Their environment doesn't have a good debugger. 2. They don't know how to use the debugger.
"In Smalltalk, everything happens somewhere else." -- Adele Goldberg. The same holds for all object-oriented or asynchronous or distributed programs, so practically every modern program. That's why debugging line-by-line is hopeless.
I'm a slow thinker and can't hold too many things in my head at once. Debugging is incredibly useful for me for both initial development or bug squashing
This is really where you break the problem down into smaller steps. Yeah it can create a performance hit, but it makes understanding and verifying the code so much easier regardless of debugging method. Single lining everything makes code difficult to debug.
Reverse engineering and exploit development would be so much harder without debuggers. Or even developing in an unfamiliar environment that has no docs like when writing custom Slither detectors just to see the complex object structures at runtime is immensely useful. Debuggers are awesome.
Yeah, for reverse engineering I rely on a debugger for sure. During development I hardly do; a print will do fine in most cases, except in assembly, where I tend to use a debugger because it's not trivial in my assembly projects to log to the screen.
I can't imagine going through even the 100k line code I'm working on at work right now without stepping through some code. Even if it's just to understand relationships between things. I've also had some crazy problems with unit tests failing because the freaking database randomly starts a new test class with different data.
I have noticed that in the Rust backend I currently work on, I usually don't need a debugger. I have eprintln on almost all error cases, so it's going to print out something if it fails. And leaving unwraps in is pretty close to littering breakpoints 😂
Debuggers are great when combined with unit tests. But I agree that you usually do not want to debug a million lines long program with a traditional debugger.
Back in the day... The editor, compiler, linker, and debugger were all one thing. Nothing to configure. It just worked. All the time. Every time. Now I need to weigh the time I will save using a debugger vs. the time I will spend installing and configuring a debugger. These days there is almost always a faster way to get from red to green than messing with that mess.
For this kind of video, I always wonder: what kind of working environment are they in? They all seem to be talented programmers, working with talented people, who apparently write good code with good logging, so they can pinpoint the bug location quickly. Have they ever been thrown onto a legacy project which has been running for decades and nobody can tell why it works? Have they met a bug that produces wrong output from its input without throwing any exceptions, simply displaying wrong data? I'm really curious about the workflow they use to fix such bugs. Am I the only dummy that has to insert several breakpoints to find out where the calculation goes wrong?
While developing, I only run in a debugger, but I'm not necessarily always using it; it's just that if I want to put a breakpoint, I can do so right then and there instead of stopping, adding a print, and restarting. It's very convenient with nvim-dap, delve, and the launch.json config in a Go project. Run-of-the-mill web project? Yeah, I won't be using the debugger a lot.
I think that programmers overthink everything. I have a screwdriver that I maybe use once per year. That doesn't make me write a post pondering "screwdriver, yes or no". It's just a tool that is nice to have when I need it. Yes, I know that it's just entertainment, but it still bothers me a little.
Fair lol
This.
Exactly a debugger has its place as has logging
Based and debuggerpilled
welcome to the era of social media where everything a person randomly farts out needs to be a controversial hot take
John Carmack exclusively codes inside the debugger.
He said something along the lines of "I like to know what my code is doing; it's just faster than trying to guess it out."
As a game dev, debuggers are one of my main tools to pause and get a look at the state and how it transforms. Most bugs I run into don't have anything to do with a higher-level architecture issue; it's just someone on the team, including myself, forgetting some edge case that a tester keeps running into.
Debuggers are real-time exploratory tools. Loggers are record-keeping tools, painting a picture of the system as it runs. Both have always operated at different scales and both are immensely valuable.
If you've ever worked on a video game, you know that turning on real time logging is a substantial performance hit to varying degrees.
If you want highly performant systems, you don't want logging or print statements because it's a lot of unnecessary operations.
But if you want a highly scaled system, you need logging so that the totality of your system's operations can be pictured and made accessible.
To reject one or the other out of hand, or to mischaracterize one or the other, is just wrong, and dumb.
"Real time", really? Debuggers disrupt the time behaviour of your code, there is no way around it. If your code runs on a number of communicating devices simultaneously, (e.g., a few MCUs communicating over i2c or CAN), no freaking way attaching a debugger to any of them will be of any use at all.
Just learn to debug the right way, without any of those debuggers. Even if debugging involves flipping a pin and connecting an oscilloscope to it. In my well over 30-year career I never had a single use case for a debugger. Not even once.
@@vitalyl1327 It sounds like you work in a pretty interesting field where debuggers wouldn't be useful, or would be outright problematic. That's interesting. How much can you talk about what kind of work you do? You're working closer to the hardware layer, is that right?
I commonly work in domains where disruption in the time behavior of my code doesn't have negative consequences while I'm working on solutions, and the ability to break into running operations, inspect the call stack, step into function calls, and inject function calls for inspection and experiments while the main program is paused has incredible (safe) utility and benefits.
@@thebluriam These days most systems are very complex and contain multiple parts, some software, some purely hardware, and there are very few tools available for simulating such systems. Try to find a decent mixed-signal simulator that will simultaneously let you debug software running on an MCU and debug how an analog circuit will respond to this software behaviour, all in properly simulated time.
So, until we have such simulators, the only real way to debug such systems will be to run them physically, in real time, and then collect as much data as you can while they run - pass all the trace data through available pins if you have any, even blink LEDs and record slow-motion video (I did it a few times, was quite fun), use analog channels to log more data... What is not possible in such scenarios is to pause the system at any moment you like and inspect it with a debugger.
And these are systems this world runs on - dozens to hundreds of MCUs in any modern car, MCUs running a lift in your building, MCUs in medical equipment in your hospital, etc.
It means, if we want to sustain the very foundations of our civilisation, we should not train programmers who might eventually end up supporting such systems with an emphasis on interactive debugging. Much better to teach everyone debugging the hard way, and only then tell them that there's such a thing as a debugger that can be handy if your system is not time-sensitive and if all the usual debugging methods failed.
Not the other way around. So, my point is, the hard methods should always be the default, interactive debugging as only a last resort. We'll have better developers this way.
@@thebluriam He is just an old fart. You can perfectly well use a debugger if you work close to hardware and don't look at signal-level communication. Hardware protocols are synchronous, command-based pieces of code that you can step through or use breakpoints with, in case of exceptions that you want to have a real-time look at. Loggers are actually way less useful in that environment.
Debuggers only stop being as useful in distributed systems, but for everything else, they are a powerful tool. It's just that some people don't know how to use them properly.
Thankfully async logging exists
I find the logger better if you need to see the scenario that happened in a concurrent environment. The debugger is good if you're doing something that is truly single-threaded, imo. A race condition where the breakpoint causes things to synchronize correctly is the devil.
Holy shit, I think I would quit my job and give up on my career as a programmer if I had to spend a single day trying to find a bug like that.
Haha, I just left a comment saying that I sometimes use a debugger to actively make this happen as a sort of test if it is just a dumb race-condition no one has considered
Ngl I've actually used this to test time sensitive code before
@@daniilpintjuk4473 That's not always possible. Either your code is heavily concurrent, or you're wasting the hardware capacity. Having said that, yes, you must always do your best to avoid concurrency where it's not needed. E.g., a single FSM with predictable timing is better than an interrupt-driven system, as you can always reason about the worst case timing and make sure it meets the hard real-time requirements. No way to do it with concurrency present. Yet, there's a form of concurrency that's even harder to debug and that's pretty much unavoidable in any modern system - multiple communicating devices, e.g., a number of MCUs communicating, each can be perfectly single-threaded and sequential, but their interaction still only makes sense in real-time and cannot be paused.
This, right here! Log scenarios, data, and business steps taken. Debugger is for isolation and technical system analysis. I need both.
I never use general breakpoints inside processing datasets. You need to identify the problematic items and conditionally break on them, or just break outside of the dataset (sometimes just that helps). I'm both a logger and a debugger user, and I'm trying to figure out why there is this feud against debuggers. The alternatives seem to be: log everything, or log-and-try until you find it. Neither of which I'm comfortable with. The first is too costly, both in digging time and resources, and the second is majorly costly in terms of time and very, very risky, to the point where some deployment processes won't even allow it. Indeed, after exhausting the data logs and the debugging analysis, there is this ultimate step of having to admit defeat and go to the product manager and ask for a debugging-log ticket. That one needs approvals, a scheduled deployment date, and so on. It will take from a couple of hours to days just to get that one out to figure out what the hell is happening. And don't forget about cleaning up. At some point, someone had set a debugging log in a rather inoffensive place for a particular type of client that was very, very rare. A B2B contract added about 500k users of that type overnight, and in less than 2 days our logging service reached its quota and blocked all the other logs, basically creating a blank window of logs for about 2 hours.
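For what "conditionally break on them" can look like in practice, here's a generic Python sketch (the IDs and the `handle` step are made up); most IDE debuggers offer the same thing as a conditional breakpoint without touching the code at all.

```python
SUSPECT_IDS = {48213, 51007}   # hypothetical: the records the bug reports keep mentioning

def handle(record: dict) -> None:
    ...                        # hypothetical per-record processing

def process(records: list[dict]) -> None:
    for record in records:
        if record["id"] in SUSPECT_IDS:
            breakpoint()       # drop into pdb only for the problematic items, not every row
        handle(record)
```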
I mean, generally, if you start working on a new project at a new job, a debugger can really help you understand the whole flow, and it also helps new people on the project become productive quickly.
When I started self-teaching PHP (my first language) a few years ago, I was flailing in futility until I got xdebug set up. I can't fathom how anyone ever learned any coding without a solid debugger, let alone worked productively.
And now that I'm reasonably capable, it just helps me understand other people's code (or even remember my own)
This is where reverse engineering tools shine, especially when code is built with debug information. You get nice function blocks with lines showing how they call each other.
Like Ghidra or IDA.
@@nchomey well you can get the basic utility that a debugger provides with print statements. Just takes a lot more time.
Exactly this. Especially when you’re in a legacy code base or a part of the code base maintained by the CTO that force pushes to master every time and hasn’t written a unit test since college.
Just use proper static code analysis tools to map the flows. No need to watch them step by step in a debugger.
A debugger is a godsend when you are learning a new complex language like C++ or Rust. Going through the code line by line, value by value and always rethinking what the values mean, where you are at and what is actually happening right here can be such a tremendous eye-opener and learning experience!
I like to debug into failing unit tests to understand why they fail when it's not immediately obvious from reading the code 😂
Me too bro quicker than writing tests
And if you are on Linux and throw in RR to record the execution and do reverse execution at speed, suddenly you will reduce debug time by 90% - easily.
@@simonfarre4907 what is RR ?
@@captainnoyaux it's a reverse debugger, which makes it possible to "execute backwards" (it doesn't actually execute backwards, but it looks as though it does). This means that when you find an error, you can set a watchpoint on a value that changed in a way you did not expect, hit reverse-continue, and execute until you find where it got set to that unexpected value. It records the execution, and that recording can be debugged using gdb. It's a _fantastic_ tool. Just google RR debugger.
But it's not for web developers, though. It's for real developers (just kidding). Any compiled language will do. But you can't record interpreted languages, as that will record the actual VM, not the interpreted program, if you understand what I mean.
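For readers who haven't seen it, here is a minimal sketch of the kind of rr session described above, assuming rr and gdb are installed; the program, the bug, and the variable name are made up for illustration, while the commands in the comments are the standard rr/gdb ones.

```c
/* Build with debug info, record once, then replay as often as you like:
 *   cc -g -O0 bug.c -o bug
 *   rr record ./bug          # records the execution
 *   rr replay                # opens the recording in gdb
 *
 * Inside the replay session (standard gdb commands, running "backwards"):
 *   watch -l g_counter       # watchpoint on the value that surprised you
 *   reverse-continue         # runs back to the write that changed it
 */
#include <stdio.h>

int g_counter = 0;               /* the value that "changed unexpectedly" */

static void update(int i) {
    g_counter += i;
    if (i == 3) g_counter = -1;  /* the surprise write we want to locate */
}

int main(void) {
    for (int i = 0; i < 5; i++) update(i);
    printf("g_counter = %d\n", g_counter);
    return 0;
}
```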
Sometimes it’s easier to just scrap a proc and rewrite it. If I use a debugger it’s to track down the proc where the bug occurs, and if it’s simple to fix I fix it and if it takes me more than 5 minutes I usually just rewrite it to do what it’s supposed to do from the ground up instead of trying to dissect logic that isn’t obvious.
Debugging is honestly the only tool that makes sense when trying to repair a bug in game dev. Too many moving parts to keep track of otherwise
Even those you can log out. That's how I mainly debugged. But I guess that's what the author alludes to: your functions need to be well set up and tested to begin with, so you know where something goes wrong, and then a log is used to pinpoint it.
@@CallousCoder You can only log them if you know what you want to log. A debugger lets you interactively explore the state space to discover problems.
@@CallousCoder How do you log a program on a gpu?
@@temper8281 have your kernels dump into dedicated areas of some large array, of course. Then have another kernel that will aggregate and descramble the log data.
@@vitalyl1327 or use renderdoc
The problem with this is that these guys are systems-level programmers (compilers & kernels). They don't build on top of large frameworks built by others that may not behave as expected or may contain bugs. That's probably why you think that John Carmack has a different view, and rightly so. The problem with things like printf and alert is that you have to change your code (and not forget to change it back!) and you have to already know in advance what you want printed; sometimes, especially with framework objects that have so many fields, you just want to look around at a specific breakpoint (again, this is not necessarily something these guys need). I don't think people step through all their code that often; we tend to set only 1 or 2 breakpoints.
Good comment
A trick I like to do regarding those scattered prints/alerts: add an end-of-line comment like `//__:_`, then do find/replace to locate them quickly.
True. When I saw how tsoding logged his code, he always made sure to mention that reducing the redundancies is a must, or else you will update some part of your code and end up forgetting to update another.
Well, developers who build on top of vast third-party "frameworks" are just doing it wrong. In the vast majority of cases, using a third-party dependency is more of a liability than a help, and can hardly be justified. The programming culture has gone down the gutter in recent decades, with developers thinking it's acceptable to drag in dozens or even hundreds of third-party dependencies for even the tiniest bit of functionality. It's all utterly disgusting. So, yes, attitude towards debuggers is a good marker of which culture a programmer is more aligned with, and yes, I'm using this as an interview question to filter out undesirables.
@@vitalyl1327 vast frameworks and dozens of third-party dependencies are clearly not the same thing. I agree with the liability that comes with each third-party dependency (JavaScript is a good example), but I'm missing all nuance in your reply, such as reinventing the wheel, or code from other employees being similar to third-party code. Limiting the use of third-party libraries seems more suitable as a company policy than as interview criteria. There is some irony in all this, given how these systems-level programmers expect us to use their stuff.
how do you feel about candidates that use things like printf and don't know how to use a debugger?
I think it's important to use the right tool for the job. If you know more tools, then it's better. Sometimes it's printf and sometimes the right tool happens to be the debugger.
I would literally have fired an employee for this take it is so bad.
Agree, I can't imagine anyone who would risk hiring a developer who doesn't use a debugger. He would be a cause of an enormous amount of bugs (a liability), which will definitely lead to bad consequences for the team and the company.
There's one niche of software development which really *needs* debuggers, and that's embedded.
The whole point of a lot of embedded systems is to interact with the real world and external peripherals, so being able to set a breakpoint and see if the serial transmission on a board you just designed works or not is invaluable.
Oftentimes, the debugging isn't even to verify your software, but just to see if the software plays along with the hardware.
Combine this with the fact that you might not have a printf implemented, you might not have enough memory for printf, or printf would severely impact your program execution (say, if you use semihosting) makes debuggers way more attractive.
There are solutions like SEGGER RTT out there that work very well for giving you tools you're just 'used to having' in general-purpose computer debugging - and it achieves this over a debug link. Saving logs directly to the memory of your chip, then reading that memory through the debug interface while the processor does other shit.
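A hedged sketch of what RTT-style logging looks like in firmware code, assuming the standard SEGGER RTT source files and API names (SEGGER_RTT_Init, SEGGER_RTT_WriteString, SEGGER_RTT_printf) are part of the build; the task and the messages are hypothetical, and the details vary per chip and toolchain.

```c
/* Sketch only: assumes the SEGGER RTT sources (SEGGER_RTT.c,
 * SEGGER_RTT_printf.c) are compiled into the firmware. Messages land in a
 * RAM ring buffer and are pulled out over the debug probe, so there is no
 * UART and far less timing disturbance than a printf over serial. */
#include "SEGGER_RTT.h"

void app_init(void)
{
    SEGGER_RTT_Init();
    SEGGER_RTT_WriteString(0, "boot: RTT logging up\r\n");
}

void sensor_task(int raw_value)   /* hypothetical task */
{
    if (raw_value < 0) {
        SEGGER_RTT_printf(0, "sensor: bad raw value %d\r\n", raw_value);
        return;
    }
    /* ... normal processing ... */
}
```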
I would generally disagree with this. I have just started a new job working with a transportation management system which monitors and controls the highways in massive states like Texas and Florida. The project has been active for almost 25 years. I don’t even know what I would do trying to keep track of what is going on without using a debugger with breakpoints and being able to see the callstack or see how certain methods change values. Trying to just put a bunch of print statements would be a nightmare.
I could be misinterpreting what the author is saying, but I also think “don’t debug, just write better code” is not a good point? Like yeah, no shit, I would want to write better code. Too bad that the job I arrived at already had 3 million lines of code written.
Don't put a bunch of print statements. Instrument your code automatically (trivial to do with LLVM), use static analysis to extract call graphs, use profilers to see where most of the time is spent, etc. Interactive debugging is such a barbaric and unjustified approach when there's a lot of fine tools available.
@@vitalyl1327 those tools don't exist in all environments
@@xybersurfer where they don't exist, they can be created. I wrote a Fortran-77 static code analysis tool once, spent like 4-6 hours on it, and it was enough to build a call graph of a huge code base, and also to extract BNF syntax from a hand-written parser in this Fortran code. I would have spent months stepping through all of these in a debugger.
@@vitalyl1327 i don't think you can expect most programmers to also be compiler writers. but i like your attitude and i would definitely be down for that personally, as i do have experience with BNF and writing parsers. but it would actually have to pay off. i could see how some very specific bugs can be found using a profiler, but i have doubts about finding bugs using static analysis. how would that be different from theorem proving, which is extremely time consuming?
@@xybersurfer if your language is very limited (say, MISRA-C level of limited), proving theorems is way easier than for something too generic.
Graphics debuggers (renderdoc/nvidia nsight) are incredibly complex and take about as much time to learn as to learn rust. However fixing issues with 0 error codes, 0 logging possibilities, and thousands of threads with only color values to debug is just plain impossible.
The debuggers are this complex because of how complex GPUs are. There's no way that almost any modern video game/ game engines would be made without using these.
I never had any good experience with NSight. And it's not really a debugger, it's just a way to access performance counters, something akin to gperf for CPUs.
My most fulfilling experience with debugging a GPU code was when I had a complete RTL model of the said GPU running in Verilator, and I could instrument the hardware as much as I like to monitor code performance for my specific use cases. Obviously, if you're not the GPU vendor, this option is not for you...
I'm not a "always run your code in a debugger" person like Carmack, but debuggers are so important. If you're working in a lower level language and running in a debugger, you are going to catch your mistakes faster. In this way, you won't need to do in depth debugging later. Most of the time a debugger is gonna catch a silly mistake that saves you hours later.
Sometimes stepping through line by line is necessary. I have had to do some deep debugging before and had to stare at disassembly and reference manuals about the instructions and registers trying to nail down a bug that was in the end caused by a compiler optimization. These tools are indispensable.
If you're working with a low-level language and rely on debugger to catch your mistakes, I don't want you anywhere near any mission-critical code. Not in automotive, not in medical robotics, not in aerospace, not in industrial automation. Your way is guaranteed to introduce a lot more bugs than it's possible to tolerate.
Firstly, debuggers do not show you where the problem is. They let you see where the problem manifested. Especially if it's a memory-related problem, you can see its effects far away from the real cause. Then you'll find some hackish workaround and trot away happily, thinking you fixed the bug.
The real developers never rely on debuggers. We'd rather stick to MISRA-C harsh rules, use all possible static analysis tools we can find, build extensive testing infrastructure, build zero-overhead logging wherever it's possible. Debuggers will never replace any of this, and will never be of any real added value when you should do all the above anyway.
@@vitalyl1327 It sounds like you're making the same assumption as this article that you can't both use a debugger AND follow best practices. Static analysis doesn't catch everything and can often yield false positives. The very example I gave was not a memory bug. It was a bug caused by compiler optimization as GCC can optimize out NULL checks if it thinks that it's safe to do so. I've come across other compiler bugs before too.
Apart from that, the bottom line is that you're going to make mistakes. Your testing and static analysis won't always catch them. Test cases are often the result of previous failures. Knowing how to use a debugger is really important.
The reality is that most people are not working on the type of mission critical code that you have outlined. When I was first learning to code, I figured things out with print statements and had no idea that I could set breakpoints and step through with a debugger. A crash was just a crash and I needed to figure out what I did wrong. A lot of new programmers go through that. Knowing how to use a debugger then would've saved me so much time.
A lot of the best practices and static analysis tools that exist today, simply did not exist then either. When you're working on old codebases with millions of lines of existing code that you can't just throw away, knowing how to use a debugger is even more important.
@@Spirrwell use a certified compiler (like CompCert) to make sure that it does what it says on the tin.
And, yes, I routinely find bugs in compilers. Not once did I have to use a debugger for this; it would not have been useful at all. And often you cannot even debug the compilers, e.g., I found a couple of bugs in CUDA - had to treat it as a black box and did not even attempt disassembling the compiler itself. Logging, instrumentation and fuzzing help.
Learning to jump on a debugger as a first knee-jerk reaction to any bug is really a bad thing. I wish most programmers never knew debuggers exist. Learning the hard way is better. You did not waste time, you gained experience that early debugger users would have never had a chance to earn.
As for the older code - I did not have to use debuggers when working on ancient (even back then already ancient) Fortran 77 code base, with barely any static analysis and instrumentation tools available, so I'm not sure why would I need a debugger for a modern MISRA-C loaded with tons of very powerful tooling. Better discipline trumps debuggers massively.
@@vitalyl1327 If you don't use debuggers you live in a fantastic unicorn land. As if static analysis could possibly catch all real-life run-time permutations. People like you mistake code readability and aesthetics for actual running software that actually works. The fact that you correlate the usage of a debugger with finding solutions that only fix the manifestation is proof of how dumb your argument is.
In my work we develop large monolithic applications. Debuggers are an invaluable tool, they are effectively a print statement of the entire state of the program at the execution of a certain line. I rarely step through programs but I regularly use breakpoints to understand what's really wrong with the current state.
I think anyone who has worked across multiple paradigms knows the reality: the utility of a debugger highly depends on the language and environment you’re writing code for. If you’re writing a self contained C++ executable, debuggers are incredibly useful and much faster than using printf. If you’re writing webapps in JS, then you’re probably better off with logging (since flying by the logs and metrics should be your bread and butter as a web dev regardless).
Basically if there are multiple machines involved in code execution (Client-server systems, distributed systems, etc) your only practical option is to lean on logging and metrics, due to the constraints of the environment. Otherwise, learning how to use a debugger will pay dividends.
Invariably = without variation, consistently. Inevitably = in such a way that cannot be prevented.
So they're not the same word :D
In the web dev world I've found that the easiest way to not have to resort to the debugger is to write your code in a more functional style where you don't bury side effects like database calls, rest calls or file readers deep down the stack in the context you're working in. If you're doing a rest service for example, don't bury a database write call 3 methods deep behind a conditional statement in a service called by another service. Keep all adapters as high up in the stack as possible and bury the functional code as deep as you can where it can be unit tested and ignored and keep the side effects up high where they can be easily observed and logged. It should never be difficult to detect when an external service or configuration item is not set or performing as expected.
excellent, exactly my approach
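A minimal C-flavoured sketch of the pattern described two comments up (all names are hypothetical): keep the pure logic below, where it can be unit-tested, and keep the side effects at the top, where they can be observed and logged.

```c
#include <stdio.h>

/* Pure "business" function: unit-testable, no hidden side effects. */
static double apply_discount(double price, int loyalty_years)
{
    return (loyalty_years >= 5) ? price * 0.9 : price;
}

/* Thin top-level handler: the only place that touches the outside world,
 * so it is also the natural place to log and observe failures. */
static int handle_request(FILE *store, double price, int loyalty_years)
{
    double final_price = apply_discount(price, loyalty_years);
    if (fprintf(store, "%.2f\n", final_price) < 0) {
        fprintf(stderr, "handle_request(): write to store failed\n");
        return -1;
    }
    return 0;
}

int main(void)
{
    FILE *store = fopen("orders.txt", "a");   /* stand-in for the database call */
    if (!store) { perror("fopen"); return 1; }
    int rc = handle_request(store, 100.0, 6);
    fclose(store);
    return rc == 0 ? 0 : 1;
}
```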
I think people who are against debuggers never really understood the point of debuggers. And I don't blame them, the name is stupid. It doesn't debug your code, it just slows it down. Having robust debug level logging will likely remove the need for a lot of "debugger" work, almost all. However there are times you need to get into the code line by line to see the unexpected logic.
But if you are writing multiple console.log or print statements each time you encounter a bug, setting up a debugger will be better in the long term. (You should have good debug logging too ofc)
I find debuggers great for memory issues: no need for breakpoints, just wait for the segfault and walk the stack upwards. But generally I try everything else first, which means I have to keep gdb's documentation handy, because I never learn it by heart.
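A minimal sketch of that "wait for the segfault and walk the stack" workflow, using standard gdb commands; the program and its bug are made up for illustration.

```c
/* Run the program under gdb and let it fault; no breakpoints needed:
 *   cc -g -O0 crash.c -o crash
 *   gdb ./crash
 *   (gdb) run
 *   ... Program received signal SIGSEGV ...
 *   (gdb) bt            # backtrace of the faulting stack
 *   (gdb) print n       # the NULL pointer inside last_value()
 *   (gdb) frame 1       # walk up to the caller
 *   (gdb) print head    # and see where it came from
 */
#include <stdio.h>

struct node { int value; struct node *next; };

static int last_value(const struct node *n) {
    while (n->next != NULL)          /* faults when n itself is NULL */
        n = n->next;
    return n->value;
}

int main(void) {
    struct node *head = NULL;        /* oops: empty list */
    printf("%d\n", last_value(head));
    return 0;
}
```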
04:56 completely agree. I work in embedded software development, and you do sometimes need to look at what the state of the peripheral registers is at a certain point of the program execution, and that just isn't good enough with printf debugging, because printfs _will_ affect your program's execution flow, especially when working with a project that uses an operating system.
For me, a debugger is great when you’re in a part of a code base that is legacy or maintained exclusively by the CTO who only force pushes to master and hasn’t written a test since starting his own company.
Ain’t nobody got time to write all of the print statements necessary to understand whatever the CTO writes.
I have tried to use printf debugging/logging on embedded systems. It worked about as well as you'd expect: when a segfault happens, the UART peripheral clock stops, so the print statements would give misleading information about the control flow. I literally could not live without a debugger these days.
I don't find myself needing to use the stepping or breakpoint function of the debugger very often, but whenever I do end up reaching for it I always curse myself for spending the last 15-30 minutes using something else lol. The ability to inspect local state when the debugger stops on an uncaught error is super nice whenever I get one of those, I'm always appreciative of that.
I research malware classification. Everything I write is in Python, mostly prepping data and training DNNs. My general approach is #1) writing simple, reliable, and modular code, #2) pylint to catch type errors and other dumb shit, #3) logging, #4) debugger. 95% of my bugs can be caught/solved with the first three techniques, but every once in a while, something happens that I SWEAR I would never be able to solve without a debugger.
First thing to note is that the author seems to only ever debug their own code.
The value of debuggers highly depends on the types of projects you're working on, how experienced you are with the code base, and how much testing is in place. If you're working on a web API, you have tracing everywhere because you're logging basically everything for security, auditing, and metrics. If you've spent the majority of your career working on one product's code base, you can often intuit the bug source and systemic design flaws from the ticket alone. If your app is microservice-based or otherwise highly decentralized, there's no debugger that is going to support all of your programming environments as well as the intercommunication protocols.
Debuggers are also helpful when you are fixing bugs in a new/unfamiliar codebase. You identify the problem by checking the last log before things broke. Look for that log in the codebase. Start a debugger and run it locally, check the values and fix.
Doesn't fix concurrency issues or deployment config issues. Logs work best there.
Learning gdb scripting to iterate over a linked list in C was magical. At first I saw debuggers as a simple tool to watch programs execute step by step, but with simple scripting you can learn much more about the state of your program at any moment.
Debuggers are good for getting to know a Codebase
A good debugger is a complex tool. And you do have to invest time in getting to really know how to use one. Doubly so for your more complex concurrent/multi-threaded applications. How to get a breakpoint to only trigger on the thread you want, etc. And just like printfing, you can easily wind up needing to write code to control the debugger, just like you write code to parse the printf statements. I'm not saying one is better than the other, I use both depending on uh. err. DX. With that being said, a simple visual debugger is still a good tool for people learning to code. Are there any that really show people how C code* executes, builds the stack up, and uses the heap?
*Doesn't have to be C.
Yeah, it's a skill like anything else. I've spent much time teaching people how to use a debugger to help track down things, people just don't know how to use it, even if they programmed a lot. Of course you could do without, but sometimes it can help a lot.
Maybe DDD ? Don't know enough about it to be sure but maybe it has some of the things you mentioned.
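Picking up the thread-specific breakpoint question a few comments up: a small sketch with standard gdb syntax shown in the comments; the pthread program itself is a made-up example.

```c
/* Build: cc -g -O0 -pthread workers.c -o workers
 * In gdb, breakpoints can be scoped to a thread or given a condition:
 *   (gdb) info threads                       # list thread numbers
 *   (gdb) break worker thread 3              # only stop when thread 3 reaches worker()
 *   (gdb) break worker if *(int *)arg == 2   # or only for the worker you care about
 */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[3];
    int ids[3] = {1, 2, 3};
    for (int i = 0; i < 3; i++)
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```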
I love to use debuggers when encountering errors, especially when it comes to null pointer exceptions. When a function takes multiple arguments it’s nice to immediately see which variable was null and then find quickly where the “nullification” happens.
For frontend development, I absolutely use the JavaScript debugger, especially when cleaning up spaghetti.
Writing stuff from scratch, or places where I reasonably know the codebase, a sort of pseudo TDD, where you write unit tests till they fail works pretty well for me, plus more tests are always good, if writing them doesn't take a lot of time from you.
Whenever there are 3 tools in a toolbox for a purpose, tool #1 will not be the most used for someones out there and someone of those someones will write a blogpost stating that they NEVER use tool #1 including a segment about how they only /rarely/ use tool #1. Then someone else who also works in a place where tool #1 is not an absolute requirement ( ...or never spent the time learning to use tool #1 ) agrees and spreads that you should NEVER use tool #1 and then we got a new entry of common knowledge in the "real programmers do/don't do X" -list on the internet.
To me a debugger is just a fancy printf.
Another thing I like to use the debugger for in the not-so-high-performance-space is when I suspect unwanted race-conditions or on async stuff with unexpected side-effects. Its kinda fun to set breakpoints in a way that artificially slows down execution at specific moments and see how that changes behavior.
I rarely start using a debugger, its more for experimenting, when I'm not sure what I'm dealing with. Logging is much faster and more reliable when I already have a good idea where the problem is.
working with microservices means having good logging is usually the only way to actually find and fix bugs
I don't think "never use a debugger" is a very defensible position across all domains.
Obviously there are problem spaces where the debuggers won't work.. but blanket forsaking a tool... Especially one as powerful as a modern debugger is a great way to waste hours of time.
It's really hard to beat the ability to jump into the process at a point in time and examine the state of literally everything.
Learn your tools kids.. even if you don't use them often you'll want to know them
Yeh at that point it’s an objectively shit take. Debugger is sometimes the perfect tool. Sometimes not.
I mostly use a debugger to get familiar with legacy code that lacks documentation 😢
Debugger is a specific tool.
You don't need a debugger when you do simple code or integrations
But you need one when you want to see typing system stuff or reverse engineering stuff
I am Team Carmack; Debuggers all the way. Kernighan's take is the hottest: "the most effective debugging tool is still careful thought,..."
I don't care how careful your thought is. The debugger will tell you how it is, not how you assume it is.
And maybe the above is my juniority speaking, but only time will tell. At least I'm not falling for any "oh, but these well known people don't use it, therefore it's bad to use one" nonsense.
If you want to use one, do so. If you don't, you've tried it, and found it to not work for you, that's completely fine by me.
Also, I typically enter an application via a unit test or other. I typically don't run it completely from start to finish (though sometimes I do)
The “slo-mo” quote is great, because sometimes you need to watch the magic trick in slo-mo to understand the sleight of hand.
Someone once said
"You don't need a debugger, until someone asks you to debug their thousands of lines of code"
the thing I use the debugger the most for is because by hitting breakpoints, I can navigate the call stack very fast.
it can help to quickly understand relationships & concepts in complex projects.
Debugger, tests, strong typing, logging etc. are all different tools for different purposes for different situation.
Comparing them to ask "Which is better?" is like comparing a race car and a tractor and ask "Which is better?".
I don't believe this article. When you find a bug, the very first goal should be to fix the bug, not to imagine how the program can be better with some form of refactoring (which would take a ton of man-hours because the program isn't owned by just an individual, but a team.). Also, identifying the cause of the bug should give much more insight on how the program can be refactored.
Regarding how a debugger doesn't scale, I'm not sure if I'm following. When a bug occurs in a production software and you can't immediately tell why, I think the very first step is to reproduce the bug locally. If it's web, spin up a local server, etc. etc.. If you are able to reproduce the bug locally, that means there is some process with the bug in your local machine. You can start setting breakpoints with the debugger.
Lol. Good luck "setting breakpoints" if a few milliseconds disruption breaks the logic of the code completely. Guess debuggers are only useful for toy code like web and such...
@@vitalyl1327 you have no idea wtf you're talking about lol
@@metaltyphoon more like you don't have any idea. Let me guess, you're some kind of a web code monkey?
Think of a case where your code runs on multiple MCUs, all communicating via various media such as RS485, I2C, CAN, etc. The protocols are time-sensitive. Every MCU's workload is hard real-time. All the logic depends on properly processing data external to your system that you have no control over.
Now, good luck debugging any of this with your puny toy approaches, like stop and step through.
I use a debugger probably twice per week, for embedded development. Printing is out of the question for anything at all frequent, because the UART serial console is slow. We do logging at every function entry and exit, which is very helpful, but limited. I had to debug a bug in the driver we got from the manufacturer and that would’ve been brutal without a proper debugger.
Debugging is such a bad name for what debugging actually is. It just lets you see your bugs happen in slo-mo; it doesn't actually remove them. And sometimes bugs should just stay there to be admired, you know? Software is an art.
I would cry if i didn't have a debugger, for the banking apps i work on. Especially on those old apps, where some files are over 8000 lines long
Just recently, I needed a debugger for some Java code because I was helping someone who was getting a null exception. The messages were unclear about what was actually null and made no sense until I had stepped through the code and, in addition, ran evaluations. Another instance where I needed it was when I was doing upgrades and libraries died in strange ways. I can't print inside libraries.... Like... I just think the writer writes code that doesn't necessarily need the debugger.
Edit: Modern debuggers do handle multiple threads by the way....
There are definitely scenarios when working with extremely large datasets that are being passed through extremely large messy code bases, where you aren't going to be able to rely on a debugger to find a problem that is occuring. A debugger definitely has its usefulness and its place though.
I've been using debuggers more and more lately... from fixing data race conditions in python, to porting complex intertwined grpc projects. It's definitely an irreplaceable tool in some scenarios.
I use debuggers in a way that scales: Every component is small, and has well-defined input and output, and every component knows how to test itself. This is a situation where all the great minds mentioned agree that debuggers can work.
I use debugger all the time to do reverse engineering. It becomes wholesome when I am reversing some malware that was written in C++ or some other OOP language and I am reading Assembly for hours and hours. I feel like looking for a shortcut between the roof and the ground without taking the stairs. Debugger is useful, for malware.
If it took them a decade to learn how to produce reliable software and they don't use a debugger ... frankly, 47 seconds in and I know I have no interest in what this person has to say.
Should you use a debugger or should you write tests or should you add logging?
Yes. You should use all of these.
Debuggers, used with breakpoints, are great in web dev for inspecting an API or an unknown data source to see what the internal representation is. It can save hours or even days trying to figure out someone else's output which is an input to your work
I can't imagine development without debuggers, you would have to keep to track with so many things both inside a program flow and the program logic in general by yourself, it is like trying to write complex software using the asm...
With a debugger I like to view the values of several stack frames, rather than just for the method I'm in. Granted, you could just add more logging, but sometimes it goes into third party libraries/frameworks.
to me, what is the most amazing thing about this article is how little people know about "modern IDEs" and their origin, ALL of them have their origin in Smalltalk80 and the Lisp Machines of the 1980s. Then everyone else copied those environments and called them IDEs and they haven't changed a bit since the 1980s, so there is really no distinction between old and modern IDEs
I hardly ever use a debugger; I mainly print. But there are cases, especially in assembly, where I use one more frequently, because it's not trivial to print something there, and often a little mistake is harder to trace because you forgot to push a register onto the stack and it gets clobbered. Or an off-by-one in a loop.
But also in reverse engineering (cracking) a debugger is invaluable to set a trap on an address to see when something is called.
Debuggers are meant to be used in two ways:
1. online => by attaching to the PID of a process and, when the program crashes, inspecting the frames/stack.
2. offline => by core dumping memory and process state on a coredump file.
The OBVIOUS use case is not that useful in enterprise / large-scale / multi-threaded / video game applications.
I don't really understand what their argument against it is, but my projects are usually small to medium C programs (15 files max, maybe a few thousand lines). Well-written print statements do unironically work, but I like the ability to change values and peek at memory that a good debugger (like CLion's) provides. For example, it makes it easy to check if a pointer got filled with valid data.
I use an awesome logging macro that auto-fills the function name and color codes it before printing the message. It makes errors (red) and warnings (yellow) stand out in the output. Since my function names include the name of the source file, it's always easy to track down where the error happened.
For a simple example, these might be printed with the function names in red:
texture_load(): Image width is an invalid value 0
texture_load(): Failed to parse DDS header for file "door.dds"
entity_create(): Failed to load textures for the new entity. Freeing the other assets from the entity pool.
I know the error happened in "texture.c" in the texture_load() function and I can simply search for these messages. Then maybe texture_load() could add an extra line in blue that displays the values of the rest of the DDS header struct.
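A hedged sketch of that kind of macro in C, assuming an ANSI-capable terminal and GNU-style variadic macros; the exact macro names and format are illustrative, not the commenter's.

```c
#include <stdio.h>

/* ANSI colour escapes; any terminal that honours them will do. */
#define CLR_RED    "\x1b[31m"
#define CLR_YELLOW "\x1b[33m"
#define CLR_RESET  "\x1b[0m"

/* __func__ fills in the enclosing function name automatically.
 * ##__VA_ARGS__ is the GNU-style extension (gcc/clang). */
#define LOG_ERROR(fmt, ...) \
    fprintf(stderr, CLR_RED "%s(): " fmt CLR_RESET "\n", __func__, ##__VA_ARGS__)
#define LOG_WARN(fmt, ...) \
    fprintf(stderr, CLR_YELLOW "%s(): " fmt CLR_RESET "\n", __func__, ##__VA_ARGS__)

/* Hypothetical usage, mirroring the texture example above. */
static int texture_load(const char *path, int width)
{
    if (width == 0) {
        LOG_ERROR("Image width is an invalid value %d", width);
        LOG_ERROR("Failed to parse DDS header for file \"%s\"", path);
        return -1;
    }
    return 0;
}

int main(void)
{
    texture_load("door.dds", 0);
    return 0;
}
```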
I'm a fullstack web developer and a debugger is one of my favorite tools I know. I'd say I use it 2-3 times a week. I'll use the tool that is fastest, sometimes a print statement is fastest, sometimes the debugger is fastest. Sometimes I don't know going in. I'll throw in a print statement and I go "huh, how on Earth did we possibly get here" then I'll switch gears to a debugger.
I totally agree with debuggers being a bit flaky and not working sometimes. The most annoying problems have usually been that I left optimizations on. The best debugging experience I have had is in assembly; that just works.
6:20 Currently, I work at a company where the two projects that we have rely on databases with some tables exceeding billions or even tens of billions of records. The trillions, though, sound very hard to reach, and can only exist on very specific platforms that have gained enough traction, like Discord, for example, which has well surpassed trillions of sent messages.
I use logs to home in on where the bug is, then write a test that replicates the bug. Then I debug the test to see what happens, and that generally solves things for me really quickly.
8:35 : Absolutely! It feels like a Hottake. Guys like this are the first to cry "We must rewrite everything"
I hate that the Author mixes up using a "Debugger" with "Debugging" code. The Debugger is not that thing that fixes the code or forces you to solve an issue locally. It's just a tool, that helps you to understand what is happening at runtime.
Does he expect his electrician to not use a multimeter? Are Car workshops not allowed to use analyzers, to find what is wrong with the board computer of his car? Maybe he prefers to pay for the extra hours, that Service workers would need, without these tools.
Or he gets paid by the hour himself. Therefore writing all those print statements and cleaning them up afterward may provide more value to him.
10:45 : JavaScript with all its APIs and all the asynchronous services is not a good fit for a debugger, but even there, it has its uses.
When I write C++ I use debugger so that when the program crashes I can see what the wrong value was that caused the crash. I can't know in advance that the program would crash there so logging only helps once I know roughly what the problem is. Otherwise I would have to log literally everything just in case I need the info. But I rarely use breakpoints and never step line by line (I don't think anyone does that). And I've also worked on large projects that had networking and lots of timing calculations that can't account for the program freezing. In which case the debugger is useless.
@@Женя-и1л3е in your case I would rather check my math separately from the code. So then if there is a bug I would know the problem is that I didn't make my code do the same as my math. Which is a much easier thing to debug.
Fun fact: you can have a debugger run until it hits breakpoint(s), or a variable changes; you don't need to "go line by line checking every line". Better yet, if you are already logging, you might have an idea about where the code is "misbehaving", and can jump to that part as mentioned in the last sentence (or by some other means), and get the best of both worlds. Another fun fact: if you think you know where the problem is, and how to fix it, you can do that without debugging; nobody is sitting behind you forcing you to use a debugger. Sidenote, but hackers use crash dumps and logs to assess how programs function, and determine if they can gain access to your device/system/service. If you really log *everything* it just makes things easier for them.
Staunchly refusing to use a potentially useful tool, because "in your opinion" it makes worse coders, is simply being elitist for the sake of feeling superior to others.
Using a debugger is usually faster than manual logging for me.
Debugging is great for new joiners to the project, and if you're switching between programming languages often
I guess most of the time when I am debugging, I am speculating on a few things that could have gone wrong and going through and trying to confirm one. This involves "testing" for various conditions, so it might make sense to kill two birds with one stone and just write these as tests that you keep around. That said, tests just catch something going wrong, though not necessarily why it went wrong? But if you're debugging the sort of issue that caused a crash, the crash already told you something went wrong, so I'm not sure what you gain. I don't know, it's probably worth keeping in mind: some ad hoc test you end up coming up with may generalise well and may be worth immortalising, and if you already wrote it while debugging, what's the harm in keeping it...
Sometimes you just have no idea what could have gone wrong, and I guess this is when you inspect state with a debugger, but that probably indicates a lack of familiarity with what is going on, though maybe that can't be helped when you're working with other people's code and there's bad readability/documentation.
11:05 Primeagen being an awk guy instead of sed+scripting-language makes so much sense to me.
Not everyone is Rob Pike to just think about the bug and come with a solution. Mere mortals sometimes need to actually see what's going on.
It really depends on project you are working on, and how familiar you're with it
If I gave these "geniuses" the projects I am working on, I would like to see what they could do without a debugger
Here's my thing with debuggers. I normally don't use them in my own code. However, when I'm at work, I'm (VERY) often working in areas of code that I have no clue how they work. I typically never step through line-by-line or anything though.
I'll find the file that seems to be the victim, set a breakpoint where I think the bug is occurring, run the code, and I can immediately see every value in the current context, calling contexts, etc. I can not just see "oh, the value is null so it's going to take the following paths", but I can work my way up the stack trace and see WHY the value is null. And if it's on the frontend, I don't even need to wait to build my code before doing this, I can just do it in the browser.
I definitely prefer a debugger. I did logging for the first 5 years of my career, then learned to use a debugger. When I'm going through unfamiliar code, I really like stepping through and seeing where things are called and what values they hold. I've found I understand the code better and I'm able to find solutions faster.
Daily dose of "reject tool X", gracefully neglecting its valid use case
I'm pretty sure Carmack has talked somewhere about the value of "stepping through a frame" of a game, but I do think it was a while ago, maybe doom 3 days, so it would be interesting to see if the increase in complexity and in the use of multi-threads would have changed his views on that
The way he works: after writing new code he debug steps through it to ensure that the state and logic are correct. It's pretty much what I do too.
Debuggers are useful to me when the bug seems like it might be state-related and I'm having a hard time tracking down where the bug actually starts and/or where the bad state is introduced.
So he does use a debugger
Earlier in my career I used to write bad code and hence I had to debug it a lot. It was necessary but very time consuming. If you write good code, you end up spending less time debugging it.
When someone says they don't like debuggers, it's one of the two:
1. Their environment doesn't have a good debugger.
2. They don't know how to use the debugger.
"In Smalltalk, everything happens somewhere else." -- Adele Goldberg.
The same holds for all object-oriented or asynchronous or distributed programs, so practically every modern program. That's why debugging line-by-line is hopeless.
The creator of Python: "I don't debug, I just use print statements."
It's all making sense now.
He's expanding his tools beyond a simple debugger.
I'm a slow thinker and can't hold too many things in my head at once. Debugging is incredibly useful for me for both initial development or bug squashing
Kernighan with the print statements comments had me rolling. It really do be like that
This is really where you break the problem down into smaller steps. Yeah it can create a performance hit, but it makes understanding and verifying the code so much easier regardless of debugging method. Single lining everything makes code difficult to debug.
Reverse engineering and exploit development would be so much harder without debuggers. Or even developing in an unfamiliar environment that has no docs like when writing custom Slither detectors just to see the complex object structures at runtime is immensely useful. Debuggers are awesome.
Yeah for reverse engineering I rely on a debugger for sure. During development I hardly do a print will do fine in most cases - except in assembly there I tend to use a debugger because it’s not trivial in my assembly projects to log to the screen.
I can't imagine going through even the 100k line code I'm working on at work right now without stepping through some code. Even if it's just to understand relationships between things.
I've also had some crazy problems with unit tests failing because the freaking database randomly starts a new test class with different data.
I have noticed that in the Rust backend I currently work on, I usually don't need a debugger. I have eprintln on almost all error cases, so it's going to print out something if it fails. And leaving unwraps in is pretty close to littering breakpoints 😂
Debuggers are great when combined with unit tests. But I agree that you usually do not want to debug a million lines long program with a traditional debugger.
Back in the day... The editor, compiler, linker, and debugger were all one thing. Nothing to configure. It just worked. All the time. Every time. Now I need to weigh the time I will save using a debugger vs. the time I will spend installing and configuring a debugger. These days there is almost always a faster way to get from red to green than messing with that mess.
6:18 "No one gets into the trillions."
mhhhm, NSA? They gotta store all our emails and calls somewhere, right?
For this kind of video, I always wonder: what kind of working environment are they in? They all seem to be talented programmers, working with talented people, who apparently write good code with good logging, so they can pinpoint the bug location quickly. Have they ever been thrown into a legacy project that has been running for decades, where people can't tell why it works? Have they met a bug that produces wrong output from valid input without throwing any exceptions, just silently displaying wrong data? I'm really curious about the workflow they use to fix such bugs. Am I the only dummy that has to insert several breakpoints to find out where the calculation goes wrong?
While developping, I only run in a debugger, but not necessarily always using it, just that, if I want to put a breakpoint, I can do so right then and there instead of stopping, adding a print and restarting. It's very convenient with nvim-dap, delve and the launch.json config in a Go project. Run-of-the-mill web project? Yeah, I won't be using the debugger a lot.
It really depends on the version of Pascal he had. I started with TP 3 and it was just a blank screen where you can write code in
Saying debuggers are unproductive is like trying to hammer a screw and saying hammers are unproductive 😆