I'm loving this video series. I was taught OOP in college, but I quickly grew resentful of things like inheritance and over-complicated encapsulation. But I've had a hard time defining and naming which kind of programming style I actually do advocate. This series has helped me a lot with that.
It's really refreshing to see someone explain why OOP is bad and how OOP could be good (in a more recent video than this one), because I'd been saying for at least half a decade that OOP is not a good programming paradigm in the way it is taught, used, and advertised, but I couldn't quite articulate its exact problems. I had a hard time properly understanding what "state" and "logic" really meant, in essence, until he called them "data" and "actions", which is when it clicked for me. OOP had made me conflate the two, because an object, by design, has both data and actions, and the data and actions are tightly coupled.
"Speculative generalization" is a good phrase. I was looking for a phrase analogous to "premature optimization" for the premature generalizations I've seen many, many programmers do (especially the ones who think of themselves as very "smart"). It's just as prevalent in the programming world as premature optimization, and your description of "speculative generalization" is exactly what I was thinking of. It should be met with the same skepticism, and perhaps derision, as premature optimization whenever one sees it, since speculative/premature generalization makes the code less maintainable and (somewhat ironically, but also predictably) makes the code much harder to generalize down the road according to the ACTUAL requirements. It should be taught to everybody as one of the "anti-patterns".
I took "premature optimization" to mean "misguided high-level design" for a long time (because of a quote I heard in a Henley talk about how you "do everything top-down except the first time"), but yeah, that works too.
Donald Knuth's admonition against "premature optimization" is widely misquoted and misunderstood. It's better to give people the tools to understand something than to give them general rules to follow.
The most extreme case of speculative generalisation would be to replace the entire product with an interpreter and let the end user write the actual code. :D
6:34 Yes, for example in Minecraft. Armour stands were at some point made to inherit from the "mob" class (which normally deals with sheep, zombies, dragons, etc., not items, XP orbs, arrows, etc.) because they share a bunch of properties (gravity, being flushed by water, getting destroyed when attacked enough, being able to "wear" armour, …). So pretty much every time something new gets added that does something to all mobs, it almost always has a few bugs with armour stands in the first snapshot, which then need to be special-cased out in the armour stand class later. For example, when a new enemy gets added that attacks everything, it also attacks armour stands.
That's just a wrong use of inheritance, i.e. bad design. You don't use inheritance because types of objects have similar data or functions and you want to share code, but when you want to apply the Liskov substitution principle. Obviously an armour stand is something quite different from a mob and should never be used as a substitute for one, so it shouldn't inherit from that class either.
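A minimal Python sketch of the is-a vs has-a distinction being argued here. The names and behavior are hypothetical illustrations (not actual Minecraft code): inheriting for code reuse silently opts the subclass into every "all mobs" behavior, while composition shares the data and opts in explicitly.

```python
# Inheriting only to reuse code: ArmorStand is now "a mob" everywhere,
# including places nobody intended (e.g. enemy target selection).
class Mob:
    def __init__(self):
        self.health = 20

    def is_valid_attack_target(self):
        return True  # true for sheep, zombies... and, by accident, armor stands


class ArmorStand(Mob):  # is-a Mob, inherited just to share health/gravity code
    pass


# Composition instead: share the data via a component, opt in to behaviors.
class Destructible:
    def __init__(self, health):
        self.health = health


class ArmorStand2:
    def __init__(self):
        self.durability = Destructible(20)  # has-a, not is-a

    def is_valid_attack_target(self):
        return False  # it's decoration, not a mob


print(ArmorStand().is_valid_attack_target())   # True  -- the "snapshot bug"
print(ArmorStand2().is_valid_attack_target())  # False
```

With inheritance, fixing the bug means special-casing the subclass inside code written for the base class; with composition, the unwanted behavior never existed.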
@@IkeFoxbrush Isn't a "bad use of inheritance" exactly what leads to inheritance related bugs? If you declare inheritance the norm for type definitions you're maximizing the risk of some inheritance being more trouble than help.
@@volbla By the same line of reasoning, all programming leads to programming-related bugs. So if we stop programming, do we solve all software problems? I don't think that's a particularly useful argument. I also don't see inheritance as the norm for type definitions. It's one of many tools in a software developer's toolbox and should be used with measure. Even in OOP it's hardly needed and often overused.
@@IkeFoxbrush _"Even in OOP it's hardly needed and often overused."_ I think that's exactly the point that both I, FaRo and Brian are trying to make. Some schools of thought (or workplaces, like the Minecraft development) overuse inheritance because it's useful in theory. In other words, they consider it the norm when defining new types. In practice that leads to some nasty bugs, such as armor stands getting attacked by monsters. In my eyes "Inheritance is overused" is the same statement as "There is a culture that normalizes inheritance." The problem isn't that it exists. Of course it can be useful, just like any other tool. The problem was all along that it's overused. That's what leads to a bigger risk of mistakes.
With 40+ years of programming I feel safe to say that objects (as a datatype) are "good". They serve a purpose; a solution to certain problems. Object 'orientation' however is just silly. It's like saying there is 'variable programming' vs 'array oriented programming'. The whole debate of pp vs oop revolves around a false dilemma in my opinion.
That is the "OOP is just a tool" argument. But the tool is probably suboptimal. OOP 'objects' do not refer to modeling real objects or a collection of data. Encapsulating state and logic together is a central idea of OOP, which is supposed to modularize the code and achieve massive scalability. Seems like an awkward idea to me, one that creates a host of problems you could avoid by not doing that. You can achieve the same thing (modularization) by using modules and not coupling state and logic. This can look similar to OOP, and you can still do 'speaker.increase_volume(10)', that's just a matter of syntactic sugar, but I can't find any advantage that coupling state and logic provides, just problems.
@@pik910 But as data types it is useful. Just imagine complex numbers. Isn't it a good idea to couple state and logic (like multiplying two complex numbers)? I prefer a*b to multiplyComplex(a, b).
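Operator overloading is the strongest version of this point: `a * b` reads better than `multiplyComplex(a, b)`. Python's built-in `complex` already does this; a minimal hand-rolled sketch of what that coupling buys:

```python
class Complex:
    """Toy complex number; real code would just use Python's complex type."""
    def __init__(self, re, im):
        self.re, self.im = re, im

    def __mul__(self, other):
        # (a+bi)(c+di) = (ac - bd) + (ad + bc)i
        return Complex(self.re * other.re - self.im * other.im,
                       self.re * other.im + self.im * other.re)

a, b = Complex(1, 2), Complex(3, 4)
c = a * b
print(c.re, c.im)  # -5 10
```

Note this is coupling logic to a small, closed, mathematically defined value type, which is a rather different proposition from attaching business logic to large mutable domain objects.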
@@andik70 This is the thing, and I totally agree: down the line we're always ultimately losing some amount of precision, sure, but it's the cost of time and convenience. Is it more efficient for us to actually code literally more efficiently, or is it more efficient to have an easier, faster, more simplistic way to code? OOP being criticised is interesting from a computer science perspective, but to the majority of working programmers it's nothing but a thought experiment.
15 years in. There has never really been a time when speculation turned out in my favour. Every time, the boss comes around with a new idea my speculation did not predict, and so my glorious framework couldn't accommodate this "rapid iterating". It wasted time, had to be redesigned a lot, and didn't prevent bugs anyway. Nowadays I just make the code as bare-bones as possible and clean up where necessary.
@Trevis Schiffer OOP advocates make similar claims as socialists do whenever someone explains the failures of either system. OOP works, you just have to plan better and do it right.
Great video! You make some good points about the useful parts of OOP. Unfortunately, many people take OOP to the extreme and make things way too complicated. That is why I usually try not to use OOP unless necessary and don't go overboard with it.
By the time this man made this video, react was still mostly OOP. Now, most of that part is deprecated in favor of modularization and functional programming. Not only that, but this is more and more becoming the norm amongst widely used frameworks. In other words, this man was a visionary many years ago despite many people calling him crazy
I remember when the video came out, everybody was furious calling him names saying he is a n00b that never worked on a real system before, now everybody either agrees that OOP is bad or they say it's only good in specific situations (which is not really OOP since now the system uses objects but it's not _oriented_ by them).
React was always modeled on functional programming ideas. The fact that they used the class syntax doesn't make it OOP. In fact the switch to only function components and hooks has made the framework incredibly prone to spaghetti, ie stateful and effectful code in every part of the system creating hard to debug problems. The quality of code in codebases I've worked in has steadily decreased since I started working with react in 2014.
@@rumble1925yeah idk what the op is talking about. React has been about composition since day 1. And I agree React has been getting worse, it’s a spaghetti hooks western
it doesn't matter what Alan Kay says, the meaning of OOP has changed now. Meanings change as their usage changes, and the modern OO languages are generally accepted to be "OOP".
@@ysink As does the meaning of "great performance relative to the hardware". Feels like we're leaving more performance on the table than we realized. Definitely explains why a hardware upgrade doesn't feel as "significant" as it used to be 20-30 years ago. Take the drive for minimal size/max hardware utilization from the 1970s (back when Smalltalk was more in demand) to the early 1990s and bring it to today's hardware (with consideration for different u-code) and you'll be amazed with the results.
The farther away you get from the hardware, the more performance you're pretty much guaranteed to leave behind. More than that though, it feels like programmers neither care nor try anymore. Just copy and pray-st from SO.
@@SimGunther Part of the problem with hardware upgrades not feeling as significant as they used to is that they actually aren't as significant as they used to be. Back then, processor speed would literally double from one year to the next. I'm not joking: you would literally go from a 33 MHz processor to a 66 MHz processor. That's freaking huge, and very, very noticeable. Today the yearly increase in raw processor speed is something like 5-15%. Moore's Law was originally coined around this yearly doubling in processor speed. As the doubling slowed down, it got stretched to 18 months and beyond, and then the definition shifted to a system's total capability rather than its processor speed.

Also, the processor stopped being the bottleneck for perceived performance in most cases. Ever since the 90's, it's hard drive speed and RAM quantity/speed that have dictated the "feel" of a computer. You can find arguments on old computer forums debating which drives were the fastest and how you should format them to get the most performance out of them, because even a small change there gave a noticeable performance boost. That's why SSDs were such a game changer when they came out: they were an order of magnitude faster than any spinning disk on the market, which dramatically loosened the hard-drive-speed bottleneck. They are still more significant for general performance than processor speed.

That means a computer with a 10-year-old processor but a modern SSD and plenty of RAM isn't going to feel significantly different from a modern system until you start doing some processor-intensive work, which is not something most people regularly do. You can think of it as though modern computers can lift heavier weights than they used to, but they can't walk any faster. Most people don't go around carrying heavy weights all the time; they are just walking, so you don't notice it.
Modern software, on the other hand, does take advantage of the fact that processors can lift more weights by adding more and more features. It doesn't feel any faster, but you're doing a lot more work without you even knowing it.
@@jeffwells641 Otherwise a well-explained comment, but unfortunately I have to disagree with your last two sentences. Modern software has very little or no added functionality, and way too often even reduced functionality. But the weight has indeed grown, even a hundredfold, mostly due to the poor professionalism and carelessness of developers, as it is nowadays too easy to resort to "just buy new devices, they are cheap".
OO modularizes speculatively. Honestly, I couldn't have said it better myself. I'm a Go developer and I recently started a job with a lot of Java developers. It's been hard going through these codebases that are overly abstract for the sake of abstraction, leading to simple things being done in complex ways. Go, and to a lesser extent Rust, are really going to save this industry. Go's emphasis on simplicity has made my designs far more focused and minimal. And I love these videos that patiently deconstruct the dogma of OOP. As a former Java developer, it took me years to unlearn my OOP brainwashing. I wish there were more people out here talking about this stuff. I also think young developers need to hear these ideas.
I won't say to a lesser extent. Both Rust and Go have the module system down pretty well, but for me it was Rust that made me realize what I really needed out of classes in C++, and what was unneeded, simply because higher-level languages have had module systems for quite some time, and Rust proved there isn't anything about this idea that restricts it to high-level code.
I am a beginner programmer and I'm going crazy: some people say OOP is good and others say it's bad, and I don't have enough knowledge to distinguish and analyze on my own what I should do. Btw, I study Python and use pygame and Django.
..You know, I'm glad that Lua was my first language. Procedural coding is how I code normally; coding OOP, even in C++, is very, very strange. It breaks everything to pieces and makes it really, REALLY hard to understand what I'm reading. Procedurally I'm usually writing in one function, because I'm usually not doing anything bigger than a few hundred lines... yet. I still don't understand the point of making unnecessary functions; why have 17 lines when you can use 6 instead?
I really like OOP, but only because I finally understood that I should not over-modularize things. If I have to download, parse, and export a file, I should put all of this in one method if I always have to call these 3 steps together. There is no need to separate them just (as you said yourself) for the sake of abstraction. I always start coding procedurally; then, when I need to use a part of the code several times, I put it in a function; then, when I need to use several functions that belong to the same "group", I create a class *or* a main function, depending on the problem I need to solve. Putting everything in classes and separate methods just because "that's what OOP is about" is really stupid. You can do very dumb things with Go or Rust too. The problem is not really the language or the paradigm; it's who uses it.
I used to do speculative modularization. Nowadays it's anything but speculative. Let's say you have to display a page of avocados so the user can select the avocado they prefer and add it to a cart. So, classes: PageOfAvocados, Avocado, Cart. There is nothing speculative about that. Feature requirements, and how they naturally categorize and break down, dictate all your classes. If a feature is way too complex, then your clients won't understand it either. The noob I was back in the past would go something like: PageOfAvocados, SelectedAvocados, Avocado, Filter, CartController, Cart, FruitSuper, AvocadoSelectController,....
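The feature-driven version of the comment's example might look like this. Class names come from the comment itself; the fields and methods are illustrative guesses, just enough to cover "select an avocado, add it to a cart":

```python
from dataclasses import dataclass, field

@dataclass
class Avocado:
    name: str
    price: float

@dataclass
class Cart:
    items: list = field(default_factory=list)

    def add(self, avocado: Avocado):
        self.items.append(avocado)

    def total(self) -> float:
        return sum(a.price for a in self.items)

@dataclass
class PageOfAvocados:
    avocados: list  # what the user picks from

page = PageOfAvocados([Avocado("Hass", 1.50), Avocado("Fuerte", 2.00)])
cart = Cart()
cart.add(page.avocados[0])
cart.add(page.avocados[1])
print(cart.total())  # 3.5
```

Three classes, each mapping to a noun in the feature description, and nothing speculative like `FruitSuper` or `AvocadoSelectController` until a requirement actually demands it.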
I think anyone worth their salt who's coded for more than a couple of weeks in a language celebrated for OO, like C#, Java, Ruby, etc., begins to pick up procedural coding habits out of necessity, whether they realize it or not, even if they think they're doing OOP. I tend to think OOP is pretty useful for dealing with dependency, probably as a bias because it's what I cut my teeth on before going back to learn K&R C. But I also agree with about 95% of what you've said in these videos. In all honesty, OOP and its gurus just have bad theory and a messianic complex, and anyone who follows their conventions will eventually get sick of breaking themselves on the rocks of endless encapsulation, and of debugging the stupid class interactions and inheritance they write for themselves, and just say "fuck this noise" and look to static classes and static methods called from main for their salvation, because it's easier to have one throat to choke when something breaks.
In fact, every GOOD software development book these days advises against too much inheritance and against multiple inheritance, favoring interfaces and logical composition with separate top-down layers and a common-sense approach... but too many people, even "teachers", are ABUSING objects for everything, always, ideally agile, on standups :-/
@@7alken It's telling that even OOP gurus finally admitted that the old solution (composition) is way better than their fancy philosophical construct (inheritance). It didn't used to be this way. They used to tell you to add some third class to manage the interactions between classes on different nodes of the hierarchy, adding even more complexity to an already complex system. Now they try to cover their tracks and pretend they never espoused this madness, claiming that using composition doesn't deviate from OOP's original vision and is the "right way" of doing OOP. Redefining the concept to keep it alive.
@@Vitorruy1 any very strict rules are nazi, except the roads
9:23 Oh yes, I had to work in such a codebase for a while… A method "doThing" called a method "doTheThing", which called "doThatThing", which called "reallyDoThing"… and it often eventually just led to a library method whose source code was not available.
And if you need something more, you'd better create a data pipeline that goes through all the layers instead of getting the data directly, otherwise it's not """"""clean"""""" code.
Your videos are very delightful and useful. They teach properties, problems, and philosophies rather than mere recipes (and they go straight to the point).
Just write code that does what you want it to do; stop obsessing over what type of code it is or what "philosophies" it adheres to. Make the machine do the least work possible to safely change the data the way you need it to securely, fitting the code into the right category of design should be done in retrospect if at all.
When you brought up the separation between state and logic modules, it immediately reminded me of pure functions vs the IO monad in Haskell. I think that's one of the reasons functional programming has such a good reputation. It enforces a separation between logic and state management while encouraging a coding style that keeps the latter to a minimum.
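A sketch of that separation in Python rather than Haskell (names are illustrative): the logic lives in pure functions, while IO and state changes are quarantined in a thin outer shell, mirroring pure functions vs the IO monad.

```python
def apply_discount(prices, pct):
    """Pure logic: same inputs always give the same output, no side effects."""
    return [round(p * (1 - pct / 100), 2) for p in prices]

def main():
    """Impure shell: all IO lives here, kept as thin as possible."""
    prices = [10.0, 20.0]  # imagine these were read from a file or database
    print(apply_discount(prices, 10))

main()  # prints [9.0, 18.0]
```

The pure core can be tested with plain assertions and reused anywhere; only the shell needs mocking or integration tests, which is the practical payoff of keeping state management to a minimum.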
Huh. I thought functional programming has such a bad reputation for being unintuitive and hard to learn, basically because it isn't imperative. I myself like functional programming (I'm a math major anyway) but I totally understand why my computer science friends dislike it, it's very different and you couldn't really think in terms of loops and iterations with steps anymore, unlike imperative which always lets you choose between iteration and recursion, etc
This video was released a few hours ago, so I’m assuming everyone who’s commenting actively follows the channel... so what’s the confusion? People are complaining that no code was shown or that Brian is flip-flopping, but if you’ve watched his videos... he’s explained this before, and given many code examples. Brian’s problem with OO has always been abstraction for abstraction’s sake. He’s demonstrated how imperative code is more straightforward and gives a clearer picture of the system rather than the individual part, and he’s argued that context is important to software-that a developer should be expected to understand the system and not just the arbitrary chunk of logic. This video just expands on these ideas. Watch his other vids if you’re still confused; it’s there; it’s good stuff.
"He’s demonstrated how imperative code is more straightforward and gives a clearer picture of the system rather than the individual part" Which is fine when you're designing a system from square one. However, when you're maintaining and/or extending it, most of your effort is spent rearranging and/or reusing the parts, in which case properly-done OOP is going to save you big time.
Aram Fingal not in my experience. I’ve seen systems that were certainly properly arranged OOP, but they were so over-engineered that they might as well have been spaghetti code. They were prime examples of exactly what Brian’s described in the original video - abstraction for abstraction’s sake, confusing ServiceFactoryInterfaceVerbNoun names, and the actual logic buried behind layers of object-passing nonsense. I know the first video made a lot of people angry but he was bang on with it, in my opinion. Not saying we’ll ever have to give up OOP, because hahaha good luck with that, but I hope people think differently about how they structure their OO code.
@@claireryan7644 "abstraction for abstraction’s sake, confusing ServiceFactoryInterfaceVerbNoun names, and the actual logic buried behind layers of object-passing nonsense" That's not properly-done OOP. Yes, I've dealt with fresh college grads' code where half the lines consisted of one-line methods and the associated Javadoc, Java inquisitors whose sole purpose in life is to condemn deviations from the orthodoxy without producing any code themselves, etc. The fact that some people take the paradigm to a ridiculous extent does not invalidate the paradigm. And a lot of the advice in that original video was horrible. Have fun profiling code where every function is 1000 lines long with only comments separating the different tasks.
Aram Fingal relax, man, I’m just telling you my experience. I’m a senior dev and I have seen a lot of really terrible systems. Yeah, this is an extreme example, but the problem is that it’s not rare and they’re not built by college grads who don’t know any better. They’re built by devs who think they’re doing it right. OOP run amok is a problem and that’s why I think Brian makes some good points. You don’t have to follow his advice to the letter - I don’t - but it did make me think more about how to structure my code. Honestly, I’ve had to debug giant enterprise OOP systems and procedural thousand line functions, and I find both fun? I mean, I love my job, it’d be weird if I didn’t.
I programmed for 3 years with OOP in mind and could never do it perfectly. Not even decently; I always wasted time thinking about abstractions. There go 3 years of my time wasted on OOP. However, when I tried to write algorithms, like a sorting algorithm, without thinking about objects, I was always faster.
Go from a to b, get a working "prototype", and then you can start "refactoring", or more like starting from a again (now that you fully understand the complexity behind the problem), only with an idea of how it could be encapsulated. I'm sure you will see much better code without huge amounts of up-front thinking. I admit I kind of have a problem with OOP too :D
Why would you try to hammer a square peg into a round hole? OOP is a tool with a specific use; if you try to use it for everything, it's going to seem bad. If you use it for its intended purpose, it will be useful. You don't need objects to write a simple sort, and doing so is bad and you should feel bad.
I found the idea of splitting into state and logic modules quite similar to what data-oriented design proposes, although the idea there is more about collecting things in large arrays so they can be processed efficiently together. Improved modularity and better handling of cross-cutting concerns is a nice benefit, though.
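The data-oriented "struct of arrays" idea mentioned above, sketched minimally: instead of a list of particle objects each holding a position and velocity, keep one array per field so a pass over positions touches contiguous data. (Plain Python lists only gesture at the cache benefit; in practice you'd use numpy arrays. All names here are illustrative.)

```python
# State: parallel arrays, one per field ("struct of arrays").
xs = [0.0, 1.0, 2.0]    # positions
vxs = [0.5, 0.5, 0.5]   # velocities

# Logic: a free function that processes whole arrays at once.
def step(xs, vxs, dt):
    """Advance every position by velocity * dt; velocities unchanged."""
    return [x + v * dt for x, v in zip(xs, vxs)], vxs

xs, vxs = step(xs, vxs, 2.0)
print(xs)  # [1.0, 2.0, 3.0]
```

Notice that this layout naturally produces the state/logic split from the video: the data is inert arrays, and each system (physics, rendering, …) is just a function over them.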
Thanks. Good follow-up to the previous video (which I've only just watched). After years of being a slave to OOD, I have evolved toward a similar technique: pushing as much logic as possible into a central core of pure functions, which are fed by (impure) APIs written in a more OO style. Moving from C# to F# has helped tremendously with this.
9:15 I have always felt this way about OOP, but I can never tell whether it's my inability to understand the code or the code's inability to be understandable.
It can be very readable. It just gets abused by people that like to write complex convoluted code as some sort of "hey look at me" cock measuring contest.
I don't code much, but I've listened to most of Brian's viedeo out of just the pleasure of hearing him elaborate his opinions. Very well exposed, good luck with your future projects!
Hi! Thanks for the interesting video. The FORTRAN 90 language has a very nice module programming style. I worked with some computational tools written in FORTRAN 90 where everything was a module with public and private interfaces. In all honesty, it was very, very easy to follow and understand the code. I totally agree with module-oriented programming. It is really nice and efficient.
I have enjoyed this series. I agree with many of the strong points in earlier videos, and completely with the presentation of those points in this video. I laughed out loud when you said UML was fucking useless in the last video. I'm a systems engineer, and you wouldn't believe the number of times I find a modeler head-down in their model, which has grown into an enormous unwieldy beast, obsessing over notation and symbology, yet it takes them quite some digging to remember why they're modeling in the first place.
In C code design in our big work projects, we call a C file (.c + .h pair) a “class”, which effectively acts as a static class, much like your concept of a module.
It's the concept of modules which is important, to break up a large project into manageable parts. A class is usually a poor solution to gain namespacing/module behaviour.
@@csmusic6505 Haskell is awful... no, I'm just kidding... I know it's me that's the problem and not the language... but I'll be damned if trying to learn it wasn't the worst decision I ever made. It put me off learning programming for like a year. C++ was a breeze to learn compared to it, and that says something, because C++ is dense. *edit* F it. I'ma f****** do it again! Dusting off that textbook and figuring out where I left off. I feel like I've finally had long enough to overcome my PTSD from the last time I tried this.
@@cranknlesdesires I probably put another 30-50 hours in before I got burnt out again. But I learned a lot more this time. Probably one more go at it like that and I'll be a genuine novice.
I think your state/logic modules need an example. Otherwise, this 9-minute video very clearly formulates what I was always feeling about modularization and OO but could never succinctly formulate. Thank you for that.
@@YYYValentine He's probably talking about either Object-Oriented Programming is Embarrassing: 4 Short Examples or Object-Oriented Programming is Garbage: 3800 SLOC Example. I think the 3800 SLOC video is the one where he really goes into re-organizing the code and splitting out the logic from the state.
@@VivekYadav-ds8oz How do I know what is a real-world Go/Rust project? The NES emulator written in Go shown in the OOP is Garbage video seems to me like a real-world Go project, but I guess it's badly written; otherwise it would not be in this video.
OOP creates the need for a bunch of shitty design patterns that are just not needed if another paradigm is favored. Yes, that other paradigm will require different abstractions that in a way constitute design patterns themselves, but it's just not the same.
This video I found thought-provoking. Without counter-arguments, a clean distinction between state and logic sounds nice. What I both disagree and agree with is the preemptive creation of abstractions. At least the way it was presented, it sounds like: start coding and worry about design later. Design is about abstractions. If you have a good design, then creating the abstractions is just an exercise in writing code. I do not agree with trial-and-error programming. It has the fundamental flaw that your resulting code seems to work, but you never put enough time into design (creating abstractions) to know if the end result is ideal. I work with this type of code all the time. It's passing tests, it passed QA, it's been running live in prod for decades and no one is complaining, therefore it must be "correct". Nope. Many dozens of people have read the code, but few find the logic flaws that masquerade as working code while silently corrupting data in novel ways that few could even fathom. Correctness of code requires design, and abstractions are just a codified reflection of the design. Abstractions are required for any correct code, regardless of whether they are implemented as abstractions in the code.
Design is about fitting the problem to be solved; abstraction is a tool, in that it can reduce complexity, provide guarantees, and increase efficiency (if done right). But wrong, too many, or even missing critical abstractions all contribute to problems. Wrong and too many are the most devastating, however, so avoid abstractions that serve no purpose in the near future.

With respect to trial and error being bad and needing design first, that is just being evasive IMO. Design requires trial-and-error iterations as well, and there is no logic in thinking that the process needs to be fully separated from writing code. Nothing is perfect right away, and lots of trial and error is how people learn and grow. The best working designs have lots of conscious iterations performed on them, each stressing the design in a different way, and are not grounded in a rigid methodology. It's a mental process foremost, not something that can be encapsulated in "tests" all that well: make sketches, verify assertions, and solve the puzzle. And this is why there is no set recipe; only talent and experience factor in, and it's also why it's delusional that the industry tries to commoditize the work as if it were a packaging factory. A development process is like creating a sculpture from nothing, with the only differences being that there are objective results to be met and a rich toolbox you can learn and use. There are techniques that help: defensive programming for one, and seeking out the simpler kinds of solutions first.

As for long-"working" code that silently corrupts data, that likely has nothing to do with how it was made and more with the experience level and requirements at the time of making. If a design had been made first, it would not have been any better, as a person's capacity and experience would be the same. Most don't know what they do not know, but don't get me started on that. Today's typical developers look like monkeys to me now.
Still, some of what I see I used to do as well. In the end, it's that all-important experience over time. If there were a recipe we could all follow for perfect results, that would be convenient, but there is not.
@@TheEVEInspiration I would say thinking about the problem thoroughly can probably reduce the number of iterations. I guess this is the main point behind "design first". It can therefore reduce development time. When only limited time is available (which is always), it can also increase quality. However, I think it's more important that the developer is actually willing to try different approaches. This might also mean the developer has to refactor large portions of the code he has already written.
Truth be told, the most important thing is the KISS (Keep It Simple, Stupid) principle; nothing is more important than that. When it comes down to it, the only thing you truly want is for others to understand your code as fast as possible without thinking too much about it. So if others read your code and understand it as fast as possible, then you achieved KISS; if not, then you didn't. Programming paradigms like OOP, procedural, and so on are, in theory, only trying to help you achieve KISS. And how can you achieve KISS in a complex program? You need to spend an enormous amount of time planning said program; there is no good way around that. Well, at least this is my personal opinion :)
The issue is less how simply you achieve a goal than the sheer number of goals. OOP philosophy encourages high abstraction, high encapsulation, high constraint... it gives you many more goals than just delivering the feature, requiring you to juggle a billion made-up concerns, which leads to complexity even with KISS.
What we need is programs that are centred around "types". I've used scare quotes because I'm anticipating that someone is going to tell me that what I'm about to describe is not, properly speaking, a type in the same way that an int or a float is. What I mean here by a "type" is a set of state-carrying variables, each of which is itself a "type" or is a genuine primitive type, such as a float or an int, which does not resemble a set, which is how the word is normally used. For lack of a better word, let's proceed. The idea behind OOP is to bundle mutable state with methods for mutating that state, these methods providing the "interface" to that state. This requires the external caller of a method to know more than they really should have to. The caller generally knows, at least partially, what kind of mutation of state they want to achieve, or at least what class of mutations. And the caller generally does not care HOW this mutation is achieved, so long as the mutation is predictable. So, really, the caller should not have to know which method they are calling or what an object's methods do. They should only have to provide a representation of the state they want to achieve, and the object should figure out how to move from its current state to a state in the target class. So instead of "public methods", we should have public variables: public not in the sense that they can be mutated by external code directly (i.e. without submitting a mutation request to the object), but in the sense that the object's user is expected to know that those variables exist, in order to be able to tender state-mutation requests properly. And this expectation is usually made of an object's user in practice ANYWAY, since the caller of some state-mutating method only calls the method *because* they ultimately intend to mutate some state, somewhere (possibly not directly in the object in question).
An "object" should also have attached to it a set of rules about how it is allowed to move through state-space. Some mutations may be forbidden. For example, if you have a type supporting some non-reversible operation, and a boolean to mark whether that operation has been applied, that boolean should always initialize to False, and you should never be allowed to mutate it from True back to False; which is to say, all requests to that effect should be denied by the object. This produces a change in the meaning of "object", because methods are not a part of the object; only typed variables and their mutation rules are. It's a little closer to how Smalltalk objects interact via "messages", but here the method name can never be contained in the message, and the message always takes the form of a class of type values that you want the object to move into. So if the methods are not actually a part of the object, should they belong to the object at all? I don't see any necessity for this. You can follow exactly the prescription Brian makes here and separate state-mutating variables from logic. Here logic is represented by the "objects" (again, the scare quotes signify that we're abusing terminology), and state management can be separated out into simple procedures, which is an inversion of the normal approach.
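A minimal sketch of the idea above, assuming Python and entirely invented names (`Document`, `request_state`): the object's variables are there to be read, but all mutation goes through one request method that enforces the rules for moving through state-space, including the irreversible boolean the comment describes.

```python
class Document:
    """State is public to read; mutation happens only via request_state,
    which checks the object's rules for moving through state-space."""

    def __init__(self):
        self.text = ""
        self.finalized = False  # irreversible flag: always starts False

    def request_state(self, **target):
        # Rule: once finalized is True, it may never go back to False.
        if self.finalized and target.get("finalized") is False:
            return False  # request denied
        # Rule: a finalized document's text may not change.
        if self.finalized and "text" in target and target["text"] != self.text:
            return False
        for name, value in target.items():
            setattr(self, name, value)
        return True

doc = Document()
doc.request_state(text="hello")          # accepted
doc.request_state(finalized=True)        # accepted
ok = doc.request_state(finalized=False)  # denied: the operation is one-way
```

The caller never names a method that "does" anything; it only tenders the state it wants, and the object accepts or refuses.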
I recommend functional programming. You achieve everything you need without abstraction for abstraction's sake. Peter Norvig (director of research at Google) demonstrates that 16 of the 23 patterns in Design Patterns are simplified or eliminated (via direct language support) in Lisp or Dylan.
Design patterns are bandaids for the deficiencies of OOP. There's no question about that. The main reason a couple of the design patterns even exist is that functions aren't first-class citizens. If we lived in some ideal fantasy land where writing in a functional programming language was maintainable for large codebases, then yes, writing within a purely functional paradigm would be ideal. But we don't live in that fantasy land.
Man, that ending picture of Alice is exactly how I feel when I look at some legacy code. And the Cheshire cat is the comments of the lead engineer who quit the job 3 years ago.
A great complement to your "OOP sucks" videos! So may I summarise the argument here as that (a) a well designed and appropriate level of modularity is an essential strategy to control and comprehend all complex computational systems, (b) concentrating on artfully defining good stable data types and abstractions (ie *what* is being shared between modules) is usually far more important in writing modular code than concentrating on defining module interfaces (ie. what the modules actually *do* to the data), and that (c) most of the problems with object oriented programming that you were referring to in your earlier videos were related to OOAD's propensity of taking patterns which may sometimes be useful to control complexity at high levels of scale, and rigidly (dogmatically) applying them in situations where they unnecessarily *add* complexity. Or, more poetically perhaps: coding complex systems is an art, and modularity is its paintbrush...
I'm trying to get into OOP, and the picture you put up of the empty box is how my head feels. In theory I very much get it and its implied usefulness. But when I sit down and try it out, my head feels so stretched and empty and a mess. I don't know where the hell to start, but at the same time I do know where to start. I'm in a state of overthinking it, and for what? I'm wasting time.
Even when I write in Java, I tend to only write procedurally unless I want to represent a noun. In that case, I write a class, or multiple classes in a hierarchy, that contain the components of that noun. For example, a button would have a position, a size, and a display state. This is neater than programming in C, which tends to turn into confusing spaghetti code because of headers and source files. However, there are no "verb" classes in my code. This makes sense, and is probably one of the biggest criticisms you level at OO. The truth is, the problem with OO is that people misuse it and make things confusing. It's not good or bad. Like any other paradigm, it has its uses. Some things it makes sense for, others it doesn't, but until people stop using it like ketchup on food, it'll have a bad reputation.
I always felt something similar to relational modeling would be a better goal for module organization. OOP tends to force one into a hierarchical and/or nested model. When it goes outside of those to escape the limits of trees, it becomes a big ball of pointers, like what databases used to be before relational came along. For example, if you want to control your reference to a "parent" scope, you can, using a foreign key: you have an ID number pointing to the parent scope module "table". Such a key could perhaps even be computed at run time. Relational manages cross-references better than RAM pointers do. And hierarchical file systems for managing code are similarly limiting. We are outgrowing trees and nested-ness as ways to manage code.
OK, programmers seem to be obsessed with the way they organize their source code, without ever giving a thought to the obvious: every capable compiler will completely flatten whatever elaborate code organization you came up with for the purposes of your "team" (aka a group of people who can't stand each other and who don't talk to each other except when the entire building is on fire). What you would have to think about instead is the runtime organization and dynamic execution of your code, which has little to do with how you define interfaces, datatypes, etc. That is mostly a function of the actual control code that you put in.
I think this is a great refinement of the original video. As someone who would label themselves an OOP programmer (it's the way I was 'raised', so to speak), I found myself resoundingly agreeing with this entire video. It's all about moderation: abstraction and encapsulation are absolutely useful concepts, but they can definitely be taken too far. I would like to think that no single paradigm is best, that the best programmers are those who have a solid grasp of the basics of all paradigms, and that the best languages/frameworks are the ones that allow us to use a healthy mix of all those concepts. I would like to think that modern programming, for the most part, is multi-paradigm.
This is my understanding of what was presented, along with questions. Am I misunderstanding any of these concepts?

State modules
Understanding: Stores the state of the program. Avoid merging state with various logic units, to help manage state across many features while also providing a "simple" interface to obtain said state.
Question: Say we have an inventory, and common functionality is to add or remove items from the inventory. Since this functionality only modifies the state it belongs to (the inventory) and nothing else, I feel it would be okay in this case to merge state and logic into one module?

Logic modules
Understanding: Essentially pure functions that receive state and output a 'copied', transformed state. These should avoid directly modifying program state and instead defer that role to whatever called them. By this logic, could this be considered a form of pipe-and-filter mechanism?
Question: How would the actual state updating take place? A different kind of module that essentially links the two?

"Controller modules"
To address the previous question, you essentially have another kind of module that orchestrates these interactions. Is this completely off base? For example, say I have the inventory again and some mechanism to pick up an item to be added to the inventory.
1 - The controller receives a pick-up request
2 - The controller goes to state management to obtain Inventory state from a state module
3 - The controller goes to a logic module to transform the picked-up item ID into the actual item to be stored in the inventory
4 - The controller gets an "item to be stored" from 3
5 - The controller updates Inventory (using the transformations provided in the Inventory state module, or else a logic module)
6 - The controller requests an Inventory state save by passing it to a "Serializer" logic module
7 - The controller goes to whatever manages the database and passes the serialized inventory for saving
Step 6 is where I can see a bit of mess.
Would it be better to have all "pure" state transformations contained within the same state module such as "Inventory", or split it up into "Inventory" and "InventoryLogic"?
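For what it's worth, here is a minimal sketch of the flow the comment above describes, assuming Python; every name (`InventoryState`, `resolve_item`, `pick_up`, the catalog) is invented for illustration. The state module owns the data, the logic modules are pure functions, and the controller is the only place that sequences the steps, including the step-6 serialize hand-off.

```python
# --- state module: owns the data, exposes a minimal interface ---
class InventoryState:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

# --- logic modules: pure functions, no hidden state ---
ITEM_CATALOG = {1: "sword", 2: "potion"}  # stand-in for a real lookup

def resolve_item(item_id, catalog):
    return catalog[item_id]           # id -> item (steps 3 and 4)

def serialize(items):
    return ",".join(items)            # step 6, trivially

# --- controller: the only place that sequences the steps ---
def pick_up(item_id, inventory, save):
    item = resolve_item(item_id, ITEM_CATALOG)
    inventory.add(item)               # step 5
    save(serialize(inventory.items))  # step 7: hand off to persistence

saved = []                            # stand-in for the database layer
inv = InventoryState()
pick_up(1, inv, saved.append)
pick_up(2, inv, saved.append)
# saved[-1] == "sword,potion"
```

On the closing question: in a sketch this small, keeping the pure transformations next to `InventoryState` would be fine; splitting out an "InventoryLogic" only starts paying off once the transformations grow.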
To me, the first thing I imagined when you talked about splitting state and logic modules was actually functional programming. At first glance, the distinction between state and logic seems to map to the difference in Haskell between functions in the IO monad and any other function.
I had a hard time properly understanding what "state" and "logic" really meant, in essence, until he called them "data" and "actions", which is when it clicked for me. The reason is that OOP made me confuse things, because an object, inherently, by design, has both data and actions, and the data and actions are tightly coupled. It's really refreshing to see someone explain why OOP is bad and how OOP could be good (in a more recent video than this one), because I kept saying for at least half a decade that OOP is not a good programming paradigm in the way it is being taught, used, and advertised, but I couldn't quite externalize the exact problems it has.
Interesting video series. A lot of it is over my head, but I'm beginning to grasp the differences between styles and what's dogma vs what's reality. When I compare what people say about programming against the quality of programs, I see abstraction as a stopgap for lack of competence or willpower, kind of like how owning a car makes it less likely for people to walk or ride a bike even in situations where it would make sense. Maybe this is a result of the industry pushing for results prior to things being ready, and people have to take shortcuts as a result.

In the video game industry, I think of a game like Pokemon Blue/Red vs games of today. Games today can sometimes barely get past the damn title screen without some sort of bug, while people can literally build a working chatroom inside of the original Pokemon games, and those suckers will just plow through every bug like they aren't even there. Bugs became features that sometimes made the game more fun and replayable, while today things often just break. If we're really abstracting and segregating or whatever the dogma is, I would think several pieces of the machine could break before the full program goes down, but most often that isn't the case.

The main question I ask is this: How much extra stuff should I have to learn/read for the sake of making my program more readable? I started my programming journey with Python, and when I try to read other people's programs, I often see imports for stuff even I could do, as a rookie, with a few lines of code. Instead, they import an entire module to accomplish a simple thing, which tells me they aren't actively considering the necessity of what they're doing; they're just pressing buttons in the right order to make the machine go, because that's what they were taught by other people who were taught the same thing by the last guy.
There are two main problems with OOP. The first is that it asks you to move run-time decisions into compile time, which requires endless refactoring if you made the wrong design choice and/or the requirements change. The second is that it forces you to make poor data layout decisions based on form over function. That can literally destroy performance in a few lines of purist code.
Huh, that's interesting, because there's one part of a library that I have where there's code that does some dangerous stuff if it's not handled properly, so I made an abstraction on top of it so I don't have to worry about handling those special cases when I need to use the underlying functions elsewhere.
@@ethashamuddinmohammed1255 Given that Haskell is purely functional and pretty much everything is immutable, all code deals with logic by default. In this sense, you'll mostly be dealing with "how do I make functions that can be chained together to come up with a deterministic result". Operations relying on state, on the other hand, e.g. IO, or operations that in general produce a "side effect", are generally abstracted through monads, which IIRC are just data types that wrap around values, but with a "side effect" attached when you want to "get" those values. Monads can also be sequenced together to produce a desirable result while maintaining the side effects, e.g. obtaining a random integer from a user-defined range can be expressed as "the side effect of asking the user for input is crucial to produce the side effect of getting a random integer".
@@ethashamuddinmohammed1255 It's a bit difficult to talk about Haskell without going through the theoretical stuff, since it heavily relies on those theories to accomplish what it does. Here's a short FizzBuzz solution in Haskell: wiki.haskell.org/Fizzbuzz `fizz` is a function that takes an integer and returns the appropriate response string for that integer. This is part of the logic module, since it's not really able to access external state, nor does it have any explicit internal state. You have to explicitly pass it the value that you want. `main` is the entry point and runs in the `IO` monad. Since you can consider the `IO` monad, which serves as the abstraction for IO, part of the state module, it can go ahead and reach out to the outside world to print information, while also being able to use functions in the logic module. tl;dr: Logic modules -> pure functions; State modules -> monads and other abstractions
@@ethashamuddinmohammed1255 The main reason for monads is 'returning an error' from a function, which wouldn't otherwise be possible without side effects. Let's say you have a :: Int -> Int and b :: Int -> Int, and you call b . a, but there is a possibility that a returns an error. So you change the declarations to a :: Int -> Maybe Int and b :: Maybe Int -> Maybe Int, where Maybe is an example of a monad that either has an Int value or is an 'error' (a value named Nothing), and it comes prepackaged with Haskell. It turns out this is convenient in general any time you need to replicate exiting a function chain, and even elsewhere. The Maybe monad and the IO monad are the most-used examples of monads.
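For anyone without Haskell handy, here's a rough Python analogue of that Maybe chaining (the names `a`, `b`, and `bind` mirror the comment, and `None` stands in for `Nothing`, both assumptions of this sketch): failures short-circuit the chain without exceptions.

```python
def bind(value, fn):
    """Apply fn only if value is not the 'Nothing' case."""
    return None if value is None else fn(value)

def a(x):
    # Int -> Maybe Int: fails (returns the 'Nothing' case) on negatives
    return None if x < 0 else x + 1

def b(x):
    # Int -> Int, lifted over the Maybe-like value by bind
    return x * 2

result = bind(a(3), b)   # 8
failed = bind(a(-1), b)  # None: the error propagated, b never ran
```

This is the "exiting a function chain" convenience the comment mentions: once any step yields the error case, every later step is skipped automatically.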
I absolutely agree with this video! I have seen enough proprietary frameworks (in Java) with crazy chains of abstract classes, inheritance, and NotImplementedExceptions to know that you shouldn't push OOP too far. And yes, you still need "some" OOP ideas to split things up when they are "too big". Just like many comments below said, don't let yourself get stuck in a certain paradigm.
This is what has been serving me well: 1. A requirement can usually be split into a small number of sub-cases, and this is where I think OOP helps a lot: let a component handle a sub-case. 2. Within a component, use procedural programming, as it's likely a list of actions you have to go through while keeping tabs on your current state. 3. When an action is too big, I write pure functions to deal with it. This also likely makes the code in 2 look like just going through a short and sweet bucket list.
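The three levels above can be sketched roughly like this, assuming Python; all the names (`CheckoutComponent`, `apply_discount`, the cart numbers) are invented for illustration: a component per sub-case, a procedural "bucket list" inside it, and a pure function for the heavier action.

```python
def apply_discount(total, percent):
    # 3: a pure function for the "too big" action; trivial to test alone
    return round(total * (1 - percent / 100), 2)

class CheckoutComponent:
    # 1: one component handling the "checkout" sub-case
    def run(self, cart, percent):
        # 2: a procedural bucket list, keeping tabs on the current state
        total = sum(cart)
        total = apply_discount(total, percent)
        receipt = f"total: {total}"
        return receipt

print(CheckoutComponent().run([10.0, 5.0], 10))  # total: 13.5
```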
Late comment, but: this video sounds like an introduction to Erlang without any Erlang. You should probably check it out if you haven't already, since Erlang basically does exactly this: it encourages the use of modules, relegates state-code to be as small as possible, and forces the logic code to be purely functional. This kind of programming style also has the benefit of baking in notions of concurrent programming. What's not so obvious is that with a bit of extra thought, you can build in remarkable fault tolerance. Seriously, check it out. Sure, the syntax isn't great, but the design philosophy sounds like it's right up your alley!
Some really good ideas here, I've been thinking about similar things too. It feels like OOP is kind of a halfway house, some of the principles like polymorphism are great and some like inheritance can be positively bad, I've seen class hierarchies 10 levels deep, there's no way that is good design
@@n8style I'm sure you'd tell me encapsulation is the third. Slap whoever taught you that. You can have OOP without them. I can give you examples of OOP languages without them.
@@khatdubell not sure what you think OOP is but am certainly curious about these OOP languages that apparently don't have any of the 3. If you mean you can write code that doesn't take advantage of the 3 in an OOP language that offers them then I'll be pointing and laughing just so you know lol
@@n8style No, I mean there are languages, widely accepted as OOP languages, that don't have those features. Simula, the granddaddy of them all, didn't have any encapsulation, I believe. Visual Basic, if memory serves, doesn't have any inheritance. There are a large number of OOP languages out there; I'm not gonna go through an exhaustive list. The point stands: there are many languages that are considered OOP that don't have what everyone considers absolute to the existence of OOP. It's just that modern OOP languages implement these things because they are, in general, good things, and it doesn't make sense to exclude them. How they came to be considered pillars, I have no idea. They're just good features every paradigm should use if it can.
100% agreed. What irritates me about OOD is that its proponents deem it feasible to predict future use cases and all possible scenarios, and this is, IMO, pure arrogance. When you start using keywords like "public" and "private", it's like you are saying that you know what somebody should access/call and what not in every possible scenario, present and future. And since nobody is omniscient, many people will be pissed off by your choices of limiting access to some parts of the code.
Exactly this. One man's idea of pure design is the next man's roadblock when changing the design. In many cases the source code cannot be modified, so you are at the complete mercy of the designer. Countless times I've had to *rewrite* existing code just so I could gain access to hidden data I needed. Simplicity should not be confused with dumbing things down and distrust.
This is something I always had to argue about with my team lead: what to keep private and what to keep public. I mean, within your own microservice, what is even the point of public and private variables? Maybe my knowledge is limited and my experience is lacking, but enforcing a certain "this is the way" is too damn arrogant.
It's not always impossible to know what users should and shouldn't be doing. You should assume they'll be following best practices for using your code.
I had to wrangle a pile of "legacy" code recently, and they must have had some dice with generic nouns and adjectives on them, and for every identifier they rolled them a few times. It was madness. Couldn't tell head from tail on that beast. I had to FIRST rename everything to foo/bar/baz, literally, so these things wouldn't sound so similar. Once the misleading words were defanged, only THEN could I start seeing the structure that was there, and rename once more, to something that actually made sense.
Although I don't agree with everything you say, I'm quite intrigued by your monologues. I'd love to see a debate between you and Huw Collingbourne, the Smalltalk guy. Not an argument, just a pro/con discussion about OOP following proper debate rules. Your opinions tend to be polar opposites, even concerning which came first, the object-oriented hen or the procedural egg. It could be quite the event, especially if you did it live. You'd have all us nerds on the edges of our seats. 🙃
His argument that OOP splits stuff up not when it gets too big but preemptively feels slightly off to me. With OOP, stuff is split based on context. Not that there isn't a problem with this, of course, as it truly depends on how in-depth you want to define the context. As an example, let's take the classic car example. How far will you go? You can separate it into the body, the engine, the drive shaft, the gearbox, the tires, etc. But let's look at the tires (mainly because they are simple, and I'm not a car person). You can define them as one single object which defines its properties based on a couple of variables and provides some calculation methods. That should be fine for anything other than in-depth, realistic physics simulation. However, if you really want to split the atom and get to the quarks, you can say, "well yes, but the tire has the bolts, and then the tire itself, and the thing that the tire goes around, and then the thing that makes the tire look nice" (I told you I'm not a car person). You could split all these things up into smaller subclasses and further abstract everything out. But at that point you have to look at about 6 classes just to get the entire picture, whereas written as one it can be completely represented in one class. On the other hand, if you don't abstract the car, you end up with a file that's thousands of lines long, and that makes it difficult to look at just one part of it.
This exactly. The problem with writing one module that contains 1000s of lines of code to cover everything for the tire is when reusing that module elsewhere, every car gets the exact same tire assembly, unless the copy of the module is changed internally, which now introduces more problems with code maintenance because several modules will be very similar with slight changes. The problem is not with OOP; the problem is with people creating spaghetti code whether using OOP, Procedural programming, or any other code paradigm.
@@jajwarehouse1 Yes, if you have a massive class with several different ways of behaving, it means you have a lot of if statements and it becomes hard to follow one path. On the other side, if you abstract too much, you have too many subclasses, and it becomes difficult to see the entire logic, as all the parts are in different places and you almost need to have 7 classes open in different windows.
One of the most obnoxious things to me about C++ is that every time I need to use libraries and parts of the language I'm not all that familiar with, I end up having to jump all over the documentation at each line I code in order to write something I would be able to implement in plain C in 10 minutes. As a bonus, the initial code will often contain tons of runtime bugs, because at some point I wrongly assumed a method's behavior based on its name or by analogy with similar classes. Alternatively, the language and libraries will accept code using a combination of advanced features, only for me to realize later on that the program is doing something else because some feature is not supported. Yesterday, for instance, I discovered the hard way that STL containers do not support polymorphic pointers; or rather, any polymorphic pointer stored in the container will lose its association with the derived class. These kinds of problems not only make programming a lot less fun, but also make you wonder what the point is of attempting so many features and such a complex structure, if they later break down in so many non-trivial cases.
That it takes you a lot of time to understand the libraries of C++ is probably just because you're used to programming in C and know its libraries. I mean, I have mostly no clue about libraries in C++ myself, but it's surely not harder to do basic or even advanced stuff than in C. Could be wrong, though. C++, as a language itself, is pretty much a mess; I can agree with that. Sometimes I have the feeling that there are 10 ways of doing the same thing and that 7 of those have no reason to exist.
@Zamundaaa My main issue is not whether I have learned the libraries well enough, but rather that there are so many of them, and yet they are often not reliable enough. After many years of programming, I have come to the conclusion that there is some irreducible complexity intrinsic to every task, and that you ignore it or delegate it to some library at your own peril. The moment, for instance, you introduce a new C++ algorithm class to substitute characters in a string, something that can be done in ONE line of code, is the moment you are polluting the language with useless garbage, while increasing the size of the documentation for no good reason. Now I suddenly have to look up what the fail states of this new class-function are, because there is no way I will remember them off the top of my head next time I have to use it, or how it parses or deals with C-style terminated strings, carriage return and linefeed characters, etc. You basically traded one line of code for one documentation lookup, which takes at least 10-100 times longer to find, read, and understand.
@@KilgoreTroutAsf yeah I can definitely understand that. Rather have a good library than a thousand bad ones. That's also a reason why I redo most minor stuff myself, at least in my own personal projects. Even with Java, where most stuff (that I came across) is done pretty well in the standard library. I got my own "Utils" project with some stuff missing from the standard library or where I don't need the full default implementation but rather a specific thing. It's pretty useful. Not necessarily too practical when you write software for others though.
The state vs. logic separation is bullshit. Both are built from the same primitives, and the separation serves no function. Private/public is also bullshit, because that duality serves no technical purpose. Hiding things vs. letting something vary (existential typing) are hugely different things. OOP conflates ideas such as these arbitrarily. OOP fails to convey the idea that state and process are the same thing. OOP forces breaking code into submodules too early, in ways that make no sense. It's a net-negative grab bag of crappy language design ideas that will haunt people for decades.
I would love to see a video of you making an application with OO, and with your concept, to see it in practice. I am very optimistic about your way of doing it.
I'm happy that this validates how I've been coding C lately. I'm in the process of rewriting an old piece of code, and I'm pretty much following these conventions. But I got all of this just from reading about opaque types and handles.
Excellent video, and I'd like to add that there is more to programming than code. Data and process modelling are far more important than how the actual coding gets done, because if either is done badly enough, the code at the end never fits or operates well.
The way I think of it is OOP is an expert friendly paradigm, and most people just aren't experts, so they'll end up using the power they get destructively by mistake (a.k.a. shooting themselves in the foot.) You inevitably need to decide how to group logic and data, which is a combinatorial problem, and most of the combinations are just bad. People who aren't strong conceptual thinkers (which is quite a rare trait) will inevitably stumble on some arbitrary design which by chance is almost always bad. Sometimes it's high coupling, which is reasonably easy to deal with (analogous to the accretion problem in the video), and sometimes it's low cohesion, which IMO is the real killer. One responsibility smeared across too many actors. However. I do see expert friendly being equated with "bad" a lot. There seems to be some arrogance tied to that: "I tried it, and I wasn't immediately great at it, so it must be bad." Then starts the rationalization game of finding arguments for why it is categorically bad, but that often just comes out as a not-thorough-enough explanation.
My first OOP was AutoCAD's AutoLisp. Now I live in various microcontroller Assembly. Languages, like Atmel, Intel 8051, and remarkably RCA 1802 COSMAC. C# looks like Java looks like Ruby looks like Yo mama looks like Python; if you have done it as long as I have. Alan Turing was right.
I don't have much programming experience, but from what I have experienced, OOP seems to be useful for mathematics related things, where everything that could ever be done is already well defined beyond contestation. You won't add a matrix and a complex number together, so it's clear you won't need to adapt to that.
There are three problems with that: a problem with common implementations, a problem with methods, and a problem with mutability. Languages like Java, Python, and Ruby implement objects as heap-allocated, with a pointer indirection that taxes performance with short-lived objects. This makes fast math calculations nigh impossible using objects. The second problem is conceptual: objects are commonly seen as "receiving and transmitting messages" through their methods. However, most mathematical constructions have "operators" that are most commonly binary, that is, they relate _two_ objects. This is incompatible with the classical "method" thinking. The final thing is that objects are commonly thought to encapsulate state and have persistent identities. In mathematics, all state is immutable and there are no identities. (That is, even if you construct π twice, it is the same, never-changing π.) These impedance mismatches make OOP not very good at representing mathematical things.
A matrix is just data. An array would be a better representation of a matrix than a class, where you would still have to use an array to hold the data anyway. Also, for a complex number you can use a struct or a tuple; there's nothing mandating that you should have a class to represent complex numbers. If you represent complex numbers as structs and use an array to hold a matrix of them, you will write code to multiply two matrices together faster than if you write a class for everything and try to over-abstract and over-generalise. The goal of writing code is to get things done, not to satisfy the aesthetic needs of design gurus.
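As a sketch of that point, assuming Python, with a complex number as a plain `(re, im)` tuple and a matrix as a nested list, no classes anywhere:

```python
def cadd(a, b):
    # (re, im) + (re, im)
    return (a[0] + b[0], a[1] + b[1])

def cmul(a, b):
    # (a+bi)(c+di) = (ac - bd) + (ad + bc)i
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def matmul(A, B):
    # plain triple loop over nested lists of (re, im) tuples
    n, m, p = len(A), len(B), len(B[0])
    C = [[(0.0, 0.0) for _ in range(p)] for _ in range(n)]
    for i in range(n):
        for j in range(p):
            acc = (0.0, 0.0)
            for k in range(m):
                acc = cadd(acc, cmul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

ident = [[(1, 0), (0, 0)], [(0, 0), (1, 0)]]  # 2x2 identity
M = [[(0, 1), (2, 0)], [(0, 0), (1, 1)]]
assert matmul(ident, M) == M
```

The whole thing is just data plus three free functions; there is nothing here that a class hierarchy would make shorter or clearer.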
This appears to all be written from personal experience (this and the previous video). My experience has been completely different, with bad experiences with functional and procedural code and some good and bad experiences with OOP. There's a great deal of language like "usually", "sometimes", "often" without any clear-cut examples or comparisons. I would like to know some of the samples Mr. Will is basing his assumptions and opinions on.
It's all dependent on your use case. These are all focused on enterprise and server programming from the looks of it. Get into games, and you'll see that a focus on state reigns supreme.
This idea is pretty similar to Redux and React. All the state is kept in stores, all the changes in state are applied by logic functions called reducers and action creators, and React just displays the UI based on the state.
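The Redux pattern described above, sketched in Python rather than JavaScript for brevity (the action types and state shape are made up for illustration):

```python
# Redux-style loop: state is plain data, the reducer is a pure function
# (state, action) -> new state, and the "view" only reads the state.

def reducer(state, action):
    if action["type"] == "ADD_TODO":
        # Return a new state instead of mutating the old one.
        return {**state, "todos": state["todos"] + [action["text"]]}
    if action["type"] == "CLEAR":
        return {**state, "todos": []}
    return state

def render(state):
    # Read-only view of the state, like a React component.
    return f"{len(state['todos'])} todo(s): {', '.join(state['todos'])}"

state = {"todos": []}
for action in [{"type": "ADD_TODO", "text": "buy milk"},
               {"type": "ADD_TODO", "text": "write code"}]:
    state = reducer(state, action)
```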
As well-meant advice: I'd recommend steering away from React if you haven't used it much and/or have to work on a larger/complex project. I started working on a decently big project using React about 2 years ago. The idea behind it really is great performance-wise, but the way it requires you to declare state dependencies is terrible. It depends on the developer to notify it of any possible change to dependencies of the visuals of the current element, which then invokes the reaction functions that had said dependency registered. If you provide a non-primitive as a dependency, it won't detect changes to its properties, so you might be scratching your head for a while asking yourself why the UI doesn't reflect the values you see in the debugger. With the callback shenanigans you'll see the current values in the debugger, but React didn't apply them to the DOM, as the object reference hasn't changed. If you try to update the state inside a reaction method, you'll get a runtime error, basically stating 'you can't do that'. In a situation where you can only determine that you need to update the state within that method, you'll have to force a re-render, as you have to update the state outside of the reaction method first so the reaction callback can see the change. You basically add a React reference object whose only purpose is to trigger a redraw. If that somehow triggers a loop, you'll get another runtime error. The way it handles the state will not allow you to easily debug it, with current values shown in the developer tools effectively always being out of date, so they created a complete React DOM explorer as a browser extension. Because it bloats up its own virtual DOM so much, it's pretty hard to navigate, and it requires you to manually click on every state variable of an object to retrieve its current value. The state variables don't even have names, because it doesn't provide a means for that.
All state variables are simply saved in an array, where the order of occurrence determines their index. This also means that you're not allowed to increase or decrease the number of state variables for an element. It may seem easily resolvable by just 'planning ahead', but development doesn't work that way, unless you have unlimited resources and can therefore take your time to plan beforehand, read the complete documentation, experiment, etc. I probably spent a lot more time debugging React-related issues than moving forward with programming. I'm a programmer at heart, so I personally didn't mind it that much, but if my goal is programming speed, I'd rather use a mundane environment.
I think of OOP as a tool. Some problems (like the actor model) map really well to OO. Others work better with procedural methods. I'm a big fan of using the right tool for the job (I totally agree on the overuse of encapsulation, btw).
I don't see how the actor model actually works with OOP. An actor is an entity which is responsible for so many things and can spawn other actors too via messages. It's all just a mess.
The separation of state and logic is pretty much monads and functions in FP. It seems to me like any dissertation on good "OO" always devolves into FP. But the issue with procedural code you don't address is that you often end up with anemic code: large reuses of value objects that cause widespread typing dependencies. That's where FP solves the issue, with widespread polymorphism.
FP doesn't solve any of the problems OOP creates. It simply creates a lot of new issues of its own. Memory hunger, for instance, because you are not allowed to do in-place computations.
I don't think you made the inheritance part clear; it may encourage people to use inheritance just to "reuse data member declarations", forgetting all about the Liskov substitution principle. Let's first separate public inheritance (the most common one, like Java's) from private inheritance (what you exemplified with Go). To put it simply, public inheritance inherits the data AND the interface, while private inheritance inherits the data WITHOUT changing the interface. Using the public kind when you actually needed the private kind is a mistake that a lot of OO code makes. If the language doesn't support an equivalent private one (Go embedding, private mixins or traits, etc.), just use composition instead.
Long time no video. Glad to see another one. But honestly I don't think you will be able to convince anybody with these videos who doesn't already know how bad OO programming is, because humans ...
I think the paradigm used should adapt to the problem at hand. So I kinda disagree with this, because there have been times when OO has served me beautifully, and times when it has not. I think your idea would suffer from that as well. From where I stand, it seems like it's ultimately situational. Any language which enforces a specific paradigm is also a language that restricts certain uses and makes performance compromises for style. Basically, just use C++.
You would probably appreciate a new language I have just released, called Beads, that uses the State-Action-Pattern: you have a small nugget of mutable state, the view code does read-only access on the state, and the event-tracking code updates the state. It includes a graph database and a layout/event/drawing system in the language, so you can build things without external frameworks or libraries. It completely rejects the OOP paradigm. No classes, objects, constructors, destructors, functors, monoids, etc. I am confident you will like it.
@@lepidoptera9337 I was referring to Mr. Will, as he is not a big fan of the OOP paradigm. I don't know what you mean by reducing control flow; Beads uses a 2-way branch, a multi-way branch, and a looping construct that maps to conventional languages rather closely. The innovation in sequencing of computation is the automatic refresh that occurs when state variables are modified. This is very unusual, and I can't think of another language with that feature.
@@edwarddejong8025 That's because you don't want such a feature. That's exactly the kind of thing that happens in hardware all the time, and it's a nightmare to debug. I would assume that you never had to write FPGA or chip designs in VHDL or Verilog, so you're naive about these things. You probably think that execution order doesn't matter and that one shall leave such things to the compiler. In practice that's a deadly mistake whenever programs have non-trivial side effects.
@@lepidoptera9337 Since you haven't read the 130-page user manual, or spent any time on the examples, you can't accurately critique the very handy feature that is included in Beads. In a graphical interactive program, you have a model carried in your state variables, and when the model changes, the pure read-only access to the state variables renders the screen. When the user responds with events such as a finger tap or mouse click, the state is updated and the refresh cycle happens. In web apps, where the browser is fairly slow to render, minimizing the amount of the screen that is re-laid out and rendered is of great benefit. I can assure you that inside a typical function, I am not re-ordering the instructions, as this would cause chaos for the author. If you enjoy the state-action-model methodology, and enjoy the clarity of Modula-2 coupled with the convenient indent-significant syntax of Python, you will like Beads. It's an experimental language, just starting out with users taking it for a spin. I don't design hardware, but I have been programming daily since 1970.
I'm loving this video series. I was taught OOP in college, but I quickly grew resentful of things like inheritance and over-complicated encapsulation. But I've had a hard time defining and naming which kind of programming style I actually do advocate. This series has helped me a lot with that.
It's really refreshing to see someone explain why OOP is bad and how OOP could be good (in a more recent video than this one), because I kept saying for at least half a decade that OOP is not a good programming paradigm in the way it is being taught, used, and advertised, but I couldn't quite externalize the exact problems it has.
I had a hard time properly understanding what "state" and "logic" really meant, in essence, until he called them "data" and "actions", which is when it clicked for me. The reason is that OOP made me confuse things, because an object by design inherently has both data and actions, and the data and actions are tightly coupled.
I had a problem. So I thought to use Java. Now I have a ProblemFactory.
Look into functional programming.
@@seriouscat2231 Lol, got me with this one :D
"Speculative generalization" is a good phrase. I was looking for a phrase analogous to "premature optimization" for the premature generalizations that I've seen many, many programmers do (especially the ones who think of themselves as very "smart"); it's just as prevalent in the programming world as premature optimization, and your description of "speculative generalization" is exactly what I was thinking of. It should be met with the same amount of skepticism, and perhaps derision, as premature optimization whenever one sees it, since speculative/premature generalization makes the code less maintainable and (somewhat ironically but also predictably) makes the code a lot harder to generalize down the road according to the ACTUAL requirements. It should be taught to everybody as one of the "anti-patterns".
For a long time I mistook "premature optimization" for "misguided high-level design" (because of a quote I heard in a Henley talk about how you "do everything top-down except the first time"), but yeah, that works too.
“Prefactoring” :)
Donald Knuth's admonition against "premature optimization" is widely misquoted and misunderstood.
It's better to give people the tools to understand something than to give them general rules to follow.
The most extreme case of speculative generalisation would be to replace the entire product with an interpreter and let the end user write the actual code. :D
But.... What ARE the actual requirements?
6:34 Yes, for example in Minecraft. Armour stands were at some point made to inherit from the "mob" class (which normally deals with sheep, zombies, dragons, etc., and not items, XP orbs, arrows, etc.) because they share a bunch of properties (gravity, flushed by water, get destroyed when attacked enough, can "wear" armour, …). So pretty much every time something new gets added that does something to all mobs, it almost always has a few bugs with armour stands in the first snapshot, which then need to get special-cased out in the armour stand class later. For example, when a new enemy gets added that attacks everything, it also attacks armour stands.
That's just a wrong use of inheritance, i.e. bad design. You don't use inheritance because types of objects have similar data or functions and you want to share code, but when you want to apply the Liskov substitution principle. Obviously an armour stand is something quite different from a mob type and should never be used as a substitute for one, so it shouldn't inherit from that class either.
@@IkeFoxbrush agree, the minecraft example is not good
@@IkeFoxbrush Isn't a "bad use of inheritance" exactly what leads to inheritance related bugs? If you declare inheritance the norm for type definitions you're maximizing the risk of some inheritance being more trouble than help.
@@volbla By the same line of reasoning, all programming leads to programming-related bugs. So if we stop programming, do we solve all software problems? I don't think that's a particularly useful argument. I also don't see inheritance as the norm for type definitions. It's one of many tools in a software developer's toolbox and should be used with measure. Even in OOP it's hardly needed and often overused.
@@IkeFoxbrush _"Even in OOP it's hardly needed and often overused."_
I think that's exactly the point that FaRo, Brian, and I are all trying to make. Some schools of thought (or workplaces, like the Minecraft development) overuse inheritance because it's useful in theory. In other words, they consider it the norm when defining new types. In practice that leads to some nasty bugs, such as armor stands getting attacked by monsters.
In my eyes "Inheritance is overused" is the same statement as "There is a culture that normalizes inheritance." The problem isn't that it exists. Of course it can be useful, just like any other tool. The problem was all along that it's overused. That's what leads to a bigger risk of mistakes.
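To illustrate the composition-over-inheritance point from this thread, here is a Python sketch (all class names invented for illustration, not actual Minecraft code):

```python
# Composition instead of inheritance: an ArmorStand reuses behaviors by
# holding them, not by *being* a Mob, so mob-wide changes (e.g. "monsters
# attack all mobs") never apply to it by accident.

class Gravity:
    """Shared falling behavior any entity can opt into."""
    def fall(self, thing):
        thing.y -= 1

class Wearable:
    """Shared armor-slot behavior."""
    def __init__(self):
        self.armor = []
    def equip(self, piece):
        self.armor.append(piece)

class ArmorStand:
    def __init__(self):
        self.y = 10
        self.gravity = Gravity()   # behavior reused via composition
        self.slots = Wearable()

    def tick(self):
        self.gravity.fall(self)

stand = ArmorStand()
stand.tick()                 # falls one block
stand.slots.equip("helmet")  # wears armor without being a mob
```

The stand gets gravity and armor slots without inheriting from any `Mob` base class, so a new "attack every mob" feature can't target it by mistake.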
Glad to see an update. Watched "OOP is Bad" several times. Excellent advice.
With 40+ years of programming I feel safe to say that objects (as a datatype) are "good". They serve a purpose: a solution to certain problems. Object 'orientation', however, is just silly. It's like saying there is 'variable programming' vs 'array-oriented programming'. The whole debate of PP vs OOP revolves around a false dilemma, in my opinion.
This is how I've seen it
That is the "OOP is just a tool"-argument. But the tool is probably suboptimal.
OOP 'objects' does not refer to modeling real objects or a collection of data. Encapsulating state and logic together is a central idea of OOP, which is supposed to modularize the code and achieve massive scalability. Seems like an awkward idea to me that creates a host of problems you could avoid by not doing that.
You can achieve the same thing (modularization) by using modules and not coupling state and logic. This can look similar to OOP, you can still do 'speaker.increase_volume(10)', that's just a matter of syntactic sugar but I can't find any advantage that coupling state and logic provides, just problems.
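A small Python sketch of the module-style alternative described above (the `speaker` shape and function name are invented for illustration):

```python
# "speaker.increase_volume(10)" without coupling state to a class:
# state is a plain dict, logic lives in a module of free functions.

def increase_volume(speaker, amount):
    # Pure function: returns a new speaker state, capped at 100,
    # instead of mutating a hidden field behind a method.
    return {**speaker, "volume": min(100, speaker["volume"] + amount)}

speaker = {"name": "kitchen", "volume": 55}
speaker = increase_volume(speaker, 10)
```

The call site reads almost the same as the method version; only the coupling between the data and the function differs.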
Array oriented programming languages are a thing actually, though rather irrelevant.
@@pik910 But as a data type it is useful. Just imagine complex numbers. Isn't it a good idea to couple state and logic (like multiplying two complex numbers)? I prefer a*b instead of multiplyComplex(a,b).
@@andik70 This is the thing, and I totally agree. Down the line we're always ultimately losing some amount of precision, sure, but it's the cost of time and convenience. Is it more efficient for us to actually code more efficiently, or is it more efficient to have an easier, faster and simpler way to code? OOP being criticised is interesting from a computer science perspective, but to the majority of working programmers it's nothing but a thought experiment.
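For what the a*b preference can look like in practice, here is a sketch of a small immutable value type with operators in Python (a hypothetical example, not anyone's library):

```python
# A value type: data plus a few operator definitions, with no inheritance
# and no mutable state, so you can write a * b naturally.
from dataclasses import dataclass

@dataclass(frozen=True)
class Complex:
    re: float
    im: float

    def __add__(self, other):
        return Complex(self.re + other.re, self.im + other.im)

    def __mul__(self, other):
        # (a+bi)(c+di) = (ac - bd) + (ad + bc)i
        return Complex(self.re * other.re - self.im * other.im,
                       self.re * other.im + self.im * other.re)

i = Complex(0, 1)
assert i * i == Complex(-1, 0)  # i squared is -1
```

This is closer to "objects as datatypes" than to object *orientation*: there is no hierarchy, just data with operators.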
Best narratives on OOP and procedural programming. Brian gets it and can explain it!
15 years in. There has never really been a time where speculation turned out in my favour. Every time, the boss comes around with a new idea which my speculation did not predict. As such, my glorious framework couldn't accommodate this "rapid iterating": it wasted time, had to be redesigned a lot, and didn't prevent bugs anyway. Nowadays I just make the code as bare-bones as possible and clean up where necessary.
That wasn't real OOP. Real OOP has never been tried.
This made me laugh, well done!
Made me laugh as well. Going to use that I think.
haha
@Trevis Schiffer OOP advocates make the same kinds of claims socialists do whenever someone points out the failures of either system: it works, you just have to plan better and do it right.
Denmark?
Great video! You make some good points about the useful parts of OOP. Unfortunately, many people take OOP to the extreme and make things way too complicated. That is why I usually try not to use OOP unless necessary and don't go overboard with it.
By the time this man made this video, react was still mostly OOP. Now, most of that part is deprecated in favor of modularization and functional programming. Not only that, but this is more and more becoming the norm amongst widely used frameworks. In other words, this man was a visionary many years ago despite many people calling him crazy
I remember when the video came out, everybody was furious, calling him names and saying he is a n00b that never worked on a real system before. Now everybody either agrees that OOP is bad or says it's only good in specific situations (which is not really OOP, since then the system uses objects but is not _oriented_ by them).
React was always modeled on functional programming ideas. The fact that they used the class syntax doesn't make it OOP. In fact the switch to only function components and hooks has made the framework incredibly prone to spaghetti, ie stateful and effectful code in every part of the system creating hard to debug problems. The quality of code in codebases I've worked in has steadily decreased since I started working with react in 2014.
@@rumble1925 Yeah, idk what the OP is talking about. React has been about composition since day 1. And I agree React has been getting worse; it's a spaghetti-hooks western.
"This is not the OOP I had in mind"
- Alan Kay, 1997 OOPSLA in reference to Java/C++
It doesn't matter what Alan Kay says; the meaning of OOP has changed now. Meanings change as usage changes, and the modern OO languages are generally accepted to be "OOP".
@@ysink As does the meaning of "great performance relative to the hardware". Feels like we're leaving more performance on the table than we realized. Definitely explains why a hardware upgrade doesn't feel as "significant" as it used to be 20-30 years ago. Take the drive for minimal size/max hardware utilization from the 1970s (back when Smalltalk was more in demand) to the early 1990s and bring it to today's hardware (with consideration for different u-code) and you'll be amazed with the results.
The farther away you get from the hardware, the more performance you're pretty much guaranteed to leave behind.
More than that though, it feels like programmers neither care nor try anymore.
Just copy and pray-st from SO.
@@SimGunther Part of the problem with hardware upgrades not feeling as significant as they used to is because they are actually not as significant as they used to be. In the 80's processor speed would literally double from one year to the next. I'm not joking, you would literally go from a 33mhz processor to a 66mhz processor. That's freaking huge, and very very noticeable.
Today the yearly increase in raw processor speed is something like 5-15%. Moore's Law was originally coined around this yearly doubling in processor speed. As the doubling slowed down it got stretched to 18 months and beyond, and then the definition altered to a system's total capability rather than its processor speed.
Also involved here is the processor stopped being the bottleneck for the perception of performance in most cases. Ever since the 90's its hard drive speed and ram quantity/speed that have dictated the "feel" of a computer. You can look at arguments on old computer forums debating which drives were the fastest and how you should format them to get the most performance out of them - because even a small change there gave a noticeable performance boost. That's why SSD's were such a game changer when they came out - they were an order of magnitude faster than any spinning disk on the market, and that dramatically loosened the bottleneck that was hard drive speed. They are still more significant for general performance than processor speed.
That means a computer with a 10 year old processor but a modern SSD and plenty of RAM isn't going to feel significantly different than a modern system until you start doing some processor intensive work, which is not something most people will regularly do. You can think of it as though modern computers can lift heavier weights than they used to but they can't walk any faster. Well, most people don't go around carrying heavy weights all the time, they are just walking, so you don't notice it. Modern software, on the other hand, does take advantage of the fact that processors can lift more weights by adding more and more features. It doesn't feel any faster, but you're doing a lot more work without you even knowing it.
@@jeffwells641 Otherwise a well-explained comment, but unfortunately I have to disagree with your last two sentences.
Modern software has very little or no added functionality, way too often even reduced functionality. But the weight has indeed grown even a hundredfold, mostly due to the poor professionalism and carelessness of developers, as it is nowadays too easy to resort to "just buy new devices, they are cheap".
OO modularizes speculatively. Honestly, I couldn't have said it better myself. I'm a Go developer and I recently started a job with a lot of Java developers. It's been hard having to go through these code bases that are overly abstract for abstraction's sake, leading to simple things being done in complex ways.
Go, and to a lesser extent Rust, are really going to save this industry. Go's emphasis on simplicity has made my designs far more focused and minimal. And I love these videos that patiently deconstruct the dogma of OOP.
As a former Java developer it even took me years to unlearn my OOP brainwashing. I wish there were more people out here talking about this stuff. I also think young developers need to hear these ideas.
I won't say to a lesser extent. Both Rust and Go have the module system down pretty good, but for me it was Rust that made me realize what I really needed in C++ out of the classes, and what was unneeded, simply because higher-level languages have had module system for quite some time and Rust proved there isn't anything about this idea that restricts it to high-level code.
I am a beginner programmer and it's driving me crazy: some people say OOP is good and others say it's bad, and I don't have enough knowledge to distinguish and analyze on my own what I should do. Btw, I study Python and use pygame and Django.
You know, I'm glad that Lua was my first language. Procedural coding is how I code normally; coding OOP in even C++ is very, very strange. It breaks everything to pieces and makes it really, REALLY hard to understand what I'm reading.
Procedurally, I'm usually writing in one function, because I'm usually not doing anything bigger than a few hundred lines... yet. I still don't understand the point of making unnecessary functions; why have 17 lines when you can use 6 instead?
I really like OOP, but only because I finally understood that I should not over-modularize things. If I have to download, parse and export a file, I should put all of this in one method if I always have to call these 3 steps together. There is no need to separate them just (as you said yourself) for the sake of abstraction. I always start coding procedurally; then, when I need to use a part of the code several times, I put it in a function; then, when I need to use several functions that are part of the same "group", I create a class *or* a main function, depending on the problem I need to solve. Putting everything in classes and separate methods just because "that's what OOP is about" is really stupid. You can do very dumb things with Go or Rust too. The problem is not really the language or the paradigm, it's who uses it.
I used to do speculative modularization. Nowadays it's anything but speculative.
Let's say you have to display a page of avocados so the user can select the avocado they prefer and add it to a cart. So, classes: PageOfAvocados, Avocado, Cart. There is nothing speculative about that. Feature requirements, and how they naturally categorize and break down, dictate all your classes. If a feature is way too complex, then your clients won't understand it either.
A noob like I was back in the past would go something like: PageOfAvocados, SelectedAvocados, Avocado, Filter, CartController, Cart, FruitSuper, AvocadoSelectController, ...
I think anyone worth their salt who's coded for more than a couple of weeks in a language celebrated for OO, like C#, Java, or Ruby, begins to pick up procedural coding habits out of necessity, whether they realize it or not, even if they think they're doing OOP. I tend to think OOP is pretty useful for dealing with dependency, probably as a bias because it's what I cut my teeth on before going back to learn K&R C. But I also agree with about 95% of what you've said in these videos. In all honesty, OOP and its gurus just have bad theory and a messianic complex, and anyone who follows their conventions will eventually get sick of breaking themselves on the rocks of endless encapsulation and debugging the stupid class interactions and inheritance they write for themselves, just say "fuck this noise", and look to static classes and static methods called from main for their salvation, because it's easier to have one throat to choke when something breaks.
In fact, every GOOD software development book points against too much inheritance and against multiple inheritance, favoring interfaces and logical composition with separate top-down layers and a common-sense approach, these days... but too many people, even "teachers", are ABUSING objects for everything/always, ideally agile, on standups :-/
@@7alken It's telling that even OOP gurus finally admitted that the old solution (composition) is way better than their fancy philosophical construct (inheritance).
It didn't used to be this way.
They used to tell you to add some third class to manage the interactions between classes on different nodes of the hierarchy, adding even more complexity to an already complex system.
Now they try to cover their tracks and pretend they never espoused this madness, that using composition doesn't deviate from OOP's original vision, and that it's the "right way" of doing OOP.
Redefining the concept to keep it alive.
@@Vitorruy1 any very strict rules are nazi, except the roads
9:23 Oh yes, I had to work in such a codebase for a while… A method "doThing" called a method "doTheThing", which called "doThatThing", which called "reallyDoThing"… and then it often eventually just led to a library method whose source code was not available.
And if you need something more, you'd better create a data pipeline that goes through all the layers instead of getting the data directly, otherwise it's not """"""clean""""""" code.
Your videos are very delightful and useful. They teach properties, problems and philosophies rather than merely recipes (and they also go straight to the point).
Just write code that does what you want it to do; stop obsessing over what type of code it is or what "philosophies" it adheres to. Make the machine do the least work possible to safely change the data the way you need it to securely, fitting the code into the right category of design should be done in retrospect if at all.
When you brought up the separation between state and logic modules, it immediately reminded me of pure functions vs the IO monad in Haskell. I think that's one of the reasons functional programming has such a good reputation. It enforces a separation between logic and state management while encouraging a coding style that keeps the latter to a minimum.
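The separation this comment describes is often summarized as "functional core, imperative shell"; here is a tiny Python sketch of the idea (all names invented for illustration):

```python
# "Functional core, imperative shell": the logic is pure functions with no
# reads or writes outside their arguments; state and I/O stay at the edge.

def apply_discount(total, percent):
    # Pure: same inputs always give the same output.
    return round(total * (1 - percent / 100), 2)

def checkout(cart, percent):
    # Pure core: computes a result from plain data.
    total = sum(price for _, price in cart)
    return apply_discount(total, percent)

# The impure "shell" would live out here: reading the cart from a database,
# printing a receipt, charging a card, etc.
cart = [("book", 20.0), ("pen", 5.0)]
final = checkout(cart, 10)  # 25.0 minus 10% -> 22.5
```

Haskell enforces this split with the IO type; in other languages it is a discipline rather than a guarantee.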
Huh. I thought functional programming has such a bad reputation for being unintuitive and hard to learn, basically because it isn't imperative. I myself like functional programming (I'm a math major anyway) but I totally understand why my computer science friends dislike it, it's very different and you couldn't really think in terms of loops and iterations with steps anymore, unlike imperative which always lets you choose between iteration and recursion, etc
This video was released a few hours ago, so I'm assuming everyone who's commenting actively follows the channel... so what's the confusion? People are complaining that no code was shown or that Brian is flip-flopping, but if you've watched his videos, he's explained this before and given many code examples. Brian's problem with OO has always been abstraction for abstraction's sake. He's demonstrated how imperative code is more straightforward and gives a clearer picture of the system rather than the individual part, and he's argued that context is important to software, that a developer should be expected to understand the system and not just an arbitrary chunk of logic. This video just expands on these ideas. Watch his other vids if you're still confused; it's there; it's good stuff.
The confusion is that some people that did a tutorial on PHP are very opinionated and will ruin any sensible discussion about programming theory.
"He’s demonstrated how imperative code is more straightforward and gives a clearer picture of the system rather than the individual part" Which is fine when you're designing a system from square one. However, when you're maintaining and/or extending it, most of your effort is spent rearranging and/or reusing the parts, in which case properly-done OOP is going to save you big time.
Aram Fingal not in my experience. I’ve seen systems that were certainly properly arranged OOP, but they were so over-engineered that they might as well have been spaghetti code. They were prime examples of exactly what Brian’s described in the original video - abstraction for abstraction’s sake, confusing ServiceFactoryInterfaceVerbNoun names, and the actual logic buried behind layers of object-passing nonsense.
I know the first video made a lot of people angry but he was bang on with it, in my opinion. Not saying we’ll ever have to give up OOP, because hahaha good luck with that, but I hope people think differently about how they structure their OO code.
@@claireryan7644 "abstraction for abstraction’s sake, confusing ServiceFactoryInterfaceVerbNoun names, and the actual logic buried behind layers of object-passing nonsense" That's not properly-done OOP. Yes, I've dealt with fresh college grads' code where half the lines consisted of one-line methods and the associated Javadoc, Java inquisitors whose sole purpose in life is to condemn deviations from the orthodoxy without producing any code themselves, etc. The fact that some people take the paradigm to a ridiculous extent does not invalidate the paradigm. And a lot of the advice in that original video was horrible. Have fun profiling code where every function is 1000 lines long with only comments separating the different tasks.
Aram Fingal relax, man, I’m just telling you my experience. I’m a senior dev and I have seen a lot of really terrible systems. Yeah, this is an extreme example, but the problem is that it’s not rare and they’re not built by college grads who don’t know any better. They’re built by devs who think they’re doing it right. OOP run amok is a problem and that’s why I think Brian makes some good points. You don’t have to follow his advice to the letter - I don’t - but it did make me think more about how to structure my code.
Honestly, I’ve had to debug giant enterprise OOP systems and procedural thousand line functions, and I find both fun? I mean, I love my job, it’d be weird if I didn’t.
Nicely stated, especially about need vs. speculation :)
I programmed for 3 years with OOP in mind and could never do it perfectly. Not even decently; I always wasted time thinking about abstractions. There go 3 years of wasting my time on OOP. However, when I tried to write algorithms like a sorting algorithm without thinking about objects, I was always faster.
Go from A to B, get a working "prototype", then you can start "refactoring", or more like starting from A again (now that you fully understand the complexity behind the problem), only with an idea of how it could be encapsulated. I'm sure you will see much better code without huge amounts of thinking. I admit I have a bit of a problem with OOP too :D
Why would you try to hammer a square peg into a round hole?
OOP is a tool with a specific use; if you try to use it for everything, it's going to seem bad.
If you use it for its intended purpose, it will be useful.
You don't need objects to write a simple sort and doing so is bad and you should feel bad.
I found the idea of splitting into state and logic modules quite similar to what data-oriented design proposes, although the idea there is more about collecting things in large arrays so they can be processed efficiently together. Improved modularity and better handling of cross-cutting concerns is a nice benefit, though.
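To make that split concrete, here is a minimal Python sketch (all names hypothetical, not from the video): a state module that owns the data in plain arrays, and a logic module of pure functions that take data in and return new data, so whole collections can be processed together.

```python
# logic "module": pure functions that take data in and return new data.
def apply_gravity(ys, vs, dt, g=-9.8):
    """Return new position and velocity lists; the inputs are untouched."""
    new_vs = [v + g * dt for v in vs]
    new_ys = [y + v * dt for y, v in zip(ys, new_vs)]
    return new_ys, new_vs

# state "module": the one place the arrays live and get mutated.
class World:
    def __init__(self, ys, vs):
        self.ys = list(ys)
        self.vs = list(vs)

    def step(self, dt):
        # delegate the math to the pure logic code, then commit the result
        self.ys, self.vs = apply_gravity(self.ys, self.vs, dt)

world = World(ys=[10.0, 20.0], vs=[0.0, 0.0])
world.step(0.1)
```

Because `apply_gravity` never touches the `World`, it can be unit-tested (or batched over big arrays) without any setup.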
Thanks. Good follow-up to the previous video (which I've only just watched). After years of being a slave to OOD, I have evolved toward a similar technique: pushing as much logic as possible into a central core of pure functions, which are fed by (impure) APIs written in a more OO style. Moving from C# to F# has helped tremendously with this.
9:15
I have always felt this way about OOP, but I can never tell whether it's my inability to understand the code or the code's inability to be understandable.
its your fault
It can be very readable. It just gets abused by people that like to write complex convoluted code as some sort of "hey look at me" cock measuring contest.
@@Ryan-xq3kl Ok boomer lmao
hahaha. People "think" they can grasp that level of abstractions, but most brains can only grasp those that can be visualized. Sucks. I know. 😅
I don't code much, but I've listened to most of Brian's videos just for the pleasure of hearing him elaborate his opinions. Very well presented; good luck with your future projects!
Hi! Thanks for the interesting video. The FORTRAN 90 language has a very nice module programming style. I worked with some computational tools written in FORTRAN 90 where everything was a module with public and private interfaces. In all honesty, it was very, very easy to follow and understand the code. I totally agree with module-oriented programming. It is really nice and efficient.
I have enjoyed this series. I agree with many of the strong points in earlier videos, and completely with the presentation of those points in this video.
I laughed out loud when you said UML was fucking useless in the last video. I'm a systems engineer, and you wouldn't believe the number of times I find a modeler head-down in their model, which has grown into an enormous unwieldy beast, obsessing over notation and symbology. But it takes them quite some digging to remember why they're modeling in the first place.
In C code design in our big work projects, we call a C file (.c + .h pair) a “class”, which effectively acts as a static class, much like your concept of a module.
That's a little confusing given that C already uses the word "object" to mean "some entity stored in memory".
It's the concept of modules which is important, to break up a large project into manageable parts. A class is usually a poor solution to gain namespacing/module behaviour.
I do the same division for C programs. It was inspired by the concept of packages from Ada, which are similar to static classes in OOP.
This video, among other inspirations, has led me to start developing a BASIC-like language that I think fits this ideal for programming.
This approach is used in Haskell, one to one.
The more I learn the more I realize how amazing Haskell is
@@csmusic6505 Haskell is awful... no, I'm just kidding... I know it's me that's the problem and not the language... but I'll be damned if trying to learn it wasn't the worst decision I ever made. Put me off learning programming for like a year. C++ was a breeze to learn compared to it, and that says something, because C++ is dense. *edit* F it. I'ma f****** do it again! Dusting off that textbook and figuring out where I left off. I feel like I've finally had long enough to overcome my PTSD from the last time I tried this.
@@xcvsdxvsx how did you go?
@@cranknlesdesires I probably put another 30-50 hours in before I got burnt out again. But I learned a lot more this time. Probably one more go at it like that and I'll be a genuine novice.
@@xcvsdxvsx which textbook did you use? Haskell from first principle (haskellbook.com/) is probably the best beginner book in my opinion.
I think your state/logic modules need an example. Otherwise, this 9-minute video very clearly formulates what I was always feeling about modularization and OO but could never succinctly formulate myself. Thank you for that.
Maybe you know by now, but the guy has a bunch of different videos on the subject, one concerning a larger example.
@@storerestore Which one is it?
@@YYYValentine He's probably talking about either Object-Oriented Programming is Embarrassing: 4 Short Examples or Object-Oriented Programming is Garbage: 3800 SLOC Example. I think the 3800 SLOC video is the one where he really goes into re-organizing the code and splitting out the logic from the state.
Look no farther than practical examples of real world Go and Rust code.
@@VivekYadav-ds8oz How do I know what a real-world Go/Rust project is? The NES emulator written in Go shown in the "OOP is Garbage" video seems to me like a real-world Go project, but I guess it's badly written, else it would not be in this video.
OOP creates the need for a bunch of shitty design patterns that are just not needed if another paradigm is favored. Yes, that other paradigm will require different abstractions that in a way constitute design patterns themselves, but it's just not the same.
I hit like after your comments about interfaces at about 6:00. Spot on ;)
I found this video thought-provoking. Without counterarguments, a clean distinction between state and logic sounds nice. What I both agree and disagree with is the preemptive creation of abstractions. At least the way it was presented, it sounds like: start coding and worry about design later. Design is about abstractions. If you have a good design, then creating the abstractions is just an exercise in writing code.
I do not agree with trial-and-error programming. It has the fundamental flaw that your resulting code seems to work, but you never put enough time into design (creating abstractions) to know whether the end result is ideal. I work with this type of code all the time. It's passing tests, it passed QA, it's been running live in prod for decades, and no one is complaining, therefore it must be "correct". Nope. Many dozens of people have read the code, but few find the logic flaws that masquerade as working code while silently corrupting data in novel ways that few could even fathom.
Correctness of code requires design, and abstractions are just a codified reflection of the design. Abstractions are required for any correct code, regardless of whether they're implemented as abstractions in the code.
Design is about fitting the problems to solve, abstraction is a tool in that it can reduce complexity / provide guarantees and increase efficiency (if done right).
But the wrong, too many, or even missing critical abstractions all contribute to problems.
The wrong and too many are the most devastating however, so avoid abstractions that serve no purpose in the near future.
With respect to trial and error being bad and needing design first, that is just being evasive IMO. Designs requires trial and error iterations as well and there is no logic in thinking that the process needs to be fully separated from creating code. Nothing is perfect right away and lots of trial and error is how people learn and grow.
The best working designs have lots of conscious iterations performed on them, each to stress it in a different way and are not grounded in a rigid methodology.
Its a mental process foremost, not something that can be encapsulated in "tests" all that well, make sketches, verify assertions and solve the puzzle.
And this is why there is no set recipe; only talent and experience factor in. It's also why it's delusional that the industry tries to commoditize the work as if it's a packaging factory.
A development process is like creating a sculpture from nothing, with the only differences that there are objective results to be met and a rich toolbox you can learn and use.
There are techniques that help, defensive programming for one and seeking out the simpler kind of solutions first.
As for long-lived "working" code that silently corrupts data, that likely has nothing to do with how it's made and more with the experience level and requirements at the time of making. If a design had been made first, it would not have been any better, as a person's capacity and experience are the same. Most don't know what they don't know, but don't get me started on that. Today's typical developers look like monkeys to me now. Still, some of what I see I used to do as well; in the end, it's that all-important experience over time. If there were a recipe we could all follow for perfect results, that would be convenient, but there is not.
@@TheEVEInspiration I would say thinking about the problem thoroughly can probably reduce the number of iterations. I guess this is the main point behind "design first". It can therefore reduce development time. When only limited time is available (which is always), it can also increase quality. However, I think it's more important that the developer is actually willing to try different approaches. This might also mean the developer has to refactor large portions of the code he has already written.
Truth be told, the most important thing is the KISS (Keep It Simple, Stupid) principle; nothing is more important than that. When it comes down to it, the only thing you truly want is to understand code from others as fast as possible without thinking too much about it.
So if others read your code and understand it as fast as possible, then you achieved KISS; if not, then you didn't.
The programming paradigms like OOP, procedural, and so on are, in theory, only trying to help you achieve KISS.
And how can you achieve KISS in a complex program? You need to spend an enormous amount of time on planning said program; there is no good way around that.
Well, at least this is my personal opinion :)
The issue is less how simply you achieve a goal than the sheer number of goals. OOP philosophy encourages high abstraction, high encapsulation, high constraint... it gives you many more goals than just delivering the feature, requiring you to juggle a billion made-up concerns, which leads to complexity even with KISS.
What we need is programs that are centred around "types". I've used scare quotes, because I'm anticipating someone is going to tell me that what I'm about to describe is not, properly speaking, a type in the same way that an int or a float is. What I mean here by a "type" is a set of state-carrying variables, each of which is itself a "type" or is a genuine primitive type such as a float or int which does not resemble a set, which is how the word is normally used. For a lack of a better word, let's proceed.
The idea behind OOP is to bundle mutable state with methods for mutating that state - these methods providing the "interface" to that state. This requires the external caller of a method to know more than he really should have to. The caller generally knows, at least partially, what kind of mutation of state they want to achieve, or at least, what class of mutations.
And the caller generally does not care HOW this mutation is achieved, so long as the mutation is predictable.
So, really, the caller should not have to know which method they are calling or what an object's methods do. They should only have to provide a representation of the state they want to achieve, and the object should figure out how to move from its current state to a state in the target class.
So instead of "public methods", we should have public variables - public not in the sense that they can be mutated by external code directly ( i.e. without submitting a mutation request to the object ) but that the object's user is expected to know that those variables exist in order to be able to tender state-mutation requests properly. And this expectation is usually made of an object's user in practice ANYWAY, since the caller of some state-mutating method only calls the method *because* they ultimately intend to mutate some state, somewhere ( possibly not directly in the object in question ).
An "object" should also have attached to it a set of rules about how it is allowed to move through state-space. Some mutations may be forbidden. For example, if you have a type supporting some non-reversible operation, and a boolean to mark whether that operation has been applied, that value should always initialize to False and you should never be allowed to mutate the boolean variable from True back to False; which is to say, all requests for that effect should be denied by the object.
This produces a change in the meaning of "object", because methods are not a part of the object; only typed variables and their mutation rules are. It's a little closer to how Smalltalk objects interact via "messages", but here the method name can never be contained in the message, and the message always takes the form of a class of type values that you want the object to move into.
So if the methods are not actually a part of the object, should they belong to the object?
I don't see any necessity for this. You can follow exactly the prescription Brian makes here, and separate state-mutating variables from logic. Here logic is represented by the "objects" (again, scare-quotes signify that we're abusing terminology) and state-management can be separated out into simple procedures, which is an inversion of the normal approach.
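A toy Python sketch of this proposal (entirely hypothetical names, just my reading of the comment): the object exposes its state variables, accepts whole target states as mutation requests, and enforces its own rules about which moves through state-space are legal, such as a one-way boolean.

```python
class Document:
    """State plus rules about how state may change -- no 'verb' methods."""
    def __init__(self):
        self.text = ""
        self.signed = False  # one-way flag: once True, never False again

    def request(self, **target):
        """Ask the object to move toward a target state; it may refuse."""
        if target.get("signed") is False and self.signed:
            return False  # forbidden: un-signing is irreversible
        if "text" in target and self.signed:
            return False  # forbidden: editing a signed document
        for name, value in target.items():
            setattr(self, name, value)
        return True

doc = Document()
doc.request(text="hello")       # accepted
doc.request(signed=True)        # accepted
ok = doc.request(signed=False)  # denied: one-way transition
```

The caller never names a method like `sign()` or `setText()`; it only tenders the state it wants, and the object either moves there or refuses.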
I recommend functional programming. You achieve everything you need without abstraction for abstraction's sake. Peter Norvig (director of research at Google) demonstrates that 16 of the 23 patterns in Design Patterns are simplified or eliminated (via direct language support) in Lisp or Dylan.
Design patterns are band-aids for the deficiencies of OOP. There's no question about that. The main reason a couple of the design patterns even exist is that functions aren't first-class.
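The Strategy pattern is the classic illustration of this point. Here's a hypothetical Python sketch of a "pattern" that evaporates once functions are first-class:

```python
# Strategy pattern, OOP style: a class hierarchy just to carry one function.
class Shouter:
    def apply(self, s):
        return s.upper() + "!"

def greet_oop(name, strategy):
    return strategy.apply("hello " + name)

# With first-class functions, the "pattern" disappears:
def shout(s):
    return s.upper() + "!"

def greet(name, transform):
    return transform("hello " + name)

print(greet_oop("world", Shouter()))  # HELLO WORLD!
print(greet("world", shout))          # HELLO WORLD!
```

Both calls do the same thing, but the second version needs no interface, no concrete class, and no instantiation; the function itself is the strategy.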
If we lived in some ideal fantasy land where writing in a functional programming language was maintainable for large codebases, then yes. Writing within a purely functional paradigm would be ideal. But we don't live in that fantasy land.
just use python whenever applicable - bang
@@someoneelse5005 I would if not for the GIL
I'm happy to see a voice of sanity in a sea of "fad followers". Thanks for the time and effort you put into making this series!
Man, that ending picture of Alice is exactly how I feel when I look at some legacy code. And the Cheshire Cat is the comments of the lead engineer who quit the job 3 years ago.
A great complement to your "OOP sucks" videos! So may I summarise the argument here as that (a) a well designed and appropriate level of modularity is an essential strategy to control and comprehend all complex computational systems, (b) concentrating on artfully defining good stable data types and abstractions (ie *what* is being shared between modules) is usually far more important in writing modular code than concentrating on defining module interfaces (ie. what the modules actually *do* to the data), and that (c) most of the problems with object oriented programming that you were referring to in your earlier videos were related to OOAD's propensity of taking patterns which may sometimes be useful to control complexity at high levels of scale, and rigidly (dogmatically) applying them in situations where they unnecessarily *add* complexity.
Or, more poetically perhaps: coding complex systems is an art, and modularity is its paintbrush...
I'm trying to get into OOP, and the picture you put up of the empty box is how my head feels. In theory I very much get it and its implied usefulness, but when I sit down and try it out, my head feels stretched, empty, and a mess. I don't know where the hell to start, but at the same time I do know where to start. I'm in a state of overthinking it, and for what? I'm wasting time.
Basic OOP expertise is important. Mixing OOP and functional is the key to a successful project.
Even when I write in Java, I tend to only write procedurally unless I want to represent a noun. In that case, I write a class - or multiple classes in a hierarchy - that contain the components of that noun. For example, a button would have a position, a size, and a displaying state. This is neater than programming in C, which tends to turn into confusing spaghetti code because of headers and source files. However, there are no "verb" classes in my code. This makes sense, and is probably one of the biggest criticisms of yours leveled onto OO. The truth is, the problem with OO is that people misuse it and make things confusing. It's not good or bad. It's like any other paradigm, it has its uses. Some things it makes sense for, others it doesn't, but until people stop using it like ketchup on food, it'll have a bad reputation.
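That "noun class, no verb classes" style might look like this (a hypothetical Python sketch, since the comment is about Java):

```python
from dataclasses import dataclass

@dataclass
class Button:
    """A noun: just the components of the thing, no 'verb' class in sight."""
    x: int
    y: int
    width: int
    height: int
    visible: bool = True  # displaying state

def contains(button, px, py):
    """Procedural logic that operates on the noun."""
    return (button.x <= px < button.x + button.width
            and button.y <= py < button.y + button.height)

btn = Button(x=10, y=10, width=80, height=20)
```

The class only bundles the button's data; the hit-testing logic stays a plain function, so there is no `ButtonClickHandlerFactory` in sight.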
I always felt something similar to relational modeling would be a better goal for module organization. OOP tends to force one into a hierarchical and/or nested model, and when it goes outside of those to escape the limits of trees, it becomes a big ball of pointers, like what databases used to be before relational came along. For example, if you want to control your reference to a "parent" scope, you can use a foreign key: you have an ID number pointing to the parent scope module "table". Such a reference could perhaps even be computed at run-time. Relational manages cross-references better than RAM pointers do. And hierarchical file systems for managing code are similarly limiting. We are outgrowing trees and nestedness as ways to manage code.
OK, programmers seem to be obsessed with the way they organize their source code, without ever giving a thought to the obvious: every capable compiler will completely flatten whatever elaborate code organization you came up with for the purposes of your "team" (aka group of people who can't stand each other and who don't talk to each other except when the entire building is on fire). What you would have to think about, instead, is the runtime organization and dynamic execution of your code, which has little to do with how you define interfaces, datatypes etc.. That is mostly a function of the actual control code that you put in.
I think this is a great refinement of the original video. As someone who would label themselves an OOP programmer (it's the way I was 'raised', so to speak), I found myself resoundingly agreeing with this entire video. It's all about moderation: abstraction and encapsulation are absolutely useful concepts but can definitely be taken too far. I would like to think that no single paradigm is best, that the best programmers are those with a solid grasp of the basics of all paradigms, and that the best languages/frameworks are the ones that allow us to use a healthy mix of all those concepts. I would like to think that modern programming, for the most part, is multi-paradigm.
This is my understanding of what was presented along with questions. Am I misunderstanding any of these concepts?
State Modules:
Understanding:
Stores the state of the program. Avoid merging state with various logic units, to help manage state across many features while also providing a "simple" interface to obtain said state.
Question:
Say we have an inventory and common functionality is to add or remove items from the inventory. Since this functionality only modifies the state it belongs to, inventory, and nothing else, I feel it would be okay in this case to merge state and logic into one module?
Logic Modules:
Understanding:
Essentially pure functions that receive state and output a 'copied', transformed state. These should avoid directly modifying program state and instead defer that role to whatever called them. By this logic, could this be considered a form of pipe-and-filter mechanism?
Question:
How would the actual state updating take place, a different kind of module that essentially links the two?
"Controller Modules"
To address the previous question you essentially have another form of module that orchestrates these interactions. Is this completely off base?
For example, say I have the inventory again and some mechanism to pick up an item to be added to the inventory.
1 - The controller receives a pick up request
2 - The controller goes to state management to obtain Inventory state from a state module
3 - The controller goes to a Logic Module to transform the pick up item Id to the actual item to be stored in the inventory
4 - The controller gets an "Item to be stored" from 3
5 - The controller updates Inventory (using the provided transformations in the Inventory state module, else a logic module)
6 - The controller requests an Inventory state save by passing it to a "Serializer" Logic module
7 - The controller goes to whatever manages the database and passes the serialized inventory for saving
Step 6 is where I can see a bit of mess. Would it be better to have all "pure" state transformations contained within the same state module such as "Inventory", or split it up into "Inventory" and "InventoryLogic"?
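One way to sketch those seven steps in Python (all module and function names hypothetical; this is just my reading of the question, not something from the video):

```python
# State module: owns the inventory data and its invariants.
class Inventory:
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)

# Logic modules: pure functions, no stored state.
ITEM_CATALOG = {7: "sword", 9: "potion"}  # hypothetical lookup data

def resolve_item(item_id):
    """Step 3: transform a pick-up item id into the actual item."""
    return ITEM_CATALOG[item_id]

def serialize(inventory):
    """Step 6: pure transformation of state into a saveable form."""
    return ",".join(inventory.items)

# Controller module: orchestrates state and logic; does the actual updating.
def handle_pickup(inventory, item_id, save):
    item = resolve_item(item_id)  # steps 3-4: logic
    inventory.add(item)           # step 5: update via the state module
    save(serialize(inventory))    # steps 6-7: serialize, hand to storage

saved = []  # stand-in for whatever manages the database
inv = Inventory()
handle_pickup(inv, 7, saved.append)
```

On the step-6 question: here `serialize` lives in a logic module, which keeps `Inventory` small; keeping such pure transformations next to the state they describe (an "InventoryLogic" split) also seems defensible; the important part is that they stay pure either way.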
To me, the first thing I imagined when you talked about splitting logic and function modules was actually functional programming. At a first glance the distinction between state and logic seems to map to the difference in Haskell between functions in the IO monad and any other function.
1:38 This sounds like some kind of hybrid of pure functional and imperative to me.
This is truly great. Thank you.
What are your thoughts on Entity Component System (ECS)?
Interesting video series. A lot of it is over my head but I'm beginning to grasp the differences between styles and what's dogma vs what's reality. When I compare what people say about programming against the quality of programs, I see abstraction as a stopgap for lack of competence or willpower, kind of like owning a car makes it less likely for people to walk or ride a bike even in situations where it would make sense. Maybe this is a result of the industry pushing for results prior to things being ready, and people have to take shortcuts as a result.
In the video game industry, I think of a game like Pokemon Blue/Red vs games of today. Games today can sometimes barely get past the damn title screen without some sort of bug, while people can literally build a working chatroom inside of the original Pokemon games and those suckers will just plow through every bug like they aren't even there. Bugs became features that sometimes made the game more fun and replayable, while today things often just break. If we're really abstracting and segregating or whatever the dogma is, I would think several pieces of the machine can break before the full program goes down, but most often that isn't the case.
The main question I ask is this: How much extra stuff should I have to learn/read for the sake of making my program more readable? I started my programming journey with Python, and when I try to read other people's programs, I often see imports for stuff even I could do, as a rookie, with a few lines of code. Instead, they import an entire module to accomplish a simple thing, which tells me they aren't actively considering the necessity of what they're doing, they're just pressing buttons in the right order to make the machine go, because that's what they were taught by other people who were taught the same thing by the last guy.
The two main problems with OOP: first, it asks you to move run-time decisions into compile time, which requires endless refactoring if you made the wrong design choice and/or the requirements change. Second, it forces you to make poor data-layout decisions based on form over function, which can literally destroy performance in a few lines of purist code.
Huh, that's interesting, because there's one part of a library that I have where there's code that does some dangerous stuff if it's not handled properly, so I made an abstraction on top of it so I don't have to worry about handling those special cases when I need to use the underlying functions elsewhere.
This sounds like what I do in haskell.
Can you refer to an example? I would like to understand this...
@@ethashamuddinmohammed1255 Given that Haskell is purely functional and pretty much everything is immutable, all code deals with logic by default. In this sense, you'll be dealing mostly with "how do I make functions that can be chained together in order to come up with a deterministic result".
Operations relying on state on the other hand e.g. IO, or operations that in general, produce a "side effect", are abstracted generally through Monads, which iirc are just data types that wrap around values, but with a "side effect" attached when you want to "get" those values. Monads can also be sequenced together to produce a desirable result, while also maintaining the side effects e.g. obtaining a random integer from a user-defined range can be expressed through "the side effect of asking the user for input is crucial to produce the side effect of getting a random integer"
@@cyclonic5206 Another long theoretical reply. I need code.
@@ethashamuddinmohammed1255 It's a bit difficult to talk about Haskell without going through the theoretical stuff since it heavily relies on those theories to accomplish what they do.
Here's a short FizzBuzz solution in Haskell. wiki.haskell.org/Fizzbuzz
`fizz` is a function that takes an integer and returns the appropriate response string for that integer. This is part of the logic module, since it's not really able to access external state nor does it have any explicit internal state. You have to explicitly pass it the value that you want.
`main` is the entry point and lives in the `IO` monad. Since you can consider the `IO` monad, which serves as the abstraction for IO, part of the state module, it can go ahead and reach out to the outside world to print information, while also being able to use functions in the logic module.
tl;dr: Logic modules -> pure functions; State modules -> monads and other abstractions
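A rough Python analogue of that split (not the wiki code itself, just the shape): `fizz` is the pure logic part, and `main` is the only place allowed to touch the outside world.

```python
def fizz(n):
    """Pure logic: same input always gives the same output, no IO."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def main():
    # The 'state/IO' layer: the only code that prints.
    for n in range(1, 16):
        print(fizz(n))

if __name__ == "__main__":
    main()
```

Python won't enforce the boundary the way Haskell's type system does, but the discipline is the same: all the decision-making sits in a function you can test without ever running `main`.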
@@ethashamuddinmohammed1255 The main reason for monads is 'returning an error' from a function, which wouldn't otherwise be possible without side effects. Let's say you have a:: Int -> Int, b:: Int -> Int and you call b . a, but there is a possibility that a returns an error, so you change the declarations to a:: Int -> Maybe Int, b:: Maybe Int -> Maybe Int, where Maybe is an example of a monad that either has a value of Int or is an 'error' - a value named Nothing - and it comes prepackaged with Haskell. Turns out this is convenient in general anytime you need to replicate exiting a function chain and even elsewhere. Maybe monad and the IO monad are the most used examples of a monad.
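Since Python has no `Maybe`, here is a hypothetical approximation of that `a`/`b` chain using `None` as `Nothing` and a small `bind` helper standing in for monadic sequencing:

```python
def bind(value, fn):
    """Apply fn unless the chain has already failed (None = Nothing)."""
    return None if value is None else fn(value)

def a(x):
    # may fail: returns None (our stand-in for Nothing) on bad input
    return None if x < 0 else x + 1

def b(x):
    return x * 2

result_ok = bind(a(3), b)    # 8: (3 + 1) * 2
result_bad = bind(a(-1), b)  # None: the failure short-circuits past b
```

This is exactly the "exit the function chain early" convenience described above; Haskell's `Maybe` just makes the possibility of failure visible in the type instead of relying on a sentinel value.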
This is getting at the same points as “boundaries” by Gary Bernhardt
Computer science needs analytic metaphysics.
Structs encapsulate data. Modules encapsulate code. Objects are instantiable modules.
It would be nice to see what you call logic and state to make it easier to understand what you mean.
More of these videos, please
I absolutely agree with this video! I have seen enough proprietary frameworks (in Java) with crazy chains of abstract classes, inheritance, and NotImplementedExceptions to know that you shouldn't push OOP too far.
And yes, you still need “some” OOP ideas to split things up when they are “too big”.
Just like many comments below said, don't let yourself get stuck in a certain paradigm.
This is what have been serving me well:
1. A requirement can usually be split into a small number of sub-cases, and this is where I think OOP helps a lot: let a component handle a sub-case.
2. Within a component, use procedural programming, as it's likely a list of actions you have to go through while keeping tabs on your current state.
3. When an action is too big, I write pure functions to deal with it. This also tends to make the code in 2 look like it's just going through a short and sweet bucket list.
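A compressed Python sketch of those three steps (hypothetical report-generation example): one component per sub-case, procedural flow inside it, and pure functions for the big actions.

```python
# 3. pure functions for the heavy lifting
def total(amounts):
    return sum(amounts)

def format_line(label, value):
    return f"{label}: {value}"

# 1. one component per sub-case of the requirement
class ReportBuilder:
    # 2. procedural code inside: a short bucket list of steps
    def build(self, label, amounts):
        lines = []
        lines.append(format_line(label, total(amounts)))  # step: totals
        lines.append(format_line("count", len(amounts)))  # step: counts
        return "\n".join(lines)                           # step: assemble

report = ReportBuilder().build("sales", [10, 20, 30])
```

The component reads like a checklist, and each pure helper can be tested on its own.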
Amazing video ! I've seen it twice so far
Late comment but: this video sounds like an introduction to erlang without any erlang. You should probably check it out if you haven’t already since erlang basically does exactly this: encourages the use of modules, relegates state-code to be as small as possible, and forces the logic code to be purely functional. This kind of programming style also has the benefit of baking in notions of concurrent programming. What’s not so obvious is that with a bit of extra thought, you can build in remarkable fault tolerance. Seriously, check it out. Sure the syntax isn’t great, but the design philosophy sounds like it’s right up your alley!
Encapsulation is a meme.
So trying to access a private member is a meme ??!!
ThePrimeagen sent me, subbed!
Some really good ideas here; I've been thinking about similar things too. It feels like OOP is kind of a halfway house: some of the principles, like polymorphism, are great, and some, like inheritance, can be positively bad. I've seen class hierarchies 10 levels deep; there's no way that is good design.
Neither of those concepts are inherent to OOP
@@khatdubell They're literally 2 of the 3 pillars of OOP
@@n8style I'm sure you'd tell me encapsulation is the third.
Slap whoever taught you that.
You can have OOP without them.
I can give you examples of OOP languages without them.
@@khatdubell not sure what you think OOP is but am certainly curious about these OOP languages that apparently don't have any of the 3. If you mean you can write code that doesn't take advantage of the 3 in an OOP language that offers them then I'll be pointing and laughing just so you know lol
@@n8style No, i mean there are languages, widely accepted as OOP languages that don't have those features.
Simula, the granddaddy of them all, didn't have any encapsulation, I believe.
Visual Basic, if memory serves, doesn't have any inheritance.
There are a large amount of OOP languages out there, i'm not gonna go through an exhaustive list.
The point stands: there are many languages considered OOP that don't have what everyone considers essential to the existence of OOP.
It's just that modern OOP languages implement these things because they are, in general, good things, and it doesn't make sense to exclude them.
How they came to be considered pillars, I have no idea. They're just good features every paradigm should use if it can.
We need examples!
100% agreed. What irritates me about OOD is that its proponents assume it's feasible to predict future use cases and all possible scenarios, which IMO is pure arrogance. When you start using keywords like "public" and "private", it's like you are saying you know what somebody should and shouldn't access or call in every possible scenario, present and future. Since nobody is omniscient, many people will be pissed off by your choices limiting access to some parts of the code.
Exactly this. One man's idea of pure design is the next man's roadblock when changing the design. In many cases the source code cannot be modified, so you are at the complete mercy of the designer. Countless times I've had to *rewrite* existing code just so I could gain access to hidden data I needed. Simplicity should not be confused with stupefaction and distrust.
This is something I always had to argue about with my team lead: what to keep private and what to keep public. I mean, within your own microservice, what is even the point of public and private variables? Maybe my knowledge is limited and my experience is short, but enforcing a certain "this is the way" is too damn arrogant.
It's not always impossible to know what users should and shouldn't be doing. You should assume they'll be following best practices for using your code.
I had to wrangle a pile of "legacy" code recently, and they must have had dice with generic nouns and adjectives on them, rolled a few times for every identifier. It was madness; I couldn't tell head from tail of that beast. I had to FIRST rename everything to foo/bar/baz, literally, so these things wouldn't sound so similar. Once the misleading words were defanged, only THEN could I start seeing the structure that was there, and rename once more, to something that actually made sense.
Although I don't agree with everything you say, I'm quite intrigued with your monologues.
I'd love to see a debate between you and Huw Collingborne, the Smalltalk guy. Not an argument, just a pro/con discussion about OOP following proper debate rules. Your opinions tend to be polar opposites, even concerning which came first, the object-oriented hen or the procedural egg. It could be quite the event, especially if you did it live. You'd have all us nerds on the edges of our seats. 🙃
His argument that OOP splits stuff not when it gets too big but preemptively feels slightly off to me. With OOP, stuff is split based on context. Not that there isn't a problem with this, of course, as it truly depends on how in-depth you want to define the context. As an example, let's take the classic car example. How far will you go? You can separate it into the body, the engine, the drive shaft, the gearbox, the tires, etc. But let's look at the tires (mainly because they are simple and I'm not a car person). You can define them as one single object which defines its properties based on a couple of variables and provides some calculation methods. That should be fine for anything other than an in-depth realistic physics simulation. However, if you really want to split the atom and get to the quarks, you can say, "well yes, but the tire has the bolts, and then the tire itself, and the thing that the tire goes around, and then the thing that makes the tire look nice" (I told you, not a car person). You could split all these things up into smaller subclasses and further abstract everything out. But at that point you have to look at about 6 classes just to get the entire picture, whereas written as one it can be completely represented in one class. On the other side, if you don't abstract the car at all, you end up with a file that's thousands of lines long, which makes it difficult to look at just one part of it.
This exactly. The problem with writing one module containing thousands of lines of code to cover everything for the tire is that when you reuse that module elsewhere, every car gets the exact same tire assembly, unless a copy of the module is changed internally, which introduces more maintenance problems, because several modules will be very similar with slight differences. The problem is not with OOP; the problem is with people creating spaghetti code, whether using OOP, procedural programming, or any other paradigm.
@@jajwarehouse1 Yes, if you have a massive class with several different ways of behaving, that means you have a lot of if statements, and it becomes hard to follow one path. On the other side, if you abstract too much, you have too many subclasses, and it becomes difficult to see the entire logic, as all the parts are in different places and you almost need to have 7 classes open in different windows.
@@pigboiii It's almost as if the solution to bad code is learning to program better ;)
One of the most obnoxious things to me about C++ is how, every time I need to use libraries and parts of the language I'm not all that familiar with, I end up having to jump all over the documentation for each line I write in order to implement something I could do in plain C in 10 minutes. As a bonus, the initial code will often contain tons of runtime bugs, because at some point I wrongly assumed a method's behavior based on its name or by analogy with similar classes. Alternatively, the language and libraries will accept code using a combination of advanced features, only for me to realize later that the program is doing something else because some feature is not supported.
Yesterday, for instance, I discovered the hard way that STL containers do not support polymorphism by value: any polymorphic object stored directly in the container loses its association with the derived class (object slicing).
These kinds of problems not only make programming a lot less fun, but also make you wonder what the point is of attempting so many features and such a complex structure, if they later break down for so many non-trivial cases.
That it takes you a lot of time to understand the C++ libraries is probably just because you're used to programming in C and know its libraries. I mean, I have mostly no clue about libraries in C++ myself, but it's surely not harder to do basic or even advanced stuff than in C. Could be wrong, though.
C++, as a language itself, is pretty much a mess, I can agree with that. Sometimes I have the feeling that there's 10 ways of doing the same thing and that 7 of those have no reason to exist.
@Zamundaaa
My main issue is not whether I have learned the libraries enough, but rather that there are so many of them, and yet they are often not reliable enough.
After many years of programming, I have come to the conclusion that there is some irreducible complexity intrinsic to every task, and that you can ignore it or delegate it to some library at your own peril.
The moment, for instance, you introduce a new C++ algorithm class to substitute characters in a string, something that can be done in ONE line of code, is the moment you are polluting the language with useless garbage while increasing the size of the documentation for no good reason. Now I suddenly have to look up the fail states of this new class-function, because there is no way I will remember them off the top of my head next time I have to use it, or how it parses or deals with C-style null-terminated strings, carriage return and linefeed characters, etc.
You basically traded one line of code for one documentation lookup, which takes at least 10-100 times longer to find, read, and understand.
@@KilgoreTroutAsf yeah I can definitely understand that. Rather have a good library than a thousand bad ones. That's also a reason why I redo most minor stuff myself, at least in my own personal projects. Even with Java, where most stuff (that I came across) is done pretty well in the standard library. I got my own "Utils" project with some stuff missing from the standard library or where I don't need the full default implementation but rather a specific thing. It's pretty useful. Not necessarily too practical when you write software for others though.
The state vs. logic separation is bullshit. Both are built from the same primitives, and the separation serves no function. Private/public is also bullshit, because that duality serves no technical purpose. Hiding things vs. letting something vary (existential typing) are hugely different concerns, and OOP conflates ideas such as these arbitrarily. OOP fails to convey the idea that state and process are the same thing. OOP enforces breaking code into submodules too early, before they make any sense. It's a net-negative grab bag of crappy language design ideas that will haunt people for decades.
Object-oriented programming is a lot of over-abstracted bollocks.
This looks like just normal programming.
What!?!?
I just don't know what to believe anymore :(
Stop believing. Think.
I would love to see a video of you building an application with OO and then with your concept, to see it in practice. I am very optimistic about your way of doing it.
I'd really want to see you collaborate with Jonathan Blow on the language he is working on.
I'm happy that this validates how I've been coding in C lately. I'm in the process of rewriting an old piece of code, and I'm pretty much following these conventions. But I got all of this just from reading about opaque types and handles.
Interesting observations. I tend to agree.
Excellent video, and I'd like to add that there is more to programming than code.
Data and process modelling are far more important than how the actual coding is done, because if either is done badly wrong, the code at the end never fits or operates well.
"meaning requires context" win.
The philosophy about interfaces and data types presented in the end seems similar to the philosophy best supported by Rust
This is Composition over Inheritance with extra steps.
The way I think of it is OOP is an expert friendly paradigm, and most people just aren't experts, so they'll end up using the power they get destructively by mistake (a.k.a. shooting themselves in the foot.) You inevitably need to decide how to group logic and data, which is a combinatorial problem, and most of the combinations are just bad. People who aren't strong conceptual thinkers (which is quite a rare trait) will inevitably stumble on some arbitrary design which by chance is almost always bad. Sometimes it's high coupling, which is reasonably easy to deal with (analogous to the accretion problem in the video), and sometimes it's low cohesion, which IMO is the real killer. One responsibility smeared across too many actors.
However. I do see expert friendly being equated with "bad" a lot. There seems to be some arrogance tied to that: "I tried it, and I wasn't immediately great at it, so it must be bad." Then starts the rationalization game of finding arguments for why it is categorically bad, but that often just comes out as a not-thorough-enough explanation.
Cache misses aren't expert friendly, even though OOP virtually guarantees them.
My first OOP was AutoCAD's AutoLisp.
Now I live in various microcontroller assembly languages, like Atmel, Intel 8051, and, remarkably, the RCA 1802 COSMAC.
C# looks like Java looks like Ruby looks like Yo mama looks like Python; if you have done it as long as I have.
Alan Turing was right.
I don't have much programming experience, but from what I have experienced, OOP seems to be useful for mathematics related things, where everything that could ever be done is already well defined beyond contestation. You won't add a matrix and a complex number together, so it's clear you won't need to adapt to that.
There are three problems with that: a problem with common implementations, a problem with methods, and a problem with mutability. Languages like Java, Python, and Ruby implement objects as heap-allocated, with a pointer indirection that taxes performance with short-lived objects. This makes fast math calculations nigh impossible using objects. The second problem is conceptual: objects are commonly seen as "receiving and transmitting messages" through their methods. However, most mathematical constructions have "operators" that are most commonly binary, that is, they relate _two_ objects. This is incompatible with the classical "method" thinking. The final thing is that objects are commonly thought to encapsulate state and have persistent identities. In mathematics, all state is immutable and there are no identities. (That is, even if you construct π twice, it is the same, never-changing π.) These impedance mismatches make OOP not very good at representing mathematical things.
A matrix is just data. An array would be a better representation of a matrix than a class, where you would still have to use an array to hold the data anyway. Also, for a complex number you can use a struct or a tuple; there's nothing mandating a class to represent complex numbers. If you represent complex numbers as a struct and use an array to hold a matrix of them, you will write code to multiply two matrices together faster than if you write a class for everything and try to over-abstract and over-generalise. The goal of writing code is to get things done, not to satisfy the aesthetic needs of design gurus.
Math is made out of pure functions and immutable values. "Mutable objects" aren't really a thing in math.
The concepts in this video are deeply similar to those in the book "Data-Oriented Programming" by Yehonathan Sharvit
This appears to all be written from personal experience (this and the previous video). My experience has been completely different: bad experiences with functional and procedural code, and some good and bad experiences with OOP. There's a great deal of language like "usually", "sometimes", "often" without any clear-cut examples or comparisons. I would like to know what samples Mr. Will is basing his assumptions and opinions on.
I think it's over-engineered "enterprise" Java
It's all dependent on your use case. These are all focused on enterprise and server programming, from the looks of it. Get into games, and you'll see that a focus on state reigns supreme.
Look at Brian's other videos. He specializes in games.
This idea is pretty similar to Redux and React: all the state is kept in stores, all the changes in state are applied by logic functions called reducers and action creators, and React just displays the UI based on the state.
As well-meant advice: I'd recommend steering away from getting into React if you haven't used it much and/or have to work on a larger/complex project.
I started working on a decently big project using React about 2 years ago. The idea behind it really is great performance-wise, but the way it requires you to declare state dependencies is terrible. It depends on the developer to notify it of any possible change to the dependencies of the current element's visuals, which then invokes the reaction functions that registered said dependency. If you provide a non-primitive as a dependency, it won't detect changes to its properties, so you might be scratching your head for a while asking yourself why the UI doesn't reflect the values you see in the debugger. With the callback shenanigans you'll see the current values in the debugger, but React didn't apply them to the DOM, as the object reference hasn't changed.
If you try to update the state inside a reaction method, you'll get a runtime error, basically stating 'you can't do that'. In a situation where you can only determine that you need to update the state within that method, you'll have to force a re-render, as you have to update the state outside of the reaction method first so the reaction callback can see the change. You basically add a React reference object whose only purpose is to trigger a redraw.
If that somehow triggers a loop, you'll get another runtime-error.
The way it handles state will not let you debug it easily, with the current values shown in the developer tools effectively always being out of date, so they created a complete React DOM explorer as a browser extension. Because it bloats up its own virtual DOM so much, it's pretty hard to navigate, and it requires you to manually click on every state variable of an object to retrieve its current value. The state variables don't even have names, because it doesn't provide a means for that. All state variables are simply saved in an array, where the order of occurrence determines their index. This also means that you're not allowed to increase or decrease the number of state variables for an element.
It may seem easily resolvable by just 'planning ahead', but development doesn't work that way unless you have unlimited resources and can therefore take your time to plan beforehand, read the complete documentation, experiment, etc.
I probably spent a lot more time debugging React-related issues than moving forward with programming.
I'm a programmer at heart, so I personally didn't mind it that much, but if my goal were programming speed, I'd rather use a more mundane environment.
Great video, I was thinking the same thing for some of what you mentioned recently too.
I think of OOP as a tool. Some problems (like the actor model) map really well to OO. Others work better with procedural methods. I'm a big fan of using the right tool for the job (I totally agree on the over use of encapsulation btw).
I don't see how the actor model actually works with OOP. The actor is an entity which is responsible for so many things, and it can generate other actors too, via messages. This is all just a mess.
You should make your own programming language!
The separation of state and logic is pretty much monads and functions in FP. It seems to me like any dissertation on good "OO" always devolves into FP. But the issue with procedural code that you don't address is that you often end up with anemic code: large reuses of value objects that cause widespread typing dependencies. That's where FP solves the issue, with widespread polymorphism.
FP doesn't solve any of the problems OOP creates. It simply creates a lot of new issues of its own. Memory hunger, for instance, because you are not allowed to do in-place computations.
I don’t think you got the inheritance part right; it may encourage people to use inheritance just to “reuse data member declarations”, forgetting all about the Liskov Substitution Principle. Let’s first separate public inheritance (the most common one, as in Java) from private inheritance (what you exemplified with Go). Put simply, public inheritance inherits the data AND the interface, while private inheritance inherits the data WITHOUT changing the interface. Using the public kind when you actually needed the private kind is a mistake much OO code makes. If the language doesn’t support an equivalent private mechanism (Go embedding, private mixins or traits, etc.), just use composition instead.
Long time, no video. Glad to see another one. But honestly, I don't think you will be able to convince anybody with these videos who doesn't already know how bad OO programming is, because humans ...
I think the paradigm used should adapt to the problem at hand. So I kinda disagree with this, because there have been times when OO has served me beautifully, and times when it has not. I think your idea would suffer from that as well. From where I stand, it seems like it's ultimately situational. Any language which enforces a specific paradigm is also a language that restricts certain uses and makes performance compromises for style.
Basically just use C++.
You would probably appreciate a new language I have just released, called Beads, which uses the state-action pattern: you have a small nugget of mutable state, the view code does read-only access on the state, and the event-tracking code updates the state. It includes a graph database and a layout/event/drawing system in the language, so you can build things without external frameworks or libraries. It completely rejects the OOP paradigm: no classes, objects, constructors, destructors, functors, monoids, etc. I am confident you will like it.
Why in the world do you think that I would like it if you reduce control flow to one of a dozen useful models at the language level?
@@lepidoptera9337 I was referring to Mr. Will, as he is not a big fan of the OOP paradigm. I don't know what you mean by reducing control flow; Beads uses a 2-way branch, a multi-way branch, and a looping construct that map to conventional languages rather closely. The innovation in the sequencing of computation is the automatic refresh which occurs when state variables are modified. This is very unusual, and I can't think of another language with that feature.
@@edwarddejong8025 That's because you don't want such a feature. That's exactly the kind of thing that happens in hardware all the time, and it's a nightmare to debug. I would assume that you never had to write FPGA or chip designs in VHDL or Verilog, so you're naive about these things. You probably think that execution order doesn't matter and that one should leave such things to the compiler. In practice that's a deadly mistake whenever programs have non-trivial side effects.
@@lepidoptera9337 Since you haven't read the 130-page user manual or spent any time on the examples, you can't accurately critique the very handy feature that is included in Beads. In a graphical interactive program, you have a model carried in your state variables, and when the model changes, the pure read-only access to the state variables renders the screen. When the user responds with events such as a finger tap or mouse click, the state is updated and the refresh cycle happens. In web apps, where the browser is fairly slow to render, minimizing the amount of the screen that is re-laid out and rendered is of great benefit.
I can assure you that inside a typical function, I am not reordering the instructions, as that would cause chaos for the author.
If you enjoy the state-action-model methodology, and enjoy the clarity of Modula-2 coupled with the convenient indent-significant syntax of Python, you will like Beads. It's an experimental language, just starting out with users taking it for a spin.
I don't design hardware, but I have been programming daily since 1970.
@@edwarddejong8025 I have seen these things decades ago and they always sucked. No need to look at another failed concept, thank you very much.
Look to Erlang (Elixir) processes and message passing between them.
If you appreciated this suggested approach, you may be interested in the C3 Language.
This clears up a lot.