Nowadays my feeling is that Java developers and frameworks tend to use less and less inheritance. What I hate about Java programming is not actually the OO/design-pattern abuse (which is becoming rarer nowadays) but the abuse of annotations and reflection, which brings implicit magic to your program and totally hides control flow, so frameworks become Documentation/CopyPaste/GuessTryRepeat Oriented Programming. That is the annoying part.
Where do I put the breakpoint or print statement when using annotations and reflection? If code doesn't run procedurally line by line, then I'm just too dumb to debug it.
My take: OOP is the skeuomorphism of programming. It was very useful for helping programmers conceptualize what was happening in their heads with the metaphor of objects interacting. But now, experienced programmers are realizing that code can be much simpler, do the same work, and be more aligned with the reality of the hardware with procedural programming.
No, OOP is still best for stuff you need to modify and maintain over a long period with multiple programmers. It's really more of a defence mechanism against the things that can go wrong. On the other hand, in systems programming (including cloud systems etc.) - building general environments rather than specific apps, anything that's not going to change much and is probably going to need complete rewriting when it does change - procedural is all you need. And it's better because you are closer to what's actually happening under the hood. You need OOP for apps and interfaces, but for systems and environments, procedural is better.
@@bozdowleder2303 What is it about OOP that enables this, in your opinion? What kind of OOP are you talking about? My experience is exactly the contrary. Java/C++-style OO programs have a lot of state that is hidden but gets modified, and that state affects their behaviour. This means that a method call that gives no outward hints can change the state and cause some subsequent method call to behave in unexpected ways. Another problem related to this is code reuse, where the issue is again that methods do not act independently but most of the time depend on the state of the object. In both these cases the state may not be simple data types but other objects, which tends to multiply this effect. If you're talking about Erlang-style OOP, which is pretty close to the original meaning of object-oriented... well, that's an entirely different kettle of fish, and I would agree that there are aspects of it that help build reliable programs.
@@MrChelovek68 In interface design oop is more intuitive. For example it's hard to imagine a procedural version of CSS. But otherwise it's more about protecting you from the bad things that can happen when the same code base is maintained and modified by multiple programmers.
@@bozdowleder2303 This is really the most important point. OOP is for the type of software where developers don't control the flow - the user does. Objects "do what they do"; it's a skeuomorphic model of the world, and the user pushes around the bits. It's perfect for UIs and websites. If it's a database, the user asks questions, defines filters, and the OO software gives answers, presented as the user wants them. The quid pro quo is that there is simply no control flow to debug. The answer to "what should it do end-to-end in this case" is "meh, I don't know, let's do it and see; as long as the bits meet their API functionality, that's all I can ask". Whereas for non-user-facing software, the software has a defined flow at design time. If you want to model the weather, it takes an input file of observations, runs some physics equations, and produces an output file. Or the embedded controller for a jet engine: I don't want the jet engine to be one of a Class of engines; there's just one, and it had better not explode. This should be procedural. Anything else is just un-debuggable madness, because even the developer doesn't have a clue what it's supposed to do in particular circumstances, as that has all been abstracted away. If you want to watch somebody squirm, take one of those OOP/functional people, give them a stack frame, and ask them what went wrong. They can't. They can't even understand the question.
What if you could declare procedures inside structures and the compiler would then add an implicit first argument named e.g. "self"? That will probably never catch on.
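Tongue-in-cheek noted - that's essentially what several languages already do. In Python, for instance, a method really is just a function stored on a type whose first parameter (conventionally `self`) receives the instance. A minimal sketch, with a made-up `Point` type for illustration:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    # Just a procedure declared inside a "structure"; the instance arrives as self.
    def magnitude(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

p = Point(3, 4)

# These two calls are equivalent: attribute syntax merely supplies
# the first argument implicitly.
print(p.magnitude())        # 5.0
print(Point.magnitude(p))   # 5.0
```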
@@isuckatthisgame No need for any hype. Procedural is the original and best, it has been with us since forever, it's the way computers work. All else is an attempt to impose human hallucinations (abstractions) onto reality.
@@toby9999 It's completely true. At the same time, the features classes provide could be delivered in a different way (think of how Go and Rust deliver these features without classes). In OOP languages, sadly, since the class is the main abstraction, you are limited to this ideology of functions and state being tied to a module.
As someone who learnt Procedural programming with Pascal and Ansi C, OOP always seemed weird and more complicated to me. Not saying it's not useful in some cases, to me it just seems overcomplicated when Procedural Programming can do the job just fine. K.I.S.S.
If you look at modern frameworks like Laravel that use tons of classes, you'll find that in virtually every case they're using a single instance for every class. Zend Framework/Laminas is even more extreme, as you don't ever instantiate a class, instead you're forced to use a "factory" to get a reference to the one singleton instance. That is not OOP, that is procedural programming with objects. If you don't ever use more than one object for each class, you don't need objects. Your class is just a package with package variables.
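To make the "class as a package with package variables" point concrete, here is a hedged sketch (the `Counter` name and the functions are invented for illustration, not taken from Laravel or Laminas) showing that a single-instance class and plain top-level state express the same thing:

```python
# OOP version: a class that will only ever have one instance.
class Counter:
    def __init__(self):
        self._count = 0
    def increment(self):
        self._count += 1
    def value(self):
        return self._count

counter = Counter()  # the one and only instance, handed out by a "factory"

# Procedural version: the same thing as top-level functions and a variable
# (in a real program these would live in their own module/package).
_count = 0

def increment():
    global _count
    _count += 1

def value():
    return _count

counter.increment()
increment()
print(counter.value(), value())  # 1 1
```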
The more extreme part about Laravel and Symfony is that you end up using methods such as HTML::escape and loading something like 20 classes on startup of the program just to call htmlspecialchars inside the method. After that people wonder why a small login screen takes 500MB of RAM and is a 100MB+ project, instead of 100K of RAM for a 100K project, assets and JS included.
You forget about one very important aspect of Object-Oriented Programming: the ergonomics of discovering what can be done with an ADT by simply hitting the “.”. That is why Object-Oriented is so popular, especially for creating programming APIs, and why strong typing is now prevalent.
in languages that you know/use .... in languages that are actually important - COBOL and RPG - these things aren't optional. And OO is a non-issue, it attempts to solve problems we don't actually have on real operating systems.
@@andreydonkrot- "begin/end" just came over from Algol 60, the parent of the "Algol-like" languages. I thought that was the right way to do it. C changed the notation, and it was so influential that almost every language followed suit. But because the C developers didn't know where to put the braces, now hardly anybody knows, to the detriment of clarity in programming. Edit: Actually, it started with "B", the predecessor of "C", or perhaps with BCPL. Edit: I just looked up some BCPL code from Martin Richards's website (he's the creator of BCPL). He did, indeed, introduce the braces, and he uses them in Algol style, i.e. according to Dijkstra's rule as set forth in "A Method of Programming". So he is not at fault, as I expected.
Inheritance is good if used to add/compose one layer of functionality. Take the standard webpage or controller object and add auth, logging, and configuration to the base class. Makes it easy to make global changes to cross cutting functionality. This can be accomplished in other ways however. The real problem with OOP is layers and it’s slow. I worked on systems that had 3 layers and they weren’t too bad. Then I worked on a system where single responsibility was taken literally and each layer had one line of code in it. That thinking led to 25 layers, massive complexity, that was impossible to step thru.
In the '80s I was taught ADT programming - Abstract Data Type - with Pascal and Modula. When OOP showed up, it was your data definitions and their procedures stuffed into a single file.
Richard, along with your reasons for this trend back to procedural, there may be a wider-scope effect at play here: the lifting of the so-called OOP pillars (messaging) up and out to higher inter-org abstractions. There was a time when local compilation, local services and (custom) libraries were part of a smaller local geographic and administrative ecosystem. The notion of a loosely coupled set of non-local services - SOA, micro-services, whatever [WAN-ish] - has pushed the interface farther up and out, relying on a higher degree of inter-org definition, accepted standards, and/or trust. So it makes sense that procedural code would rise from the ashes again: the "pillars of OOP" have been subsumed by other inter-cloud interfacing standards, APIs, whatnot. If for no other reason, there is more use of procedural coding as the simple local glue for all the many published "WAN" interfaces.
I think there is also a meta-point about how trendy and fashionable languages and paradigms can be. The zeal for FP feels like how OOP was, and the zeal for Rust is similar to how people spoke about Java.
problem with Java - aside from the fact that it's inherently crappy - is that it is now under the control of a psychopath billionaire. Organizations like Bank of Nova Scotia have already banned the use of Java in their organizations (as in LAST YEAR).
A simple form of oo can be done even in COBOL without oo language extensions. Just make one DATA DIVISION + PROCEDURE DIVISION per object, an ENTRY for each public method …
Great talk. OOP always seemed dodgy, particularly w.r.t. maintenance. Modules (Pascal had them) were the crux of modern programming. Now that we have AI for code generation and better compilers fixing a lot of the old problems, it's back to 'C'.
The speaker doesn't really remember the '90s. He was a kid. What this history doesn't include is the explosion of C-with-classes-type dialects that came out in the late '80s and early '90s. There was also an explosion of Pascals with classes, and pretty much every language received a class-based object system. That's because object-oriented programming was PHENOMENALLY POPULAR, and I feel that we're being undersold on this because people's memories of this era are so poor. You really just need to ask a programmer who remembers those days a little better. Despite this, I agree with the main point of this video: that OOP peaked a while ago and that more and more programmers want to ditch it for procedural and functional alternatives. In my opinion, OOP was a bit of a mistake, and the problem set which suits late-bound encapsulated objects is actually pretty small.
OOP was great for SDKs and Frameworks and that is probably why it took off because it solved the problems of the SDK and Framework publishers (and OS api abstraction in particular). It’s also great if you get the domain model right, but software engineers seem to be bad at that early in a project and for OOP, early is exactly when you need to have nailed the hierarchy.
Interesting talk with some nice higher-level perspectives taking history into account. What I miss in the talk is the idea of embracing that within the same project you may very well have different areas where different styles need to be applied. In any large project you will see a mix of styles. Like: functional programming for UI, OOP for the architecture of your application and procedural programming for your GPU workload.
Return of procedural programming is just the result of PTSD. OOP was the hot thing, people were trying to make everything as object-oriented as possible, that led to a lot of bad ideas like UML etc. Now people who were hurt by overzealous OOP evangelists are rejecting it wholesale.
And rightly so. If you can't make me understand something like OO in a couple of hours as something that makes sense, then it's not worth it. I still get things done in ILE RPG (on System i). I mean ACTUALLY get things done, and maintainable. All this talk about Java, bla bla - meanwhile, in the real world... we do Mastercard debit acquiring and issuing etc.
The biggest problem with OOP is that people just memorize it but then forget to use it. So many times in interviews I have encountered a person grilling me on OOP concepts, but when I see their code it's next-level bullshit.
I agree with freedomgoddess. Procedural programming never left. A lot of the specialized coding for satellite/instrument control has always been done with procedural programming. Working for both the DOD and then NASA, I have used and continue to use procedural programming. I also do OO programming in both C++ and Python when I think it is appropriate and will lead to more easily expandable systems or subsystems. However, I always start out thinking procedural.
This is the most flat-earth talk I have ever watched: simple examples and preaching instead of actual real-world enterprise-class challenges. I come from a very strong procedural programming background and I enjoyed using it in the right domain; when I learned OOP, it addressed many of the shortcomings I had with designing, changing, maintaining and understanding procedural code. I am not saying there won't be shortcomings, codebase rot and chaos in OOP solutions, but it at least gives you a chance to write some code with engineering principles in mind. Good luck writing remotely quality, solid code for systems that have more than 2 screens. It will start deteriorating the second you need to add a second method to FTP from another endpoint, or the next time the solution requires a different protocol. It will force more method duplication, tight coupling and side effects. Worried that changing a method in a subclass will break your system? Try changing an if statement in a procedural module.
yeah, I agree. The message is clear, and I can understand explanations for the styles differences and certain advantages, but the examples really let down. Exactly as you said, how will this support low coupling and extensibility?
Agreed. We’re not going backwards to writing in C again. We can actually implement OOP the way Kay envisioned. Look at the Actor model and frameworks like Microsoft Orleans. That’s the future.
I like Richard, but a lambda producing a let around a lambda is the first step of OO (an encapsulation, for better or worse): that's what AK meant, not CLOS/MOP (almost AOP). OO is not about classes; a class is a template for objects. You can do OO with just closures. The common alternative is the cloning one, AKA prototype-based (btw, Self is a very interesting alternative to Smalltalk; Traits as object behavior with no properties came from it).
Most of the time I treat classes as structs with methods, a convenient way to associate the data with the functions that operate on the data. It's convenient for simulations where there are a lot of entities with their own state floating around. I could shove everything into a giant table, but then it's harder to conceptualize what's going on. I use inheritance sparingly--only if I really need runtime polymorphism.
If you need runtime polymorphism then you shouldn't be using inheritance at all. The much better way is to use a myType field and if and case statements to distinguish how methods act on variants of the base type.
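For readers unfamiliar with the tag-field style being advocated here, a minimal sketch with a toy shape type (all names invented for illustration):

```python
import math

# Variants of a "base type" distinguished by a type tag, not subclassing.
def make_circle(r):
    return {"kind": "circle", "r": r}

def make_rect(w, h):
    return {"kind": "rect", "w": w, "h": h}

def area(shape):
    # Dispatch on the tag with plain if statements.
    if shape["kind"] == "circle":
        return math.pi * shape["r"] ** 2
    elif shape["kind"] == "rect":
        return shape["w"] * shape["h"]
    raise ValueError(f"unknown kind: {shape['kind']}")

print(area(make_rect(3, 4)))  # 12
```

Whether this beats virtual dispatch is exactly the debate in this thread: the switch keeps every variant's behavior visible in one place, at the cost of touching each such function when a new variant is added.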
I'd argue rust is in a loose sense object oriented too, since it supports structures with encapsulation, methods, and abstraction and polymorphism through traits (equivalent to interfaces in other typical OO languages). It's much more limited compared to what you'd find in languages such as C++ though
Go allows you to effectively and efficiently implement OO programs! You don't need classes and inheritance to do OO. If you have a method - a function attached to a data type - and a mechanism to put an interface in front of it, you have an object. I think what we've seen is the rise of hybrid languages providing easy access to different paradigms, rather than the procedural paradigm per se.
I do inheritance by copying the source from some existing thing and pasting it into my new thing. Now my new thing has all the behaviour of the old thing. But it has zero dependency on the old thing. I can even delete the old thing and the new thing keeps working.
That works until you need to do a mass change. The code that’s copied can morph making mass changes harder. Copying code obviously isn’t DRY. My point in disagreeing is to highlight there is no right way. I’ve done exactly what you mentioned many times and I’ve use inheritance and utility classes. Just depends on the situation. We as a collective need to stop looking at styles and languages as absolutes and do what makes sense which is what’s easiest and meets requirements.
@@Lewehot Well, I made that post with half my tongue in my cheek. I was sort of bashing on people who inherit from something as a way of making a slightly different version of that something's code, overriding this and that without any particular rhyme or reason, and ending up with code that has dependencies on whatever it inherited from for no useful reason. Just the same, but different in many odd ways. More philosophically... Apparently I, as a human, have inherited properties from my mother and father, and their mothers and fathers, etc., all done through copying and pasting of DNA with some mutation thrown in. BUT my existence still does not depend on the existence of my parents or grandparents, long gone, their DNA deleted. Which is a good thing, for me at least :) Conversely, inheritance in C++ and others creates a web of dependencies, which at least I find difficult to deal with. Making changes to it can be as hard as those "mass changes" to copy/pasted code you mentioned. All in all, I agree. Use whatever style/paradigm does the job. Don't get fixated on OOP, functional, DRY, SOLID, whatever. I sometimes get the feeling those catchphrases are just dreamed up by self-proclaimed software engineering faith healers to sell their books, training courses and conference talks - promising snake oil, at a price, to magically cure all your software production problems.
You don't. Sometimes, when you are in a hellish project you have to. Unless that's the case already, you never, ever, want to do late binding. Ideally you want a completely static binary with a perfectly defined call tree. The closest one can get to that is with state machines as far as I know. I would love to see another technique that is nearly as reliable.
After all the functional talks that Richard Feldman has given, I was surprised to see him give a talk with this title. The talk is not about Feldman moving from FP to PP, which would have been controversial for me; instead it was nice to hear him give a talk about all the different programming paradigms. A few nitpicks: Brendan Eich was working at Netscape, not Mozilla, at the time. And Richard, you can use the function-as-property syntax from ES2015, even if you use a JS logo from the '90s ;)
My pet peeve is that C, which was so popular and influential, was written by people who didn't know where to put the braces, and as a result hardly anybody, in most languages, does it right. That includes Mr. Feldman, in his examples here, despite his obvious great knowledge about programming languages. I adhered to Dijkstra's rule*, and since I am now long retired I no longer have to fret about it. *A Method of Programming
13:26 late binding vs static type checking: Weirdly enough I agree with both and think they should coexist. I want to be able to own & customize the final app while preserving its invariants. E.g. if I want the send button on this comment to be a weather-dependent animal, I should be able to do that, then run the app's validation/invariant checks to make sure I didn't break it before hitting save. JavaScript & HTML come pretty close to that idea, but fail gloriously on invariant checks and understandability (everything is minified with 1k deps), and are very limited (to the browser). Some Nix-like rollbacks would be cool for hot-swapping.
I feel like the order of paradigms by niceness is logic > functional > Kay-style OO > procedural > typical OO. I believe the best part of the original OO idea is modularity and message passing (essentially event queues that are handled solely by an FSM "object", instead of being able to reach in and control the inside of something from the outside). Modern-day OO, with inheritance and a more obscure version of namespaced procedures, is the worst IMO. Procedural is very intuitive at first because we're used to sequences of instructions, like in DIY guides and recipes, so it seems natural to communicate with the hardware that way, and it's basically the idea of a Turing machine. But if you read "Can Programming Be Liberated from the von Neumann Style?" by the inventor of BNF, it becomes apparent that statements are way less useful than expressions. Expressions convey the idea of referential transparency, or basically that it should be possible to cut-and-paste the definition of something in place of its name, which means side effects need to be wrapped in monads to turn their action into a form of data. Hard-core functional like Haskell follows this by making everything descriptive rather than imperative. And programming just becomes writing down a specific vocabulary, with everything described in terms of primitive notions (just like math). Logic programming is currently nowhere near as popular as even functional. But if more effort were put into building an ecosystem around it to do what general apps do, then it could be the best. (Check out the Verse programming language, headed by one of the designers of Haskell.) The difference between the logic and functional paradigms is basically the difference between mathematical relations (non-deterministic) and functions (deterministic), and how relations can be solved backwards instead of only run forwards.
This methodology would revolve around specifying a set of constraints on a more general domain, with the output of the program being elements of the feasible set according to those constraints (which could be turned on and off for different use cases). It's well studied in math, in areas like optimization, relational algebra, SAT, and constraint logic programming.
You need to see a shrink. Your "feelings" are all messed up. Message passing was never part of object oriented programming. Not even Kay made that claim and Kay was already crazy. :-)
@ Yes! And I emphasize that this is my subjective opinion. I don't have supporting evidence outside of my own anecdotal experience. But I believe descriptive code is superior to imperative code. And I believe logic goes one step beyond pure functional in that it only lets you state facts and you delegate the finding of the possible solution to the logic engine. Most of my experience building business applications is collecting their definitions of domain events, system entities, the rules governing how they want things to change and other invariants. It seems that actually writing the sequence of actions to take the app from one valid state to the next is more of a side effect of the real task--gathering and verifying the logical facts of this business' "universe". The problem is Prolog syntax is not so pretty and they're lacking lots of the IO stuff we often need. But domain models can always be pure and the IO requirements pushed to the edges like using monads in Haskell. These are my inspirations for the idea: - Verse language by the inventor of Haskell: ua-cam.com/video/OJv8rFap0Nw/v-deo.html&pp=ygUWaW50byB2ZXJzZSBwcm9ncmFtbWluZw%3D%3D - Strange Loop's model-theoretic declarative programming: ua-cam.com/video/R2Aa4PivG0g/v-deo.html&pp=ygUbaSBzZWUgd2hhdCB5b3UgbWVhbiBkYXRhbG9n - Same problem, different paradigms (logic section): ua-cam.com/video/cgVVZMfLjEI/v-deo.htmlsi=XQDfy22zNnxQGGXV&t=1200 - Acceptance testing / defining app requirements/invariants is more important as AI code completion tools improve: ua-cam.com/video/NsOUKfzyZiU/v-deo.html&pp=ygUZYWNjZXB0YW5jZSB0ZXN0aW5nIGlzIG5ldw%3D%3D
@@lepidoptera9337 Quite the antagonist, I see. Well you're free to share an alternative set of beliefs that is more convincing than mine. Otherwise, there's no point to this dialogue.
About his comment finding it hard to believe, or odd, that Alan Kay said it was possible to do OOP in Lisp as early as the late 1960s: well, even before CLOS (the Common Lisp Object System, added in the 1980s), Lisp had first-class functions and closures, so, as clunky as it might have been, yes, it was possible to do OOP, as you can encapsulate the environment and hide variables or expose them with functions inside functions.
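The closures-as-objects technique described here isn't specific to Lisp; the same encapsulation can be sketched in any language with closures. A Python sketch (`make_account` and its "methods" are made up for illustration):

```python
def make_account(balance):
    # balance lives in the enclosing scope: hidden state, no class needed.
    def deposit(amount):
        nonlocal balance
        balance += amount
        return balance

    def get_balance():
        return balance

    # The returned functions are the object's "methods"; balance is private.
    return {"deposit": deposit, "balance": get_balance}

acct = make_account(100)
acct["deposit"](50)
print(acct["balance"]())  # 150
```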
You're saying how Python was supposedly influenced by Simula, but original Python didn't have any class support that I know of, unless I'm mistaken? And I think it got added kind of as an afterthought (which is why it's kind of clunky), but for those knowing Python's history well, let me know if I'm wrong.
I came from the pre-procedural time. There was a reason why OOP was popular. The felt freedom in procedural languages comes with a price: it's so easy to make a mess, especially when multiple people work on the same stuff for years. Fixing that code feels like real work. OOP gave structure and localized the issues. But I always tried to look past the hype and use common sense. I guess that's what's happening now. But don't think that procedural coding is just heaven either!
OOP was never popular among programmers with experience. It was popular with middle management who were not programmers because they had been sold on it and by architects who are micromanagers. OOP doesn't give structure. It locks you into the architect's idea of what the code has to look like. Since your architect is not god and doesn't have perfect foresight, 99 out of 100 times that structure is wrong. Procedural team development is extremely simple. You assign libraries to individual team members and make sure that your interfaces are well defined and don't change.
What Alan Kay meant by Lisp is more like Interlisp LOOPS, MIT Flavors, the Common Lisp Object System and other Smalltalk-like Lisp-based environments. Lisp has a rich history.
I think that Kay just meant what he said - that it was possible to implement object systems in Lisp and Smalltalk themselves. CLOS was only adopted to provide a standardized way of doing OOP in Lisp. I don't know why that fairly obvious point baffles Feldman, unless he's being deliberately obtuse.
Thank you. IMHO, OOP like Java requires you to write more code: before solving the problem you have to think about abstractions, best practices, etc. Functions are abstractions too, but thinner and more direct. Using interfaces and other indirect abstractions may work for projects or application layers that might change over time, like DB access or auth, but not all projects are subject to those changes. Overall, it's subjective, but simplicity plays a huge role in choosing one language over another.
In addition to simplicity, the developer must understand what is happening, and OOP is ideal in this regard. As for abstractions, assembler is also an abstraction - use it. Machine code is also an abstraction - you can use it; there is simply nothing lower-level. I don't understand what there is to dislike about the obvious layout of components in Java. Functional programming is all the same, only it ties your hands much more. OOP is literally optimal for everyone, but the functional crowd point-blank refuses to notice the obvious.
@MrChelovek68 I think it depends on the developer's way of thinking. I started with procedural programming in Python, C, then PHP. It feels like those languages formatted my brain into a procedural way of thinking and writing code, so creating classes feels like translating my thoughts into another language. I can still see the benefits of OOP though.
@@icantchosemyname It's funny - I studied Pascal, then C#, and now C. But I have not seen anything more convenient and intuitive than OOP. In fact, OOP in PHP is well documented. As for the formatting, I strongly agree. My comment is just about "why OOP is used"; everything else is dreams and shadows. I am quite familiar with higher mathematics, but when functional programming begins, my brain rejects it as something alien. I don't know why, but it's counterintuitive. Purely to my taste, OOP - and in particular languages like Java or C# - is ideal, because it does not adjust a person's thinking to the machine but, on the contrary, allows anyone to translate a thought into code. At the same time, I dearly love both C and Pascal for their complete freedom of action.
20:00 I don’t think the industry actually moved away from messaging and late binding, only from specific implementations, in pursuit of performance. The static type checking boom came only when type checking had advanced enough to check late binding (generic types, trait constraints, gradual typing) and messaging (the borrow checker), so it is not actually against those ideas. Not to mention the microservices paradigm is a system-level realization of messaging and late binding, along with renewed, persistent interest in Erlang's BEAM.
I started programming in 1974. I realized very early (late 80's) that OO was more difficult to teach, and far less productive especially for average programmers. It is also far more difficult to analyse and debug. Procedural programming needs one thing to render it truly useful - an integrated memory database. Relatively easy to do, this provides procedural programming with all of the (tentative) advantages in OO due to classes / objects being able to retain complex data at run-time, and doesn't add any of the baggage. I've been managing / writing systems in procedural programming with an integrated memory database for 2 decades now, and I am quite sure that in terms of the programming paradigms available today it is the best trade-off. AI might result in new trends, let's see.
@@MrHopp24 I guess you could use a very basic system like that, but for complex systems you actually need a relational database, even if only with minimal functionality. You do not need sql (at all), just row level access to data, and perhaps some kind of indexing ability (often not required). Essentially, the data concept behind class and object with relations is a very good idea and almost indispensable for complex systems, but it is (in my opinion) a bad idea to couple it with the programming language. Procedural access to a minimalist memory database is all you need. I have managed and programmed highly complex systems (over a million lines of source code) over the last 20 years, and the resulting programs are simple to code, understand, and debug. No hidden nothing, everything is apparent. Where required (and this is often) the entire memory database can be dumped onto disk to store state and essential data, and reloaded at startup. I am not so sure this is easy with class and object.
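As a rough, hypothetical sketch of the minimal "memory database" idea being described (a toy, not the commenter's actual system): rows keyed by id, one optional secondary index, and a dump-to-disk function for saving state:

```python
import json

table = {}           # row id -> row dict ("row level access to data")
index_by_name = {}   # optional secondary index: name -> set of row ids

def insert(row_id, row):
    table[row_id] = row
    index_by_name.setdefault(row["name"], set()).add(row_id)

def find_by_name(name):
    # Index lookup instead of scanning every row.
    return [table[i] for i in index_by_name.get(name, ())]

def dump(path):
    # Persist the entire memory database to disk to store state,
    # to be reloaded at startup.
    with open(path, "w") as f:
        json.dump(table, f)

insert(1, {"name": "pump", "status": "on"})
insert(2, {"name": "valve", "status": "closed"})
print(find_by_name("pump"))  # [{'name': 'pump', 'status': 'on'}]
```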
@@rs8197-dms I recently had to implement and debug a rather obscure topology decoding algorithm. Being able to just dump the program state to disk at every step and analyse the resulting data flow in a spreadsheet was key to getting it right. I've worked with IBM mainframes before, and working with count key data (CKD) formatted storage was the most pleasant programming experience I've had in years. OOP too often leads to convoluted and deeply hierarchical data (mostly by accident) that's hard to parse and reason about. I too moved away from that many years ago.
OO is just fancy message passing with lots of helper stuff so you don't need to manually check what the message is all the time. It's nice in some instances but it's a bit broken in others. I think it can lead to people getting confused. But what do I know?
OOP has nothing to do with message passing. That's just another meme of the internet crowd that knows nothing about computer science. So that's you, then. ;-)
@lepidoptera9337 cool story bro. How do you think the objects pass information to each other? Magic? Unix/POSIX has used it since the start (pretty much). C++ etc. use it. Message passing is just a way that programs and objects/functions etc. understand how to communicate with each other. I don't need computer "science" to understand how basic messaging works.
@skilletpan5674 Most OOP languages implement old style function calls. Absolutely nobody in their right mind implements message passing outside of the context of distributed systems. You need to take a few CS classes.
procedural falls apart when you have large sets of structs (like nodes in a syntax tree) and you need to call functions which each type implements differently. Having overriding to build dispatch tables is way easier than using function pointers and switching on type IDs.
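The trade-off described above can be sketched in Python (the node types and function names are made up for illustration): a hand-maintained dispatch table keyed on a type tag, next to the overriding version where dispatch is implicit.

```python
# Procedural style: a type tag plus a dispatch table of plain functions.
def eval_num(node):
    return node["value"]

def eval_add(node):
    return evaluate(node["left"]) + evaluate(node["right"])

DISPATCH = {"num": eval_num, "add": eval_add}

def evaluate(node):
    # Switching on the type tag by hand; adding a node kind means
    # remembering to update this table.
    return DISPATCH[node["type"]](node)

# OO style: each node type overrides evaluate(); dispatch is implicit.
class Num:
    def __init__(self, value):
        self.value = value
    def evaluate(self):
        return self.value

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def evaluate(self):
        return self.left.evaluate() + self.right.evaluate()

tree = {"type": "add",
        "left": {"type": "num", "value": 2},
        "right": {"type": "num", "value": 3}}
assert evaluate(tree) == 5
assert Add(Num(2), Num(3)).evaluate() == 5
```

Both produce the same result; the difference is whether the "which function for which type" bookkeeping lives in one table you maintain or in the classes themselves.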
Procedural is a technique and has its limitations. OOP is also a technique. You just have to use the right tool for the problem. Maybe 90+% of problems can be solved procedurally; when needed you introduce OOP or functional (why not calls to quantum in the future?). The problem with OOP and its best practices is that now the push is "We have a hammer, and everything you do has to be done with the hammer, as this is the best practice and the only professional way." What happens when you need a small screwdriver? Well, we all know it. The stats for a very long time have been that around 90% of projects fail before the first production release, and things are getting worse.
I agree, I would rather have a large OOP code base than a large procedural code base. With microservices, code is smaller and procedural makes more sense. I think reducing code base size is driving us back to procedures.
OO. Back in the day, we called it OOPS. I wrote some code in Ada, before I left the game. My main language was IBM/370 Assembler, but I also wrote in PL/I, Cobol and Fortran. My very first language was IBM 1440 Autocoder. The 1440 was a scaled-down version of the 1401. In those days, an expert programmer was one who could squeeze the most out of 4000 6-bit characters in RAM and instruction timings in milliseconds. Today, it looks like chaotic spaghetti, but you couldn't do too much in such a small program. We used op-codes as constants, modified code on the fly. It had variable word lengths, so we could fool around with word marks. It was the baling-wire-and-canvas age of programming. There was no separation of code and data, which made all kinds of hair-raising things possible, but self-modifying code was like a precursor of neural networks and AI.
Even though I do object-oriented programming, I've really grown to hate "inheritance hell", when there are long chains of A inherits from B which inherits from C. Maybe this is addressed in the video (I haven't watched the whole thing yet) but I assume part of the shift away from OO is people getting fed up with inheritance hell. Personally, I think I might like a language that still has objects/classes, but no inheritance. EDIT: now that I'm halfway through, yep this is where he talks about that exact issue. Specifically, he talks about composition being preferred over inheritance. He also describes classes without inheritance as being basically nested structs, which huh I never thought of that.
then what happens if you have some class and you want to add an extra field and some methods? Now you have to pack it into the same class and mix it all together, which is not great either. The bigger culprit tends to be bad APIs that weren't well designed or grew out of control over time.
Me personally, I've been impressed by most *_Library Devs_* and what they've done with their APIs. So, I would be curious to see some examples of libs with "inheritance hell". My impression is that "inheritance hell" comes from "the business layer", as a result of devs always being on a time crunch, which leads to them just throwing shit together (it probably wasn't the best decision, but since they had a deadline, they just "went with it").
Unfortunately OOP is often taught and understood as `class Dog extends Animal`, which is the worst way to explain OOP and OOM (right up there with the other extreme, IEnterpriseAbstractFactoryProvider). Marrying functional and object-oriented as well as compositional patterns is the way to go. Use the right tool for the job. I like neither Jai nor Odin; they lack expressiveness. And I come from an Assembly and C background (then C++/Java/ObjC, then Python, and currently mostly C# and a bit of TS). If I want a good alt-C, I use Zig. If I need C++ interop, I use Nim. Rust is absolutely an OO language: it has traits and methods that accompany and operate on the data types they are declared for.
@@andrewf8366 I'd say OO is when objects themselves do things. Procedural is functions changing objects that just store data. Functional is functions returning new data based only on their inputs. The best way to go IMO is a mix of all 3: OO gives you really nice abstractions with interfaces, procedural is great for IO, and functional is great for business logic - extremely easy to unit test due to pure functions. That's why I really enjoy C#; it's great at all 3 (functional is getting better :))
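The "pure functions are easy to unit test" point above can be sketched like this (a made-up Python example rather than C#; the function name and rule are illustrative):

```python
def apply_discount(price, rate):
    # Pure: the output depends only on the inputs.
    # No hidden object state to construct or mock before testing.
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

# Testing needs no fixtures, mocks, or object graphs - just inputs and outputs:
assert apply_discount(100.0, 0.25) == 75.0
```

Because nothing outside the arguments affects the result, every test is a one-liner, which is exactly why business logic benefits from this style.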
I'm curious about your thoughts on disliking Odin. I spent a decade with C#, and over the past year, I've been exploring all of the new C-likes. Out of all of the ones I've tried, Odin has stuck. There are definitely features that I would love from Zig/C3, but overall, Odin is robust enough. I've found it to be the easiest to translate thought to code. To be honest, I would use plain C if it wasn't for Windows; Linux made it so much simpler. Mostly due to my lack of effort to learn...
Even within C++, I find myself reaching less for OOP and more for procedural or functional solutions to my problems. So, it's not always accurate to assume that C++ == OOP
You could say the same about PHP. And probably all multiparadigm languages that allow both OOP and procedural. Given the general idea of the talk, I'd say it's fine, there's wide brushes over everything used in the talk.
I think the most important factor in this equation is the philosophy of the ecosystem. You might be writing without classes at the application level, however most of the libraries you are using are most probably relying on classes anyway.
I wonder if the rise of gRPC and microservice architecture, worker agents, and orchestration messaging systems has something to do with procedural programming being seen more. Workers are getting something like a RabbitMQ message or an RPC call to "do your task with this data".
I remember mentioning procedural in an interview decades ago; it ended abruptly and I was walked out the door. My, how things have changed. OOP is slow, and this confirms my saying: speed always wins.
Too bad the examples were too sandbox-like, like the example with FtpDownloader, PatentJob and Config. Clearly, the procedural version raises a few concerns: how do you inject dependencies? How do you swap the implementations with test stubs for test isolation?
Yeah, I think message passing, encapsulation, late binding, all of that just moved a level up with microservices due to the scale. And then services themselves don't need so much code, so a less hierarchical procedural style came back. If anything, the ideas of OOP just scaled up out of a single node; they didn't become less popular.
The only thing useful about OOP is encapsulation. The ability to statically guarantee that invariants are maintained by limiting access to values to only certain procedures.
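A minimal sketch of that idea, in Python (where the guarantee is enforced at runtime rather than statically, and the counter is a made-up example): the value can only be touched through procedures that preserve the invariant.

```python
class NonNegativeCounter:
    """Invariant: the count never goes below zero."""

    def __init__(self):
        self._count = 0  # "private" by convention; all access goes through methods

    def increment(self):
        self._count += 1

    def decrement(self):
        # The invariant is checked here, the only place decrements happen.
        if self._count == 0:
            raise ValueError("count cannot go negative")
        self._count -= 1

    def value(self):
        return self._count

c = NonNegativeCounter()
c.increment()
c.decrement()
assert c.value() == 0
```

In a language with real access control (C with an opaque struct, Java with `private`) the compiler rejects outside writes; Python only signals intent with the underscore, so this is the runtime analogue of the static guarantee described above.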
No, *_Polymorphism_* is the most valuable aspect of OOP, imo. Although I am seeing a lot of people in the comments saying it's ADTs for code completion (via the dot operator).
this is a misunderstanding of what object-oriented programming is. Even this guy, who has read a lot about programming, misunderstands Lisp like most people do. Lisp has always been a multiparadigm programming language; it never described itself as "functional". Functional was just one of the tools that could be used in Lisp, but when SML was made, all the FP community left Lisp, and it wasn't until Rich Hickey that there was a new Lisp focusing, for the very first time, on FP. If you want to think of a main programming paradigm for Lisp, it would be Symbolic Programming, not functional and not object-oriented, and CLOS is a symbolic-programming approach to object orientation.
Disillusionment sounds weird in that a lot of people weren't "illusioned" by single-inheritance OOP to begin with. Also, the return of procedural programming has been happening for about a decade, with talks like this that followed soon: ua-cam.com/video/mrY6xrWp3Gs/v-deo.html
Most OOP is procedural programming. Renaming a procedure a "method" doesn't change that. OOP without inheritance is actually nothing other than what people used to call "libraries".
I don't know about that, but it did have success as a teaching language for awhile. I don't know how long that lasted, but I did work on one commercial product that was written in Pascal, CDC's communication processor for 6000-series computers in the late 1970s.
We could always go back to flowcharts and Assembly Language for concise code. You can even write self-modifying code. Who needs a typeless interpreted script that dares to call itself a programming language?
The beginner needs it! I'm strongly in favor of flowcharting and assembler, but only for a limited problem domain. It is simply a terrible waste of human lifespan to code in assembler for most things.
@JimLecka A beginner to a language will often make assumptions based upon their experiences with other languages that may have unintended consequences. For example a C programmer may have difficulty with Python, especially for if cases.
@@99bobcain Amongst other activities, I have taught introductory programming to completely raw beginners, at the rate of 500-1000 people per semester. The very first thing is to give them something simple to copy and type in, like "hello world". About 10% fail and drop at this point. Then show them how to change "hello world" to something else, like their name. Success at this point is their first positive feedback. Then gradually more concepts, some history, and learning by doing simple exercises. It is a long way down the trail to get to concepts like actual bit representations: I am happy if they get to use one (1) numeric type [best a default float] and simple character strings, with some control logic. The idea is to get them up to the point where we can introduce them to a real programming language in the next semester.
Most if not all really complex systems would be impossible without OOP, because it provides encapsulation: the ability to restrict access to a bundle of data to a small number of well defined operations, and enforce invariants. The hardware doesn’t care: OOP is there to keep programmers honest. After compilation, of course, what you have is just procedural code.
That's total bullshit. You can do encapsulation simply with namespaces. What really happens with OOP is the opposite: if you restrict a programmer from choosing the right solution, then he will be forced to choose the wrong one, since he has to deliver working code any which way.
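As a sketch of the namespace point, here is encapsulation with no classes at all, using a closure as the "namespace" (a made-up Python example; in C the equivalent would be file-scope `static` variables and functions):

```python
def make_counter():
    # The variable `count` is reachable only through the two
    # closures below - encapsulation without any class.
    count = 0

    def increment():
        nonlocal count
        count += 1

    def value():
        return count

    return increment, value

increment, value = make_counter()
increment()
increment()
assert value() == 2
```

Nothing outside `make_counter` can touch `count` except via the returned procedures, which is the same access restriction a class would give, delivered by scoping alone.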
IMHO, C++ is used for big programming such as networked cloud services, while C is used for fast, efficient, time-critical work such as real-time programming. They are not substitutes and don't compete.
Everything you can do in C you can also do in C++ and the other way around. The problem with C++ is the learning curve, which is extremely steep. You are unlikely to find enough experienced C++ programmers for a large job, which means that your juniors will cause a lot of trouble by not understanding the language.
I don't understand the backlash against OOP. To me, OOP is the natural way of programming, because, well, the world is full of classes and objects. It reminds me of Plato's Theory of Forms (and functional programming reminds me of Zeno's Monism). Disclaimer: all I know of Plato and Zeno is what I saw in senior high school. Another way of looking at OOP is that a program is a machine, with all of its parts (objects) working together to perform some work. For medium to big systems, I don't think there's a better programming paradigm than OOP. On the other hand, I hate deep hierarchies, and especially, I hate virtual methods.
Sometimes it goes too far. I see that in Java code bases. The abstraction hides the overall flow to the point where the logic becomes incomprehensible.
@@toby9999 Yes, it's true. In my current job I work with a huge C++ codebase, with lots of deep hierarchies, a lot of use of multiple inheritance, everything is virtual... I hate it (it's also heavily multithreaded, but that makes sense). In my previous job (20 years ago) the codebase was C++/MFC, lots and lots of classes *and threads*. That system had been designed by a real architect 😁, not a software architect. Someone should write a book called The Zen of OOP.
The natural way of programming is how the computer is supposed to work: execute actions on data. Have you compared modern OOP code to old procedural code for the same tasks in terms of readability, ease of debugging and support, memory usage, and performance? Have you seen projects that could be written in 500-1000 LOC taking 50,000+ LOC and a year to write instead of a week at most? OOP is too much overhead for zero effect.
I'm speaking from decades of programming experience. When I started out, it was all about "structured programming." Then came "structured analysis and design" (and by the way, I loved working with DFDs). Next, object-oriented programming (OOP) arrived on the scene, and maybe functional programming (FP) too? To be honest, I don't have much hands-on experience with FP, but for medium-sized programs, I naturally drift towards OOP. When it comes to large programs or systems, there's no contest: OOP is the way to go. Even systems that aren't written in languages designed for OOP often implement OOP principles. A great example is OpenSSL, which uses object-oriented concepts in C. If you look at the Linux kernel, for instance, you'll find plenty of "objects", though perhaps not formal "classes". Many of these are called "drivers", but other components, like the VFS (Virtual File System) or the VMM (Virtual Memory Manager), can also be thought of as objects.
@@hjxkyw I am also speaking from decades of programming. OOP has its positives in complex systems, but from my experience it should be limited mostly to data objects and module encapsulation, avoiding complex inheritance, dependency injection, and the like as much as possible. But this usage is close to what we had as units in Pascal and now have as modules. My view is that treating code that performs actions over data as a set of interconnected and inheritable objects is not the best way; it leads to many problems and slows developers down, since they tend to think about how to build the set of objects instead of the code and logic to be done. And, well, if you have so much experience, you have been in inheritance hell many times: trying to understand what the 101st descendant of X does and why, only to find that the 35th descendant changed a method and is the one used in this situation, while all descendants from 1 to 35 and after 36 use different implementations. Classes and small one-line methods all over just pollute the code. To read a simple piece of logic you often have to jump across many classes and wonder whether something has state changes from other methods that will change the behavior. I am fully aware that overuse of SOLID and design patterns is the problem here, but, well, this is the typical OOP we get in large corporate systems. I just write classes and objects as modules, to hold given actions on types of data and encapsulate all internal logic. Some use each other, of course, but just by calling each other's methods as messages. So let's say I prefer to stay in the middle between procedural and OOP.
Procedural programming never went away. Of course, certain trends and fads appear in the industry, and old ones sometimes come back, but advanced and experienced programmers use the tools and paradigms that are best for a specific task, whether it is OOP, procedural programming, functional programming, or something else.. Poor programmers write poor code regardless of the programming paradigm.
I don't know if OOP is the problem. Java is a problem because it forces everything into a class. C++ is a problem because it has evolved in to a nightmare of complexity having failed to fix any of the foot guns of C and introducing new foot guns of its own. The new "class" in Javascript is a problem because, well, Javascript.
Nowadays the feeling I have is that Java developers and frameworks tend to use less and less inheritance. What I hate about Java programming is not actually the OO/DesignPatterns abuse (which is becoming rarer nowadays) but the abuse of Annotations and reflection, which brings implicit magic to your program and totally hides control flow, so frameworks become Documentation/CopyPaste/GuessTryRepeat Oriented Programming. This is the annoying part.
Extremely well said.
@@MarioMeyrelles One thing I really didn't like was Spring. Too much magic for my taste.
Where do I put the breakpoint or print statement when using annotations and reflection? If code doesn't run procedurally line by line, then I'm just too dumb to debug it.
composition for the win
@@mortensimonsen1645 If only it was magic. It's just shit.
After 20 years, in 2044:
OOP is back!!
My take: OOP is the skeuomorphism of programming. It was very useful for helping programmers conceptualize what was happening in their heads with the metaphor of objects interacting. But now, experienced programmers are realizing that code can be much simpler, do the same work, and be more aligned with the reality of the hardware with procedural programming.
No, OOP is still best for stuff you need to modify and maintain over a long period with multiple programmers. It's really more of a defence mechanism against the things that can go wrong. On the other hand, in systems programming (including cloud systems etc.) - building general environments rather than specific apps, anything that's not going to change much and that's probably going to need complete rewriting when it does change - procedural is all you need. And it's better because you are closer to what's actually happening under the hood. You need OOP for apps and interfaces, but for systems and environments, procedural is better.
@@bozdowleder2303 What is it about OOP that enables this, in your opinion? What kind of OOP are you talking about? My experience is exactly contrary to that. Java/C++ etc. style OO programs have a lot of state that is hidden but gets modified, and that state affects their behaviour. This means that a method call that gives no outward hints can change the state and cause some subsequent method call to behave in unexpected ways. Another problem related to this is code reuse, where the issue is again that methods do not act independently; most of the time they depend on the state of the object. In both these cases the state may not be simple data types but other objects, which tends to multiply this effect.
If you're talking about Erlang style OOP which is pretty close to original meaning of object oriented.. well, that's entirely different kettle of fish and I would agree that there are aspects that help build reliable programs.
@@bozdowleder2303 OOP is really the simplest way for anyone to understand program logic, because we think in objects.
@@MrChelovek68 In interface design, OOP is more intuitive. For example, it's hard to imagine a procedural version of CSS. But otherwise it's more about protecting you from the bad things that can happen when the same code base is maintained and modified by multiple programmers.
@@bozdowleder2303 This is really the most important point. OOP is for the type of software where developers don't control the flow; the user does. Objects "do what they do"; it's a skeuomorphic model of the world, and the user pushes around the bits. It's perfect for UIs and websites. If it's a database, the user asks questions, defines filters, and the OO software gives answers, presented as the user wants them. The quid pro quo is that there is simply no control flow to debug. The answer to "what should it do end-to-end in this case" is "meh, I don't know, let's do it and see; as long as the bits meet their API functionality, that's all I can ask".
Whereas non-user-facing software, the software has a defined flow at design-time. If you want to model the weather it takes an input file of observations, it runs some physics equations, and produces an output file. Or the embedded controller for a jet engine. I don’t want the jet engine to be one of a Class of engines, there’s just one and it better not explode. This should be procedural. Anything else is just un-debuggable madness, because even the developer doesn’t have a clue what it’s supposed to do in particular circumstances, as that has all been abstracted away.
If you want to watch somebody squirm, try getting one of those OOP functional people, give them a stack-frame, and ask them what went wrong. They can’t. They can’t even understand the question.
it never left. praise procedural.
What if you could declare procedures inside structures, and the compiler would then add an implicit first argument named, e.g., "self"? That will probably never catch on.
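That is, of course, roughly what methods are. Python makes the mechanism visible: a method is just a function stored on a class whose first parameter is `self`, and calling through the class makes the "implicit first argument" explicit (the `Point` example is made up for illustration):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def norm_squared(self):
        # A "procedure inside a structure" whose first argument is the structure.
        return self.x * self.x + self.y * self.y

p = Point(3, 4)
# The implicit first argument becomes explicit when calling through the class:
assert p.norm_squared() == Point.norm_squared(p) == 25
```

`p.norm_squared()` and `Point.norm_squared(p)` are the same call; the dot syntax just fills in `self` for you.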
Hallelujah! Brother.
Feldman guy is creating fake hype over procedural paradigm.
@@isuckatthisgame No need for any hype. Procedural is the original and best, it has been with us since forever, it's the way computers work. All else is an attempt to impose human hallucinations (abstractions) onto reality.
@ Yep, Feldman is ecstatic over something that has been used over and over. Strikes me as woke tbh.
I just tossed all the OOP books I bought at Borders in the 1990s. My relieved bookshelves thank you!
These programming videos about paradigms and history are very inspiring. And helps me be a better programmer.
I thought a reference to Niklaus Wirth's 1976 book, Algorithms + Data Structures = Programs, would be good.
It's always about understanding where to use what rather than what is good and what is bad.
No. Inheritance breaks type inference due to variance. It's provably shit.
@i-am-the-slime No, it's just a tool. Sometimes, it is useful, though not often. It doesn't break anything if it's used appropriately.
@@toby9999 it's completely true. At the same time the features classes provide could be delivered in a different way (think of how go and rust deliver these features without classes). In OOP languages sadly since the class is the main abstraction, you are limited to this ideology of function and state that are tied to a module.
@@toby9999 "Guns don't kill people" type of argument.
As someone who learnt Procedural programming with Pascal and Ansi C, OOP always seemed weird and more complicated to me. Not saying it's not useful in some cases, to me it just seems overcomplicated when Procedural Programming can do the job just fine. K.I.S.S.
It looks like you are a classmate from the Fing .. Pascal, C, green screens, modems, Analysis I, Algebra .. etc.
I also never really grasped the point of OO in reality. Harder to understand, harder to read, achieves the same thing.
Yes.
I normally watch videos on 1.3x speed, but Richard Feldman saves me from having to do that! More content in less time 💌
I watched it at 1.5x but it definitely felt like 1.75 :D
watched at 2x speed lol
Richard Feldman has a built-in 2X speed speaker
I always watch at 2x, I respect my own time.
0.9 was more than enough for me, so I know now that I will not attend a live presentation done by him.
Always a good time when there's a talk from Richard Feldman.
If you look at modern frameworks like Laravel that use tons of classes, you'll find that in virtually every case they're using a single instance for every class. Zend Framework/Laminas is even more extreme, as you don't ever instantiate a class; instead you're forced to use a "factory" to get a reference to the one singleton instance.
That is not OOP, that is procedural programming with objects. If you don't ever use more than one object for each class, you don't need objects. Your class is just a package with package variables.
The more extreme part about Laravel and Symfony is that you get to using methods such as HTML::escape and load something like 20 classes on startup of the program just to call htmlspecialchars inside the method. After that, people wonder why a small login screen takes 500 MB of RAM and is a 100 MB+ project, instead of using 100K of RAM for a 100K project, assets and JS included.
Factories are my 27B/6…
You forget about one very important aspect of object-oriented programming: the ergonomics of discovering what can be done with an ADT by simply hitting the ".". That is why object-oriented is so popular, especially for creating programming APIs, and that's why strong typing is now prevalent.
in languages that you know/use .... in languages that are actually important - COBOL and RPG - these things aren't optional. And OO is a non-issue, it attempts to solve problems we don't actually have on real operating systems.
That isn't an OOP thing, really; it's more of a tooling thing. Go and Rust tools, for example, also have that.
Not mentioning Pascal when talking about procedural programming is a war crime.
Pascal is great, but begin/end killed it. Also, I would say Java is much closer to Turbo Pascal than to C++.
@@andreydonkrot I've never Pascalled, but I really like begin/end in Julia; it's very clear.
@@andreydonkrot- "begin/end" just came over from Algol 60, the parent of "Algol-like" languages. I thought that was the right way to do it. C changed the notation, and it was so influential that almost every language followed suit. But because the C developers didn't know where to put the braces, now hardly anybody knows, to the detriment of clarity in programming.
Edit: Actually, it started with "B", the predecessor of "C", or perhaps with BCPL.
Edit: I just looked up some BCPL code from Martin Richards' website (the creator of BCPL). He did, indeed, introduce the braces, and he uses them in Algol style, i.e. according to Dijkstra's rule as set forth in "A Method of Programming". So he is not at fault, as I expected.
Inheritance is good if used to add/compose one layer of functionality. Take the standard webpage or controller object and add auth, logging, and configuration to the base class. That makes it easy to make global changes to cross-cutting functionality. This can be accomplished in other ways, however. The real problem with OOP is layers, and it's slow. I worked on systems that had 3 layers and they weren't too bad. Then I worked on a system where single responsibility was taken literally and each layer had one line of code in it. That thinking led to 25 layers and massive complexity that was impossible to step through.
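The one-layer pattern described above might look like this (hypothetical Python names; the base class owns the cross-cutting logging, and each subclass supplies only its own behaviour):

```python
class BaseHandler:
    """One layer of cross-cutting functionality shared by every handler."""

    def handle(self, request):
        self.log(request)            # cross-cutting concern lives in one place
        return self.process(request) # behaviour supplied by each subclass

    def log(self, request):
        # Changing this one method changes logging for every handler globally.
        print(f"handling {request!r}")

    def process(self, request):
        raise NotImplementedError

class EchoHandler(BaseHandler):
    def process(self, request):
        return request

assert EchoHandler().handle("ping") == "ping"
```

One base layer keeps the benefit (a single place for global changes) without the 25-layer step-through problem the comment warns about.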
If you set the speed of the video to 0.75 you'll have normal speed.
In the '80s I was taught ADT programming - Abstract Data Types - with Pascal and Modula. When OOP showed up, it was your data definitions and their procedures stuffed into a single file.
Richard, along with your reasons for this trend back to procedural, there may be a wider-scope effect at play here: the lifting of the so-called OOP pillars (messaging) up and out to higher inter-org abstractions. There was a time when local compilation, local services, and (custom) libraries were part of a smaller local geographic and administrative ecosystem. The notion of a loosely coupled set of non-local services - SOA, micro-services, whatever [WAN-ish] - has put the interface farther up and out, relying on a higher degree of inter-org definition, accepted standards, and/or trust. So it makes sense that procedural code would rise from the ashes again; the "pillars of OOP" have been subsumed by other inter-cloud interfacing standards, APIs, what not. If for no other reason, there is more use of procedural coding as the simple local glue for all the many published "WAN" interfaces.
"You're all wrong" -- Gray Haired Smalltalk Programmer
"Hold my beer" White haired FORTRAN dude.
Huh!!! Structured COBOL programmer.
You're all wrong. Even assembly isn't right. Should have just stuck to raw machine code !
"Laughing at this naivety" - while dusting off my box of punched cards.
@@michaelmoorrees3585 Machine code? Pfft! You haven't really lived when you didn't build your own PBX with at least ten extensions out of relays.
I think there is also a meta-point about how trendy and fashionable languages and paradigms can be. The zeal for FP feels like how OOP was, and the zeal for Rust is similar to how people spoke about Java.
The problem with Java - aside from the fact that it's inherently crappy - is that it is now under the control of a psychopath billionaire. Organizations like the Bank of Nova Scotia have already banned the use of Java (as in LAST YEAR).
A simple form of OO can be done even in COBOL without OO language extensions. Just make one DATA DIVISION + PROCEDURE DIVISION per object, and an ENTRY for each public method…
Great talk. OOP always seemed dodgy, particularly w.r.t. maintenance. Modules (Pascal had them) were the crux of modern programming. Now, with AI for code generation and better compilers fixing a lot of the old problems, it's back to C.
AI doesn't fix any problems. It's trained on bad code and it will write bad code. ;-)
The speaker doesn't really remember the '90s; he was a kid. What this history doesn't include is the explosion of C-with-classes-type dialects that came out in the late '80s and early '90s. There was also an explosion of Pascals with classes, and pretty much every language received a class-based object system. That's because object-oriented programming was PHENOMENALLY POPULAR, and I feel that we're being undersold on this because people's memories of this era are so poor.
You really just need to ask a programmer who remembers those days a little better
Despite this, I agree with the main point of this video: that OOP peaked a while ago and that more and more programmers want to ditch it for procedural and functional alternatives.
In my opinion, OOP was a bit of a mistake and the problem set which suits late bound encapsulated objects is actually pretty small.
_Pascals with classes_
COBOL with classes ... yes, that was/is a thing
@@hjxkyw I was a programmer when COBOL got enriched with OOP-facilities, but I've never seen it being used in real life.
That guy must have been gulping coffee for two hours.
Meth???
There are descriptions of procedural programming in Patterns of Enterprise Application Architecture by Martin Fowler.
OOP was great for SDKs and Frameworks and that is probably why it took off because it solved the problems of the SDK and Framework publishers (and OS api abstraction in particular). It’s also great if you get the domain model right, but software engineers seem to be bad at that early in a project and for OOP, early is exactly when you need to have nailed the hierarchy.
Ironically, I have "The Return of Procedural Programming" video on my "OOP" playlist 😂😂
MODULA 3 ?
Interesting talk with some nice higher-level perspectives taking history into account. What I miss in the talk is the idea of embracing that within the same project you may very well have different areas where different styles need to be applied. In any large project you will see a mix of styles. Like: functional programming for UI, OOP for the architecture of your application and procedural programming for your GPU workload.
I hadn't realized it was gone at any point, tbh. It's the backbone of structured programming.
Return of procedural programming is just the result of PTSD. OOP was the hot thing, people were trying to make everything as object-oriented as possible, that led to a lot of bad ideas like UML etc. Now people who were hurt by overzealous OOP evangelists are rejecting it wholesale.
Shhhh, let's see them sink in exposed internal data and come back 😅
And rightly so. If you can't make me understand something like OO in a couple of hours, as something that makes sense, then it's not worth it. I still get things done in ILE RPG (on System i). I mean ACTUALLY get things done, maintainably. All this talk about Java, blah blah; meanwhile, in the real world... we do Mastercard debit acquiring and issuing, etc.
gah - once upon a time there was option explicit,
The biggest problem with OOP is that people just memorize it but then forget to use it. So many times in interviews I have encountered a person grilling me on OOP concepts, but when I see their code, it's next-level bullshit.
Spaghetti code will pay my bills until I retire love it
I agree with freedomgoddess. Procedural programming never left. A lot of the specialized coding for satellite/instrument control has always been done with procedural programming. Working for both DOD and then NASA, I have used and continue to use procedural programming. I also do OO programming in both C++ and Python when I think it is appropriate and will lead to more easily expandable systems or subsystems. However, I always start out thinking procedurally.
This is the most flat-earth talk I have ever watched: simple examples and preaching instead of actual real-world enterprise-class challenges. I come from a very strong procedural programming background, and I enjoyed using it within the right domain; when I learned OOP, it addressed many of the shortcomings I had with designing, changing, maintaining, and understanding procedural code. I am not saying there won't be shortcomings, codebase rot, and chaos in OOP solutions, but it at least gives you a chance to write some code with engineering principles in mind. Good luck writing remotely quality, solid code for systems that have more than two screens; it will start deteriorating the second you need to add a second method to FTP from another endpoint, or the next time the solution requires a different protocol. It will force more method duplication, tight coupling, and side effects. Worried that changing a method in a subclass will break your system? Try changing an if statement in a procedural module.
yeah, I agree. The message is clear, and I can understand explanations for the styles differences and certain advantages, but the examples really let down. Exactly as you said, how will this support low coupling and extensibility?
Agreed. We’re not going backwards to writing in C again. We can actually implement OOP the way Kay envisioned. Look at the Actor model and frameworks like Microsoft Orleans. That’s the future.
Love this 2x speed energy!
I had to slow the video down just to understand this guy, and I'm a native speaker.
I watched this in 2x speed
I bet you know why😂
lol, I've just realized that my playback speed is still 1x, he definitely sounds 2x-ish :)))
I'm not a native speaker, it is a bit hard to follow at full speed, I really have to sharpen my ears 🙂
I like Richard but a lambda proc'ing a let around a lambda is the first step of OO (an encapsulation for the better or worse): that's what AK meant, not the CLOS/MOP (almost AOP). Object is not about class. Class is a template for objects. You can do OO from any closures. The common alternative is the cloning ones AKA prototype-based (btw, Self is a very interesting alternative to Smalltalk; the Traits as object-behavior with no properties came from this one).
I can do OOP simply with name space. Not sure why you are overcomplicating something absolutely trivial. ;-)
Brian Will has already made two compelling videos right here on UA-cam discussing the "OOP is bad" opinion -- WITH EXAMPLES.
When played at 75% this sounds almost normal
So basically original OOP was another word for micro services without the network part...
The basic ideas of microservices can be traced to several other paradigms, like RPC, SOA, or even EJBs.
Most of the time I treat classes as structs with methods, a convenient way to associate the data with the functions that operate on the data. It's convenient for simulations where there are a lot of entities with their own state floating around. I could shove everything into a giant table, but then it's harder to conceptualize what's going on. I use inheritance sparingly--only if I really need runtime polymorphism.
If you need runtime polymorphism then you shouldn't be using inheritance at all. The much better way is to use a myType field and if and case statements to distinguish how methods act on variants of the base type.
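For what it's worth, that tag-field style can be sketched in a few lines of Python (the Shape/kind names here are invented purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Shape:
    kind: str        # type tag: "circle" or "rect"
    a: float         # radius for circles, width for rects
    b: float = 0.0   # unused for circles, height for rects

def area(s: Shape) -> float:
    # Dispatch on the tag field instead of on virtual methods.
    if s.kind == "circle":
        return 3.14159 * s.a * s.a
    elif s.kind == "rect":
        return s.a * s.b
    raise ValueError(f"unknown kind: {s.kind}")
```

The trade-off is the classic one: this makes it easy to add new operations (just write another function that switches on `kind`) but harder to add new variants, since every dispatch site must be touched, while virtual methods invert that.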
Excellent talk!
I'd argue rust is in a loose sense object oriented too, since it supports structures with encapsulation, methods, and abstraction and polymorphism through traits (equivalent to interfaces in other typical OO languages). It's much more limited compared to what you'd find in languages such as C++ though
In the end, "object oriented" doesn't really mean anything on its own, and everyone equivocates on the word.
Go allows you to implement OO programs effectively and efficiently!
You don’t need classes and inheritance to do OO.
If you have a method: a function attached to a data type and a mechanism to put an interface in front of it, you have an object.
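A minimal sketch of that definition in Python, using a structural interface via typing.Protocol (the Writer/ConsoleWriter names are invented for the example):

```python
from typing import Protocol

class Writer(Protocol):
    # The interface callers program against.
    def write(self, msg: str) -> str: ...

class ConsoleWriter:
    # A data type with a function attached: an "object" by the definition above.
    def __init__(self, prefix: str):
        self.prefix = prefix

    def write(self, msg: str) -> str:
        return f"{self.prefix}{msg}"

def notify(w: Writer, msg: str) -> str:
    # Callers see only the interface, never the concrete type.
    return w.write(msg)
```

No classes-with-inheritance needed: any type with a matching `write` method satisfies the interface structurally.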
I think what we’ve seen is the rise of hybrid languages providing easy access to different paradigms, rather than the procedural paradigm per se.
I do inheritance by copying the source from some existing thing and pasting it into my new thing. Now my new thing has all the behaviour of the old thing. But it has zero dependency on the old thing. I can even delete the old thing and the new thing keeps working.
That works until you need to do a mass change. The code that’s copied can morph, making mass changes harder. Copying code obviously isn’t DRY. My point in disagreeing is to highlight that there is no right way. I’ve done exactly what you mentioned many times, and I’ve used inheritance and utility classes. It just depends on the situation. We as a collective need to stop looking at styles and languages as absolutes and do what makes sense, which is whatever is easiest and meets requirements.
@@Lewehot Well, I made that post with my tongue half in my cheek. I was sort of bashing people who inherit from something as a way of making a slightly different version of that something's code, overriding this and that without any particular rhyme or reason, ending up with code that has dependencies on whatever it inherited from for no useful reason. Just the same, but different in many odd ways.
More philosophically... Apparently I, as a human, have inherited properties from my mother and father, and their mothers and fathers, etc. All done through copying and pasting of DNA, with some mutation thrown in. BUT my existence still does not depend on the existence of my parents or grandparents, long gone, their DNA deleted. Which is a good thing, for me at least :) Conversely, inheritance in C++ and others creates a web of dependencies, which I at least find difficult to deal with. Making changes to it can be as hard as those "mass changes" to copy-pasted code you mentioned.
All in all I agree. Use whatever style/paradigm that does the job. Don't get fixated on OOP, Functional, DRY, SOLID, whatever. I sometimes get the feeling those catch phrases are just dreamed up by self proclaimed software engineering faith healers to sell their books, training courses and conference speaking. Promising snake oil, at a price, to magically cure all your software production problems.
Do you mean that you don't really need a factory factory factory factory to build a single hammer? 🤣Shocker!
what's the point of extreme late binding? why would I want my code to do that?
You don't. Sometimes, when you are in a hellish project you have to. Unless that's the case already, you never, ever, want to do late binding. Ideally you want a completely static binary with a perfectly defined call tree. The closest one can get to that is with state machines as far as I know. I would love to see another technique that is nearly as reliable.
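The state-machine technique mentioned above can be sketched as a static transition table in Python (a toy door FSM, invented for the example): the entire behaviour lives in one data structure, so the "call tree" is fully visible up front with nothing bound late.

```python
# Transition table: (state, event) -> next state. All behaviour is
# declared statically in this one structure.
TRANSITIONS = {
    ("closed", "open"): "open",
    ("open", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def step(state: str, event: str) -> str:
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)
```

Because every reachable state and transition is enumerable from the table, this style is easy to audit and exhaustively test, which is presumably the reliability being referred to.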
After all the functional talks that Richard Feldman has given, I was surprised to see him give a talk with this title. The talk is not about Feldman moving from FP to PP, which would have been controversial for me; instead it was nice to hear him give a talk about all the different programming paradigms. A few nitpicks: Brendan Eich was working at Netscape, not Mozilla, at the time. And Richard, you can use the function-as-property syntax from ES2015, even if you use a JS logo from the '90s ;)
My pet peeve is that C, which was so popular and influential, was written by people who didn't know where to put the braces, and as a result hardly anybody, in most languages, does it right. That includes Mr. Feldman, in his examples here, despite his obvious great knowledge about programming languages.
I adhered to Dijkstra's rule*, and since I am now long retired I no longer have to fret about it.
*A Method of Programming
“Pillars of OOP” -> POOP?
Intentional? You be the judge!
But the real question is: was his *_strawman_* of OOP (like @[13:30], with Late Binding & Static Type Checking) also _intentional_???
love a good brian will reference
Great talk. Thanks
13:26 late binding vs static type checking: Weirdly enough I agree with both and think they should coexist.
I want to be able to own & customize the final app, while preserving its invariants.
E.g. If I want the send button on this comment to be a weather-dependent animal, I should be able to do that then run the app's validation/invariant checks to make sure I didn't break it before hitting save.
JavaScript & HTML come pretty close to that idea, but fail gloriously on invariant checks and understandability (everything is minified with 1k deps), and are very limited (to the browser).
Some nix-like rollbacks would be cool for hot-swapping
I feel like the order of paradigms by niceness is logic > functional > Kay-style OO > procedural > typical OO. I believe the best part of the original OO idea is modularity and message passing (essentially event queues that are handled solely by an FSM "object", instead of being able to reach in and control the inside of something from the outside). Modern-day OO, with inheritance and a more obscure version of namespaced procedures, is the worst IMO. Procedural is very intuitive at first because we're used to sequences of instructions, like in DIY guides and recipes, so it seems natural to communicate with the hardware that way; it's basically the idea of a Turing machine.
But if you read "Can Programming Be Liberated from the von Neumann Style?" by the inventor of BNF, it becomes apparent that statements are far less useful than expressions. Expressions convey the idea of referential transparency, basically that it should be possible to cut and paste the definition of something in place of its name, which means side effects need to be wrapped in monads to turn their action into a form of data. Hard-core functional languages like Haskell follow this by making everything descriptive rather than imperative. Programming then just becomes writing down a specific vocabulary, with everything described in terms of primitive notions (just like math).
Logic programming is currently nowhere near as popular as even functional. But if more effort were put into building an ecosystem around it to do what general apps do, it could be the best. (Check out the Verse programming language, headed by the inventor of Haskell.) The difference between the logic and functional paradigms is basically the difference between mathematical relations (non-deterministic) and functions (deterministic), and how relations can be solved backwards instead of only run forwards. This methodology would revolve around specifying a set of constraints on a more general domain, with the output of the program being elements of the feasible set according to those constraints (which could be turned on and off for different use cases). It's well studied in math, as in optimization, relational algebra, SAT, constraint logic programming, etc.
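The "constraints over a domain, output the feasible set" idea can be sketched with naive brute force in Python (a toy stand-in, nowhere near a real logic engine):

```python
from itertools import product

def solve(domain, constraints):
    # Keep only the domain elements that satisfy every constraint.
    return [x for x in domain if all(c(x) for c in constraints)]

# Find (x, y) pairs over a small domain where x + y == 5 and x < y.
pairs = solve(
    product(range(6), repeat=2),
    [lambda p: p[0] + p[1] == 5, lambda p: p[0] < p[1]],
)
# pairs == [(0, 5), (1, 4), (2, 3)]
```

A real solver (Prolog, a CLP system, a SAT solver) searches the same feasible set far more cleverly than enumeration, but the programming model is the same: state facts and constraints, let the engine find the solutions.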
Bro, Logic? Like Prolog?
You need to see a shrink. Your "feelings" are all messed up. Message passing was never part of object oriented programming. Not even Kay made that claim and Kay was already crazy. :-)
@ Yes! And I emphasize that this is my subjective opinion. I don't have supporting evidence outside of my own anecdotal experience. But I believe descriptive code is superior to imperative code. And I believe logic goes one step beyond pure functional in that it only lets you state facts and you delegate the finding of the possible solution to the logic engine. Most of my experience building business applications is collecting their definitions of domain events, system entities, the rules governing how they want things to change and other invariants. It seems that actually writing the sequence of actions to take the app from one valid state to the next is more of a side effect of the real task--gathering and verifying the logical facts of this business' "universe".
The problem is Prolog syntax is not so pretty and they're lacking lots of the IO stuff we often need. But domain models can always be pure and the IO requirements pushed to the edges like using monads in Haskell.
These are my inspirations for the idea:
- Verse language by the inventor of Haskell: ua-cam.com/video/OJv8rFap0Nw/v-deo.html&pp=ygUWaW50byB2ZXJzZSBwcm9ncmFtbWluZw%3D%3D
- Strange Loop's model-theoretic declarative programming: ua-cam.com/video/R2Aa4PivG0g/v-deo.html&pp=ygUbaSBzZWUgd2hhdCB5b3UgbWVhbiBkYXRhbG9n
- Same problem, different paradigms (logic section): ua-cam.com/video/cgVVZMfLjEI/v-deo.htmlsi=XQDfy22zNnxQGGXV&t=1200
- Acceptance testing / defining app requirements/invariants is more important as AI code completion tools improve: ua-cam.com/video/NsOUKfzyZiU/v-deo.html&pp=ygUZYWNjZXB0YW5jZSB0ZXN0aW5nIGlzIG5ldw%3D%3D
@ Yes, that was a lot of bullshit. ;-)
@@lepidoptera9337 Quite the antagonist, I see. Well you're free to share an alternative set of beliefs that is more convincing than mine. Otherwise, there's no point to this dialogue.
About his comment finding it hard to believe, or odd, that Alan Kay said it was possible to do OOP in LISP as early as the late 1960s: well, even before CLOS (the Common Lisp Object System, added in the 1980s), LISP had first-class functions and closures, so, as clunky as it might have been, yes, it was possible to do OOP, since you can encapsulate the environment and hide variables or expose them with functions inside functions.
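A sketch of that closure-as-object trick, in Python rather than Lisp (the counter is a made-up example):

```python
def make_counter(start=0):
    # 'count' lives only in this closure's environment: encapsulated,
    # hidden state with no class machinery at all.
    count = start

    def increment():
        nonlocal count
        count += 1
        return count

    def value():
        return count

    # Expose behavior as a dict of functions: a poor man's object.
    return {"increment": increment, "value": value}
```

Callers can only touch `count` through the exposed functions; the variable itself is unreachable from outside, which is exactly the encapsulation described above.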
You can do OOP in assembly language if you want to. Not that you want to do OOP in any language. It's a bad idea.
Late binding and static type checking are not incompatible. C++'s virtual methods do exactly that. JVM methods are late bound, too.
You're saying Python was supposedly influenced by Simula, but original Python didn't have any class support that I know of, unless I'm mistaken? And I think it got added kind of as an afterthought (which is why it's kind of clunky), but those who know Python's history well, let me know if I'm wrong.
Finally. I always thought OO was not a gain.
When computers were a million times smaller and a thousand times slower, there was no way to luxuriate in that kind of slop.
I came from the pre-procedural time. There was a reason OOP was popular. The felt freedom of procedural languages comes at a price: it's so easy to make a mess, especially when multiple people work on the same stuff for years. Fixing that code feels like real work. OOP gave structure and localized the issues. But I always tried to look past the hype and use common sense. I guess that's what's happening now. But don't think procedural coding is just heaven either!
OOP was never popular among programmers with experience. It was popular with middle management who were not programmers because they had been sold on it and by architects who are micromanagers. OOP doesn't give structure. It locks you into the architect's idea of what the code has to look like. Since your architect is not god and doesn't have perfect foresight, 99 out of 100 times that structure is wrong. Procedural team development is extremely simple. You assign libraries to individual team members and make sure that your interfaces are well defined and don't change.
What Alan Kay meant as lisp is more like Interlisp Loops, MIT Flavors, Common Lisp Object System and other Smalltalk-like lisp-based environments. Lisp has a rich history.
I think that Kay just meant what he said - that it was possible to implement object systems in Lisp and Smalltalk themselves. CLOS was only adopted to provide a standardized way of doing OOP in Lisp. I don't know why that fairly obvious point baffles Feldman, unless he's being deliberately obtuse.
Thank you
IMHO, OOP in the Java style requires you to write more code: before solving the problem, you have to think about abstractions, best practices, etc...
Functions are abstractions too, but they're thinner and more direct.
Using interfaces and other indirect abstractions may work for projects or application layers that might change over time, like DB access or Auth, but not all projects are subject to those changes.
Overall, it's subjective, but simplicity plays a huge role in choosing one language over another.
In addition to simplicity, the developer must understand what is happening, and OOP is ideal in this regard. As for abstractions: assembler is also an abstraction, use it; machine code is also an abstraction, you can use it, and there is simply nothing thinner. I don't understand why people don't like the obvious layout of components in Java. Functional programming is all the same, only it ties your hands much more. OOP is literally optimal for everyone, but the functionalists point-blank refuse to notice the obvious.
@MrChelovek68 I think it depends on the developer's way of thinking. I started with procedural programming in Python, C, then PHP. It feels like those languages formatted my brain into a procedural way of thinking and writing code.
So creating classes feels like translating my thoughts into another language.
I still can see the benefits of OOP though.
@@icantchosemyname It's funny: I studied Pascal, then C#, and now C, but I have never seen anything more convenient and intuitive than OOP. In fact, OOP in PHP is well documented. As for the formatting, I strongly agree) My comment is just about why OOP is used; everything else is dreams and shadows. I'm familiar with higher mathematics, but when functional programming begins, for example, my brain rejects it as something alien. I don't know why, but it's counterintuitive. Purely to my taste, OOP, and in particular languages like Java or C#, is ideal, because it doesn't bend a person's thinking to the machine but, on the contrary, lets anyone translate a thought into code. At the same time, I dearly love both C and Pascal for their complete freedom of action.
20:00 I don’t think the industry actually moved away from messaging and late binding, only from the specific implementations, due to the pursuit of performance. The static type checking boom came only when type checking had advanced enough to check late binding (generic types, trait constraints, gradual typing) and messaging (the borrow checker), so it is not actually against those ideas. Not to mention the microservices paradigm is a system-level realization of messaging and late binding, along with the renewed, persistent interest in Erlang's BEAM.
I started programming in 1974. I realized very early (late 80's) that OO was more difficult to teach, and far less productive especially for average programmers. It is also far more difficult to analyse and debug.
Procedural programming needs one thing to render it truly useful - an integrated memory database. Relatively easy to do, this provides procedural programming with all of the (tentative) advantages in OO due to classes / objects being able to retain complex data at run-time, and doesn't add any of the baggage. I've been managing / writing systems in procedural programming with an integrated memory database for 2 decades now, and I am quite sure that in terms of the programming paradigms available today it is the best trade-off.
AI might result in new trends, let's see.
Interesting.. what is an integrated memory database? Eg A hash table to store global state at runtime ?
@@MrHopp24 I guess you could use a very basic system like that, but for complex systems you actually need a relational database, even if only with minimal functionality. You do not need sql (at all), just row level access to data, and perhaps some kind of indexing ability (often not required). Essentially, the data concept behind class and object with relations is a very good idea and almost indispensable for complex systems, but it is (in my opinion) a bad idea to couple it with the programming language. Procedural access to a minimalist memory database is all you need. I have managed and programmed highly complex systems (over a million lines of source code) over the last 20 years, and the resulting programs are simple to code, understand, and debug. No hidden nothing, everything is apparent.
Where required (and this is often) the entire memory database can be dumped onto disk to store state and essential data, and reloaded at startup. I am not so sure this is easy with class and object.
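I'd guess something along these lines: a toy Python sketch of an in-memory table with procedural row-level access and a whole-state dump/reload (the MemTable name is invented, and the real systems described above are surely far richer):

```python
import json

class MemTable:
    """Rows keyed by id; plain row-level access, no SQL."""

    def __init__(self):
        self.rows = {}

    def insert(self, row_id, row: dict):
        self.rows[row_id] = row

    def get(self, row_id):
        return self.rows.get(row_id)

    def dump(self, path):
        # Entire state serialized to disk in one call...
        with open(path, "w") as f:
            json.dump(self.rows, f)

    def load(self, path):
        # ...and reloadable at startup.
        with open(path) as f:
            self.rows = json.load(f)
```

Nothing is hidden: the data is a visible dict of rows, and dumping or inspecting the whole program state is trivial, which matches the "everything is apparent" point above.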
@@rs8197-dms I recently had to implement and debug a rather obscure topology decoding algorithm. Being able to just dump the program state to disk at every step and analyse the resulting data flow in a spreadsheet was key to getting it right. I've worked with IBM mainframes before, and working with count key data (CKD) formatted storage was the most pleasant programming experience I've had in years. OOP too often leads to convoluted and deeply hierarchical data (mostly by accident) that's hard to parse and reason about. I too moved away from that many years ago.
you just described what elixir/erlang is, welcome to the club!
OO is just fancy message passing with lots of helper stuff so you don't need to manually check what the message is all the time. It's nice in some instances but it's a bit broken in others. I think it can lead to people getting confused. But what do I know?
OOP has nothing to do with message passing. That's just another meme of the internet crowd that knows nothing about computer science. So that's you, then. ;-)
@lepidoptera9337 cool story bro. How do you think the objects pass information to each other? Magic? Unix/posix has used it since the start (pretty much). C++ etc use it.
Message passing is just a way that programs and objects/functions etc understand how to communicate with each other.
I don't need computer "science" to understand how basic messaging works.
@skilletpan5674 Most OOP languages implement old style function calls. Absolutely nobody in their right mind implements message passing outside of the context of distributed systems. You need to take a few CS classes.
Check out Elixir and its functional way of programming... Once you jump in, there is hardly any going back.
static-type-checking has nothing to do with (late or not) binding... you can statically check the type with its interface
13:30 He acts like you cannot change behavior with static-type checking (at runtime) !!!
Seems like late binding would make security reviews of software more difficult when doing static analysis.
Procedural falls apart when you have large sets of structs (like nodes in a syntax tree) and you need to call functions that each type implements differently. Having overriding build the dispatch tables is way easier than function pointers and switching on type IDs.
Procedural is a technique and has its limitations. OOP is also a technique. You just have to use the right tool for the problem: 90+% of problems can be solved procedurally, and when needed you introduce OOP or functional (why not calls to quantum in the future?). The problem with OOP and its best practices is that the push is now "We have a hammer, and everything you do has to be done with the hammer, as this is the best practice and the only professional way." What happens when you need a small screwdriver? Well, we all know it. The stats for a very long time have been that around 90% of projects fail before the first production release, and things are getting worse.
I agree, I would rather have a large OOP code base than a large procedural code base. With microservices, code is smaller and procedural makes more sense. I think reducing code base size is driving us back to procedures.
Disagree, sum types handle syntax tree nodes much, much better than OOP + visitor pattern
I misread the sponsor as Lumon 🤦♀️
OO. Back in the day, we called it OOPS. I wrote some code in Ada, before I left the game. My main language was IBM/370 Assembler, but I also wrote in PL/I, Cobol and Fortran. My very first language was IBM 1440 Autocoder. The 1440 was a scaled-down version of the 1401. In those days, an expert programmer was one who could squeeze the most out of 4000 6-bit characters in RAM and instruction timings in milliseconds. Today, it looks like chaotic spaghetti, but you couldn't do too much in such a small program. We used op-codes as constants, modified code on the fly. It had variable word lengths, so we could fool around with word marks. It was the baling-wire-and-canvas age of programming. There was no separation of code and data, which made all kinds of hair-raising things possible, but self-modifying code was like a precursor of neural networks and AI.
Even though I do object-oriented programming, I've really grown to hate "inheritance hell", when there are long chains of A inherits from B which inherits from C. Maybe this is addressed in the video (I haven't watched the whole thing yet) but I assume part of the shift away from OO is people getting fed up with inheritance hell. Personally, I think I might like a language that still has objects/classes, but no inheritance.
EDIT: now that I'm halfway through, yep this is where he talks about that exact issue. Specifically, he talks about composition being preferred over inheritance. He also describes classes without inheritance as being basically nested structs, which huh I never thought of that.
Then what happens if you have some class and you want to add an extra field and some methods? Now you have to pack it all into the same class and mix it together, which isn't great either. The bigger culprit tends to be bad APIs that weren't well designed or grew out of control over time.
Me personally, I've been impressed by most *_Library Devs_* and what they've done with their APIs. So, I would be curious of some examples of Libs with "inheritance hell" ? My impression is that "inheritance hell" comes from "the business layer" as a result of Devs always being on a time crunch, and hence, leads to them just throwing shit together (that probably wasn't the best decision, but since they had a deadline, they just "went with it").
It’s not Richard Feldman, it’s Ricardo Feldman. He uses his hands to talk 🗣️ more than I use my hands to write code 🧑💻.
Great presentation for understanding what OOP really is, and why it is losing momentum.
Unfortunately OOP is often taught and understood as `class Dog extends Animal`, which is the worst way to explain OOP and OOM (right up there with the other extreme, IEnterpriseAbstractFactoryProvider)
Marrying functional and object-oriented as well as compositional patterns is the way to go. Use the right tool for the job.
I don't like JAI, nor Odin: they lack expressiveness. And I come from an Assembly and C background (then C++/Java/ObjC, then Python, and currently mostly C# and a bit of TS).
If I want a good alt-C, I use Zig. If I need Cpp interop, I use Nim.
Rust is absolutely an OO language. It has traits, and methods that accompany and operate on the data types they are declared for.
So basically OOP means to you that you can use the "." to access functions on data?
@@andrewf8366 I'd say OO is when objects themselves do things. Procedural is functions changing objects that just store data. Functional is functions returning new data based only on their inputs.
Best way to go IMO is a mix of all 3: OO gives you really nice abstractions with interfaces, procedural is great for IO, and functional is great for business logic, which is extremely easy to unit test thanks to pure functions.
That's why I really enjoy C#, it's great at all 3 (functional is getting better :))
I'm curious about your thoughts on disliking Odin. I spent a decade with C#, and over the past year, I've been exploring all of the new C-likes. Out of all of the ones I've tried, Odin has stuck. There are definitely features that I would love from Zig / C3, but overall, Odin is robust enough. I've found it to be the easiest to translate thought to code.
To be honest, I would use plain C if it wasn't for windows. Linux made it so much simpler. Mostly due to lack of effort to learn...
@@cbbbbbbbbbbbb Odin lacks closures, methods, and true generics, a few things that might be considered expressive. I’m a big fan though.
Even within C++, I find myself reaching less for OOP and more for procedural or functional solutions to my problems.
So, it's not always accurate to assume that C++ == OOP
You could say the same about PHP. And probably all multiparadigm languages that allow both OOP and procedural.
Given the general idea of the talk, I'd say it's fine, there's wide brushes over everything used in the talk.
I've always coded C++ in more of a "C with classes" style. Mostly procedural. I do use classes to create abstraction, but minimally.
I think the most important factor in this equation is the philosophy of the ecosystem. You might be writing without classes at the application level, however most of the libraries you are using are most probably relying on classes anyway.
Problem with Scala is that it's far more than just a better Java. Also why I love Scala.
I wonder if the rise of gRPC, microservice system architecture, worker agents, and orchestration messaging systems has something to do with procedural programming being seen more. Workers get something like a RabbitMQ message or an RPC call saying 'do your task with this data'.
I remember mentioning procedural in an interview decades ago; it ended abruptly and I was walked out the door. My, how things have changed. OOP is slow, and this confirms my saying: speed always wins.
24:06 a minute of silence for what could've been 🥺 press F
F
Too bad the examples were too sandbox-like, like the one with FtpDownloader, PatentJob, and Config. Clearly, the procedural version raises a few concerns: how do you inject dependencies? How do you swap the implementations with test stubs for test isolation?
How to test a procedural program? With a single global flag bit. Dudes... who didn't teach you programming and when did that not happen? ;-)
I think the message passing idea lives on in distributed systems, actor model, etc...but it is operating at higher abstraction level.
Yeah, I think message passing, encapsulation, late binding, all of that just moved a level up with microservices due to the scale. And then services themselves don't need so much code, so less hierarchical procedural style came back.
If anything, the ideas of OOP just scaled up out of a single node; they didn't become less popular.
It's not just procedural programming; it's also the combination with functional programming.
Rust is multi paradigm, it doesn’t have class based OO but does allow vtable based objects implementing interfaces. Just no inheritance.
Is that photo of the Borders in SF near Union?
Procedural programming never went away. OOP was simply a syntactical cloak.
The only thing useful about OOP is encapsulation. The ability to statically guarantee that invariants are maintained by limiting access to values to only certain procedures.
it is absolutely useless
No, *_Polymorphism_* is the most valuable aspect of OOP, imo.
Although, I am seeing a lot of people in the comments saying it's ADTs for code completion (via the dot operator).
This is a misunderstanding of what object-oriented programming is.
Even this guy, who has read a lot about programming, misunderstands Lisp like most people do. Lisp has always been a multiparadigm programming language; it has never described itself as "functional". Functional style was just one of the tool belts that could be used in Lisp, and when SML was made, the FP community left Lisp. It wasn't until Rich Hickey that there was a new Lisp focusing on FP for the very first time.
If you want to name a main programming paradigm for Lisp, it would be Symbolic Programming, not functional and not object-oriented, and CLOS is a symbolic-programming approach to object orientation.
Disillusionment sounds weird in that a lot of people weren't "illusioned" by single-inheritance OOP to begin with.
Also, the return of procedural programming has been happening for about a decade, with talks like this that followed soon: ua-cam.com/video/mrY6xrWp3Gs/v-deo.html
Most OOP is procedural programming. Renaming a procedure a "method" doesn't change that. OOP without inheritance is actually nothing other than what people used to call "libraries".
Pascal the best programming language ever
I don't know about that, but it did have success as a teaching language for a while. I don't know how long that lasted, but I did work on one commercial product written in Pascal: CDC's communications processor for 6000-series computers, in the late 1970s.
We could always go back to flowcharts and assembly language for concise code. You can even write self-modifying code. Who needs a typeless interpreted script that dares to call itself a programming language?
The beginner needs it! I'm strongly in favor of flowcharting, and assembler, but only for a limited problem domain. It is simply a terrible waste of human lifespan to code in assembler for most things.
@JimLecka A beginner to a language will often make assumptions based on their experience with other languages, and those assumptions may have unintended consequences. For example, a C programmer may have difficulty with Python, especially with if statements.
@@99bobcain Amongst other activities, I have taught introductory programming to completely raw beginners, at the rate of 500-1000 people per semester. The very first thing is to give them something simple to copy and type in, like "hello world". About 10% fail and drop at this point. Then show them how to change "hello world" to something else, like their name. Success at this point is their first positive feedback. Then gradually more concepts, some history, and learning by doing simple exercises. It is a long way down the trail to get to concepts like actual bit representations: I am happy if they get to use one (1) numeric type [ideally a default float] and simple character strings, with some control logic. The idea is to get them up to the point where we can introduce them to a real programming language in the next semester.
PHP is a child of Perl, btw; it was invented as just a templating language for Perl ^^
Procedural when it makes sense, FP for the rest
Most if not all really complex systems would be impossible without OOP, because it provides encapsulation: the ability to restrict access to a bundle of data to a small number of well defined operations, and enforce invariants.
The hardware doesn’t care: OOP is there to keep programmers honest. After compilation, of course, what you have is just procedural code.
That's total bullshit. You can do encapsulation simply with namespaces. What really happens with OOP is the opposite: if you restrict a programmer from choosing the right solution, then he will be forced to choose the wrong one, since he has to deliver working code one way or another.
IMHO, C++ is used for big systems such as networked cloud services, while C is used where fast, time-critical efficiency matters, such as real-time programming. They are not substitutes and don't compete.
Everything you can do in C you can also do in C++ and the other way around. The problem with C++ is the learning curve, which is extremely steep. You are unlikely to find enough experienced C++ programmers for a large job, which means that your juniors will cause a lot of trouble by not understanding the language.
@@lepidoptera9337 C++ itself isn't steep. But the OS releasers made it complex and confusing.
@ C++ is not steep? OK... Moving on. ;-)
I don't understand the backlash against OOP
to me, OOP is the natural way of programming, because, well, the world is full of classes and objects
it reminds me of Plato's Theory of Forms
(and functional programming reminds me of Zeno's Monism)
disclaimer: all I know of Plato and Zeno is what I saw in senior high school
another way of looking at OOP is that a program is a machine, with all of its parts (objects) working together to perform some work
for medium to big systems, I don't think there's a better programming paradigm than OOP
on the other hand, I hate deep hierarchies, and especially, I hate virtual methods
Sometimes it goes too far. I see that in Java code bases. The abstraction hides the overall flow to the point where the logic becomes incomprehensible.
@@toby9999 yes, it's true
in my current job I work with a huge C++ codebase, with lots of deep hierarchies, a lot of use of multiple inheritance, everything is virtual ... I hate it
(it's also heavily multithreaded, but it makes sense)
in my previous job (20 years ago) the codebase was C++/MFC, lots and lots of classes *and threads*
that system had been designed by a real architect 😁, not a software architect
someone should write a book called The Zen of OOP
The natural way of programming is how the computer actually works: execute actions on data. Have you compared modern OOP code to old procedural code for the same tasks, in terms of readability, ease of debugging and support, memory usage, and performance? Have you seen projects that could be written in 500-1000 LOC taking 50,000+ LOC and a year to write instead of a week at most? OOP is too much overhead for zero benefit.
I'm speaking from decades of programming experience.
When I started out, it was all about "structured programming." Then came "structured analysis and design" (and by the way, I loved working with DFDs).
Next, object-oriented programming (OOP) arrived on the scene, and maybe functional programming (FP) too? To be honest, I don't have much hands-on experience with FP, but for medium-sized programs, I naturally drift towards OOP.
When it comes to large programs or systems, there's no contest: OOP is the way to go.
Even systems that aren't written in languages designed for OOP often implement OOP principles. A great example is OpenSSL, which uses object-oriented concepts in C.
If you look at the Linux kernel, for instance, you'll find plenty of "objects", though perhaps not formal "classes". Many of these are called "drivers", but other components, like the VFS (Virtual File System) or the VMM (Virtual Memory Manager), can also be thought of as objects.
@@hjxkyw I am also speaking from decades of programming. OOP has its positives in complex systems, but in my experience it should be limited mostly to data objects and module encapsulation, avoiding complex inheritance, dependency injection, and the like as much as possible. That usage is close to what we had as units in Pascal and now have as modules. My view is that treating code that performs actions on data as a set of interconnected, inheritable objects is not the best approach; it leads to many problems and slows developers down, because they end up thinking about how to build the set of objects instead of the code and logic to be written. And if you have that much experience, you've been in inheritance hell many times: trying to understand what the 101st descendant of X does, and why in case X it doesn't do it, only to find that the 35th descendant changed a method and is the one used in this situation, while everything from 1 to 35 and after 36 uses different implementations. Classes and tiny one-line methods everywhere just pollute the code. To read a simple piece of logic you often have to jump across many classes and wonder whether a state change from some other method will alter the behavior. I am fully aware that overuse of SOLID and design patterns is the real problem here, but this is the typical OOP we get in large corporate systems. I just write classes and objects as modules that contain the actions for a given type of data and encapsulate all internal logic. Some use each other, of course, but just by calling each other's methods as messages. So let's say I prefer to stay in the middle between procedural and OOP.
40:55 Closures are equivalent to objects, so it's quite disingenuous to say you didn't use OOP when you use closures!
Closures are like three decades older than the first digital computer, let alone OOP.
Procedural programming never went away. Of course, certain trends and fads appear in the industry, and old ones sometimes come back, but advanced and experienced programmers use the tools and paradigms that are best for a specific task, whether that's OOP, procedural programming, functional programming, or something else. Poor programmers write poor code regardless of the programming paradigm.
It's like listening to Louis C.K. talk about programming. The accent, the cadence, the voice; hell, he even looks like Louie a bit, redhead and all.
I don't know if OOP is the problem. Java is a problem because it forces everything into a class. C++ is a problem because it has evolved into a nightmare of complexity, having failed to fix any of the footguns of C while introducing new footguns of its own. The new "class" in JavaScript is a problem because, well, JavaScript.