This is declarative programming in combination with domain-specific languages. The model aspect of it is really just the application of the general paradigm to separate function and data of your program.
As someone who works in numerical simulation, implementing a feature like this would be game-changing, in the sense that it would allow those unfamiliar with the low-level functionality of the code to define complex systems with relative ease, which is nontrivial. Great video!
I’m not sure what you have in mind exactly, but the Julia SciML project has an interesting take on all of this. Because Julia is homoiconic, they’ve been able to build up some domain-specific languages for simulation and modeling. Now, learning Julia and an embedded DSL at the same time turns out to be kind of a pain, at least for me, but I like the overall direction they’re going with this approach. There are some videos on ModelingToolkit.jl and JuliaSim that may pique your interest.
4th generation languages are great for painting yourself into a corner. Also, flags in Minesweeper mean that you are pretty sure there is a mine there; the question mark means that you're not sure.
Smalltalk and TDD allow you to achieve something similar, and in a much easier way. When you are developing, the model makes itself clear naturally. Smalltalk is the language closest to how the mind works, so it allows you to always think in terms of the problem you are trying to solve, and not the technology you are using to do so (i.e. it minimizes the accidental complexity).
I used Eclipse's GEF + EMF to create model-driven software in my studies more than a decade ago. It works, but there are two big trade-offs:
1) The generated source code is mostly far from human readable, hard to understand, and sub-optimal in a lot of places.
2) When you have a bug in the application but your model is correct, the generator (or source-to-source, S2S, compiler) is the one that produced the incorrectly working source code. You need to fix the generated source code by hand when you can't change the generator (e.g. it's proprietary closed-source software, or the generator is too complex to understand).
The advantages:
- changes are very easy to make
- you can use another generator to generate the source code in another programming language
IMHO it's better to use a widely used general-purpose programming language and implement a DSL (Domain Specific Language) on top of it, like Groovy and Kotlin Script with Gradle. Interpreted scripting languages are the other approach, but an interpreted script language is usually slower than something compiled to fit the hardware.
PS: JVM languages (like Java) are trans-compiled into bytecode (or, like C#, into IL) that is very close to machine code, so it can run on a simplified compiler or interpreter. Bytecode is, afaik, compiled into machine code or interpreted on the fly by the JVM implementation.
There are better alternatives to GMF + GEF nowadays, like Sirius. The issue around producing readable code is a longstanding one, although there are facilities in current languages to do things like automated formatting of the code, or managing imports to be more natural. Sometimes the issue is that rather than a straight up model-to-text transformation, you may really want a transformation to a second model closer to the code, which could be even put through some optimisation, and allow you to write a simpler generator. (Basically, split up the complexity across multiple steps as you would do in any program.) Another approach is to generate code against a runtime library that has been written with specific components in mind. What the modeling approach brings then is a more approachable notation to produce those solutions. And yes, one more approach is that you write an interpreter for that notation yourself, without getting to code generation. MDE is not just to ultimately generate code: you can use the models for validation, simulation, visualisation, documentation, etc.
"you can use another generator to generate the source code in another programming language" I like this point. But seems to be very hard work to maintain and stabilise a large panel of them.
I used EMF this year for my studies and I can say that the experience has not changed. Actually they have changed close to nothing in the last 10 years, with most EMF git repos being dormant or abandoned
But I still think generating the parser/lexer, or using a parser combinator, is a good idea. The effort needed to write a parser by hand is just unnecessary for business DSLs.
@@Volxislapute yes, there has to be a business case for supporting all those alternatives. For instance, think about the code generators inside OpenAPI. Those are not using the tooling shown in this video, but the basic idea is that you describe something at a higher level and then reuse it across multiple target languages.
This is the way embedded systems are taught at UK universities. It's the best way to develop advanced systems that need to be able to deal with rapid feedback for improvement. The restriction keeps the error probability low, which is critical in industries such as semiconductors and defense. There are many more advantages, such as code readability/collaboration, sensor-based design and AI.
Interesting ideas, reminds me of frameworks like TLA+. My worry is that the number of states in a complex program increases exponentially, so I find it hard to believe that such a framework can be used to write a complete piece of software. I'd love to be proven wrong of course! Some components could certainly be written like that. And generally, aspiring to isolate state so that the aforementioned state explosion does not happen in a piece of software is a good rule of thumb in traditional software too. In any case, it will be interesting to see how such frameworks might be used in production in the future.
Well, this is agent-based programming. There is still a need for an "engine" to execute the state diagram and step the simulation for each agent. What I see here is just a hard decoupling between the game logic and the input/output subsystem. So basically, instead of writing a single-purpose program (Minesweeper), we write a configuration (the Minesweeper logic) and an engine able to run such a configuration (the input/output subsystem). But you see, at some point you have to write the routine to display a grid and the routine to accept user inputs. That's part of the engine. The engine is still somewhat coupled to the domain of the original problem (grid, cells and mouse clicks), although it is not tied to the very specific Minesweeper game logic. 3D engines have been using this technique since 1996. The lines of code "saved" by this approach are actually moved into the executing engine. I must admit it may be one of the most efficient ways of coding, since each engine function is orthogonal to any other (e.g. "display the grid", "detect mouse click", "change text", etc.).
This is a great overview, and thank you for your work in the field, Dr. Zschaler. I'm not sure I necessarily agree that a generalized modeling language will *reduce* lines of code unless it's optimized to look for patterns and automatically refactor, but I like where this research is going. I'm pretty convinced that there is a "height" restriction on programming language abstraction - ie how high-level a programming language can be before it starts losing precision. I feel like Python is about there right now. For general purpose or more fundamentally basic applications, this is great because you don't necessarily need to know any details. But when you do need the details, well, you need a mechanism to achieve that. Inline assembly exists for a reason. I've been interested in this type of stuff ever since learning UML and looking at different automated code generation applications based on UML designs. I really hope that research finds an optimal "hybrid" model where programmers can design programs from fundamental, easily understood concepts, but then also dive into the inner-workings of their application seamlessly. I think you're going in that direction, and I like it.
What is being described here is DSLs, which are a technique dating back to the 70s and 80s (see/read SICP)... What is being sold in the industry today as model-driven / model-based engineering is complete and utter BS. I'm a big fan of DSLs. In the end, from customer requirements down to the machine code, it is all a big chain of translations, from one language to another.
To me this feels like being able to think in OOP principles without the extra mental overhead of being distracted by implementation details, issues, possibilities.
At the risk of sounding elitist: do you WANT coding to be easier? Hear me out: in software engineering, it's not just about a small abstraction and then you have a finished program and all of that is great. Like, yeah, you want to reach the goal of fully developed software, but simplifying the process as much as possible helps you in absolutely no way. Your goal is also to understand how the program works in case it bugs out (which is something I can see this doing a LOT). And you also wouldn't be aware of all the fine details like security risks in your code. What you essentially did here is make the program write specific boilerplate, which is fun and all but ultimately not all that useful.
Great to finally see this topic being covered on this channel! I did my PhD in this area and still nobody outside of academia and a few niche companies has heard of it. "Low code" is the practical application in industry. By the way, anyone who's interested in practical tools, check out Epsilon :)
Steffen was basically defining a domain specific modeling language, which is a form of DSL. DSMLs come with some advantages, like having pre-existing tooling to help you create your own editing facilities, validate the models people create, and transform them to other things (e.g. other models, or text).
Yes. Unfortunately, the ancient art of Lisp hacking has mostly been forgotten now, so programmers have to re-invent its many aspects piece-by-piece, mostly in an unordered ad-hoc way...
Back in the 90's we were using Rational Rose w/Booch diagrams (precursor to UML) to build models based on GoF design patterns and round tripping the model/code.
Interesting concept, creating programs from a model written in a higher-level language. It seems similar to Behaviour Driven Development, except that the latter is used for testing and the former for creating whole new pieces of software. Not quite sure I would use it; I still need to understand in more depth what I gain by using it.
Interesting idea, but lots of information is missing. Like, what about:
- performance of the generated code
- dealing with bugs in the code generator
- it's also not clear whether the mini language is declarative or imperative or whatever
- how would this compare to manually written code that has all the rules properly separated from the presentation?
Quick question: would you be able to make a video about intra frames when recording video? Where each frame, on say the Sony a7siii, is encoded individually when that mode is selected; and then, if you wanted to edit that and render it in software, whether the software changes that intra-frame encoding and so defeats the object of filming in intra frames, unless there's a specific setting in some random video editor. Perhaps using a Sony editor as an example.
(First of all, these questions are not just my job but my passion, so I might get animated; please don't hold that against me or read it in that light.)
I see several problems with this approach, even if I understand the need it tries to address. You say that when we code we only give instructions to the machine, but the whole core business of software engineering is to create an interface between the machine and the human in order to meet BOTH sets of constraints. Indeed, this approach answers the problem of the intelligibility of the code perfectly (and this is certainly the most important point of all, but I will come back to it later). But what about the maintainability of the code? The more complex the rules of your program become, the more complicated the maintainability of your state machine will be. At some point it will be extremely difficult to know whether adding or changing a rule will conflict with another, for the simple reason that mixing the semantic and the technical in one and the same model is impossible to resolve. The best argument you could give me at this point is that testing is the answer, and that it's the tests' job to make sure that the components of your application, and the interactions between them, respect the rules and the semantics that you want. But when you say that the rules of the code are hard to read, it is not true! The rules of the code are described in the tests. They are what explains what we code for; at no time should it be the code itself. The code is there to provide an architecture that is intelligible and easily maintainable. So when you say that to change a rule (set a flag) you have to change several files and that this can be complicated, I seriously question your development methodology, because it is precisely the heart of our job to know how to create and design a clear and intelligible architecture. TDD and BDD are approaches that address your problem better than creating a new programming paradigm...
If the goal is only to try to create a paradigm even higher than the object-oriented approach, I don't think it can be done (and this is a very personal opinion that commits only me). The thing is, the object-oriented approach has something magical: its scalability. It is clearly the mind's representation of what a concept is, all laid down on paper. I may be wrong, and you are probably right and may revolutionize the business; in any case, that is exactly what I wish for you. In any case, thank you for sharing the same passion and for trying to make things happen, because there is still a lot to be done in our field and it is difficult to imagine what the state of the art of our profession will be in 10 years.
Having worked on an MDE/MDSD tool many years ago, I feel like the video misses one important part. MDE/MDSD is not so much about creating a DSL but about being able to model and reason about software and enforce rules. E.g. we were able to simulate approximate run-time behaviour, compare architectures, validate that specific attack vectors inherent to specific components had been guarded against using security patterns along the call chain, ensure that data flow followed certain rules (e.g. legislation on PII in health data), etc. That, in my eyes, is the value of MDSD/MDE.
The next level is implementing that concept in mechanical engineering, aka MBSE (Model-Based Systems Engineering). Given that a mechanical system has more than just signal inputs/outputs (signal, material and energy flows), MBSE gets way more complex ^^
I can confirm that the complexity increases. Especially for CAD models it's a challenge to abstract away the technicalities to just inputs/outputs. Creating robust and re-usable parametric models is challenging.
Depends at what level of abstraction you're working. If you're the programmer that has to write the model parser and code generator then you're writing a lot of code.
I'm using MDE for my masters, and the most important aspect is the tool integration with Sirius for instantiating models using a GUI, and the integration with other MDE tools and the Java ecosystem. For the Minesweeper example this wouldn't apply, since each of our model objects has a lot of values. In our models we have recursion of model objects, so having a GUI to visualise boxes and links between them is simpler than reading DSL text that says some component A references some component B. The largest problem encountered is that the way to extend (plugin-style) EMF (Eclipse Modeling Framework) metamodels is very bad, but at least it works.
I like the intent of the additional layer of abstraction to simplify solving problems. I feel like this particular implementation of MBSE still falls short for someone who does not have a software education: the syntax is still difficult to follow for a non-programmer.
"There are things you can do in assembler that you cannot do in a higher level language" Yes, but there are also things you can do in a higher level language that you CAN'T do in assembler. Such as target different architectures using the same source code. Sure, you can probably pragma a lot of stuff in an ASM file but that would be silly at some point. If you needed to do something you can ONLY do in ASM then you would use ASM only where you need it and ship one version of each ASM trick for each architecture while letting the compiler deal with the rest. And that is the beauty of high level languages, you aren't entirely restricted to one language. You can go down the rabbit hole and do things "the hard way" when you need to. And this is long before you start writing code that is meant to alter the behavior of already compiled code, which is a thing that happens a lot in game modifications... But also what happens with malware. Injecting code is always possible unless the language you are using is so daft that it simply doesn't compile what you needed. In which case, the language isn't likely going to get adopted.
If you wrote your first code such that turning off flagging took changes in several files rather than one line, I'd say that was just being a very bad programmer. I don't see how this is different from the many code-generating frameworks that allow you to develop things at a higher level but with massive restrictions and efficiency losses.
It's pretty much the same thing, with the caveat that you build what you need, instead of using something someone else built. I would push back on the efficiency loss though, because humans are kind of lazy, and generated code can be a lot more consistent about doing the Right Thing than any of us can, so the resulting code is often more efficient. You're absolutely right about toggling the flag though, minesweeper is kind of a weak example - but this technique kind of shines in complex domains, so I can't really think of a better example 🤷🏼♀️
Which just proves to me the problem with all these fads. They make it very easy to generate very easy programs. But once you have to dive deeper they become an obstruction, sometimes a breaking obstruction.
Reading through the comments here, it seems to add complications when the generator doesn't do its thing right. If the benefit of this practice is having accessible business logic, why not go for BDD or TDD (or both)? It's a genuine question, not a statement 💡
The right questions are asked at the end. A DSL is cool for writing, but for anyone newly assigned to the project it can be a nightmare to read the DSL code. And there is no Stack Overflow to explain this separate DSL's quirks.
This looks interesting, but I'd like to better understand how this differs from other DSLs (Domain Specific Languages). I guess the scale at which it's used? Usually, a DSL is used as part of a bigger program, but in this case, the DSL models the application as a whole?
Now I wanna teach the game of Life to play mine sweeper.
Jr. engineers always ask me why we do not use code generation from models. My short answer to that question is: because we know how to code. I have never seen a single benefit of code generation (I mean from models, not special cases like boilerplate code).
This reminds me of the AWS Workflow, for example. It abstracts things away so you don't need to know the true ins and outs of how AWS works to implement cloud functions. Or something like the WYSIWYG web editor that Squarespace implements. I kind of see the abstraction chain as MDE - OOP - high-level language - assembly - hardware. (Yes, I know this is very flawed.) You can do more things the lower down the abstraction chain you go, but you need more knowledge and time (therefore money).
Great demo. However, I would love to learn how Dr Zschaler designed this one language grammar that can represent all such games. What was his thought process?
Yeah, this feels like a missing bonus video to me - how do you make a new language that, as in the example, turns descriptions of game-states into real Java code? The concept seems like it could be very useful, but now I'm struggling to google all the right terms to just even get started with tools and tutorials. :)
As usual with these, you always wonder how much could really be implemented with that limited DSL. For example, I suppose you should show how to make some completely different game using the same thing. Solitaire would be a good example since it's the other game that came with Windows. Also, the real benefit of this approach is that when you finally want to ditch Swing, you don't have to rewrite all your games; you just rewrite the engine to run on Compose instead and all the games still run.
My only issue with this approach is using such a common term as "model" in the name. Please indicate what type of model you are using. "Semantic/conceptual"? "Logical"?
Read the dwm source code. Its config.h is kind of its own language: you just implement some function and then put one line inside dwm's array of Keys, which binds the function call to a sequence of buttons pressed.
There must be about a million JavaScript libraries which can do that for you. So what? So nothing. This is like me giving you thirty tubes of oil paint, a canvas and three brushes and telling you to paint the Mona Lisa.
I always like to start with a JSON model of the problem. Think: how can I turn this section of the project into a configuration problem rather than a coding problem? I didn't know it was called model-driven software engineering though. Tools like Swagger also take this approach.
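A minimal example of what "turning it into a configuration problem" can look like, purely hypothetical (invented names, Gson used only for illustration): the board is described as JSON data, and the Java code merely reads it.

```java
// Hypothetical example of "starting from a JSON model": the board is plain
// configuration data and the code only interprets it (Gson used for parsing).
import com.google.gson.Gson;

class BoardConfig {
    int rows;
    int cols;
    int mines;
}

class ConfigDrivenSetup {
    public static void main(String[] args) {
        String json = """
                { "rows": 9, "cols": 9, "mines": 10 }
                """;
        BoardConfig config = new Gson().fromJson(json, BoardConfig.class);
        System.out.println(config.rows + "x" + config.cols
                + " board with " + config.mines + " mines");
    }
}
```

Changing the board then means editing data, not code, which is the small-scale version of the model-driven idea.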
I'm not seeing it as revolutionary as much as he's describing it. Not trying to be critical, but can't we do the same with OOP? Given that we will spend more time and effort creating very dynamic and well-abstracted objects, we can then let them interact with each other based on the program state and some rules. To my humble understanding from this video, MDE is OOP with a slightly more restrictive mindset and different terminology. In case I'm wrong, that means we need more MDE videos, Sean ;) Thanks as always.
"Higher abstraction languages reduce the number of ways to shoot yourself in the foot." True that. I can overrun the bounds of an array in C++, but it is impossible to do that in Mathematica. C++ will happily smash stacks or otherwise overrun array boundaries, while Mathematica will complain "Part::partw: Part of does not exist".
Think outside the box. Go to Notepad and use dots or digits as if they were an actual square themselves. Imagination, please. P = N x 4 - 4, where P = square or pixel. Start with 1 and you realize that 1 represents 0 simultaneously. Now, use odd numbers if you want a shortcut. Hint: odd numbers are shortcuts. Odd numbers should be treated as the i = imaginary number of squares added to the previous set in order to make a bigger box. Let's apply this to quantum and how you can skip or fine-tune the end results in order to go from binary to quantum to whatever the heck we'll use after quantum [which will probably be octa-processing instead of binary or quantum processing]. When someone figures out how to use this, you can thank me later for dumbing down math to a simple equation.
Does this (MDE) language have a mechanism to input intentions? Once you have a tree of intentions for the application, you follow a process of deducing the required contract-language to define in the MDE. This initial step was discussed
"Build a website.". Is that enough of an intention description to build an empty HTML document? Sure. How do you get from that to $100 billion in revenue like Facebook? See the problem? :-)
So, I can write a simpler program in a new language that I have to learn as well. Not much difference with a framework, except for the new language, right?
The idea is that you're the "tool smith" creating the language for someone else to use. Think map editors in games, for instance: someone created that tooling so the creative people could do their job more easily.
Isn't this just object-oriented programming? He could have done this abstraction using one of the core features of Java, and even exported it as a library for game developers to use. There's no need to invent a new language, and doing so unnecessarily doesn't make this a "new interesting way of developing software".
I did a huge MDA project in the past, for more than 3 years. The project failed... why? It failed because no one wants to model the logic. It's tedious, error prone and no fun at all. Programmers want to program in their favourite language and not model abstract stuff...
It kind of depends on the type of system you are making. It does require some upfront work, so it tends to be more effective at "product line" type of work where you notice you're repeatedly doing the same thing (so automating it will save time), or at safety-critical work where you may want to automate things that are error prone (say, wiring sensors to pins on a chip) or support formal methods. Also, modeling is a bit different from traditional programming, and it does take some time to design good abstractions. One aspect to be careful about is when you need to change your abstractions over time and you have models on the old version of the abstraction, for instance. In the good and the bad ways, it's pretty much like being the maintainer of your own language.
‘No one wants to model the logic’... except programmers. In my opinion, programmers are just builders; the stakeholders should be responsible for modelling the logic, but in the real world they are too busy.
In some languages DSLs are used in a very similar manner, but using shallow embeddings which lower to the host language rather than code generation. So one-off languages are fine if they are cheap enough to implement, but code generation is probably too expensive for that?
This is an example of object-oriented programming with engines like those used for video games (graphics, collision, AI/NPC movement, etc.). Another version is MVVM (Model-View-ViewModel), commonly used in Xamarin and MAUI, in which the display, the storage, and the way they interact are separated into different files.
Thanks to Scaler for their support. Visit bit.ly/Scaler_Computerphile to take the free live class
Still can't believe he used Java for the demo without showing the myriad of Model-Driven Development plugins for Eclipse.
It's a LOT easier to show people the power of MDD with graphical programs like Papyrus or another program centering around the Eclipse Modeling Framework (EMF).
One of my early learning projects was a collection of simple text+grid based games where I basically made a very specific game engine, where lambda functions could be attached to locations on the board and the player merely selects different squares. I then made Chess, Checkers, Tic Tac Toe, and Minesweeper on the same board with the same structure, so that each game had its rules defined in a single consistent way. It was still coded in Java, but I have always enjoyed this kind of approach.
Interesting, could you elaborate on how lambda functions helped you achieve the desired abstraction? I would assume it's something like an on-left-click lambda function, an on-drag one, and so on...
What helped you in creating this game engine? I'm curious about this whole subject, and about which problems it's best suited to solve.
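Not the original commenter, but here is roughly the shape such an engine can take: a minimal Java sketch (all names hypothetical, not the poster's actual code) where each square of the board carries a lambda that the engine runs when the player selects it.

```java
// Minimal sketch (hypothetical names) of a grid engine where behaviour is
// attached to squares as lambdas and games differ only in what they attach.
import java.util.function.Consumer;

class GridEngine {
    static class Cell {
        final int row, col;
        boolean revealed;
        Cell(int row, int col) { this.row = row; this.col = col; }
    }

    private final Cell[][] cells;
    private final Consumer<Cell>[][] onSelect;

    @SuppressWarnings("unchecked")
    GridEngine(int rows, int cols) {
        cells = new Cell[rows][cols];
        onSelect = new Consumer[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                cells[r][c] = new Cell(r, c);
    }

    // A game (Minesweeper, Tic Tac Toe, ...) is defined by the lambda it
    // registers for each square.
    void onSelect(int row, int col, Consumer<Cell> action) {
        onSelect[row][col] = action;
    }

    // Called by the UI layer when the player clicks a square.
    void select(int row, int col) {
        Consumer<Cell> action = onSelect[row][col];
        if (action != null) action.accept(cells[row][col]);
    }
}
```

Tic Tac Toe would then attach "place a mark if the cell is empty" to every square, while Minesweeper attaches "reveal the cell and count neighbouring mines"; the engine itself never changes.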
An alternative to this approach would be to create a library with functions that contain and simplify development within a particular domain. In general I think developers prefer this, since all of the broader language features are still available; the simplifications become optional and complementary rather than mandatory and exclusive. If you for some reason want to make a network request to a server to fetch the probability of a minesweeper cell being a mine, and then process the result on the GPU, then you can just do that, whereas a custom domain-specific language may not have those features.
I feel like this is particularly true for languages with a strong type system that allows you to express many high-level concepts such as state machines.
For example, the "library" approach might be more attractive when using Haskell or Rust than if you were using Python or C/C++.
@@nasso_ You do not need strong typing. You just need to be able to create types. Python, for example, will do perfectly.
@@esepecesito Oh, that's right, I didn't know you could do that in Python.
@@nasso_ Why couldn't C++ do that? It's one of the most advanced OOP languages. You could can define pretty much any type, even a frog ;-)
@@Edekje cpp doesn't have algebraic data types the way Haskell does for example
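As a small illustration of "expressing the state machine in the type system": here is a sketch only, using Java 21 sealed interfaces and pattern matching rather than Haskell or Rust (Java being what the video used), with all names invented.

```java
// Sketch: modelling a Minesweeper cell's states as a closed sum type with
// Java 21 sealed interfaces + pattern matching, so invalid states can't exist
// and forgetting a case is a compile error.
sealed interface CellState permits Hidden, Flagged, Revealed {}
record Hidden() implements CellState {}
record Flagged() implements CellState {}
record Revealed(int adjacentMines) implements CellState {}

class Rules {
    // Right click toggles the flag; a revealed cell never changes.
    static CellState toggleFlag(CellState state) {
        return switch (state) {
            case Hidden h   -> new Flagged();
            case Flagged f  -> new Hidden();
            case Revealed r -> r; // exhaustive: adding a new state breaks the build here
        };
    }
}
```

The point is that the compiler, not a runtime check, enforces that every game rule handles every state.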
People were talking about 4th generation languages when I was in school, some 35 years ago. Then it stopped, for good reason.
There is little added value to a new domain specific language compared to a good library in a 3rd generation language, which gives you the best of both worlds.
DSLs are really common nowadays though..
What about Clojure or Scala, would these not be considered 4th gen languages?
@@avidrucker I never heard of Clojure so I looked it up. Wikipedia classifies it as a functional language. Scala supports both OO and functional programming. Neither are domain specific or 4th generation, both are "general purpose" languages. The point of 4th generation is that you model rather than code, you state what you want as an end result rather than dictating steps to a result.
This really reminds me of game engines. They restrict the potential of a language, but I don't need to know how to draw things on screen at all to make some game.
First, thanks for this video and for sharing your knowledge of the MDD approach.
I have been working for some years in that field on industrial projects, and here are my two cents.
First, I would prefer to avoid code generation to transform a model into a program. I would create a model that describes the business requirements, and then create a runtime that interprets this model (see the sketch after this list). It would ease the maintenance of the many projects using your DSL (Domain-Specific Language) and address a few of the concerns described in the comments about code generation. I agree that there are some drawbacks to abstraction, already mentioned in other comments; however, here are the main benefits (as I see them) of using MDD in programs:
* Closer to the specification: the business rules are located in an isolated part of your project, and the way you express them is close to the business you want to support. It gives fewer cases of misinterpretation between user requirements and implementation.
* Fast round trips: if you are using an interpretation architecture (model + runtime), you can make your runtime "change aware", meaning that changing your model changes your application live. This is a really big benefit when interacting with users while trying to capture requirements during meetings.
* Clear separation of concerns (business vs technology): that can give you big leverage for keeping your application up to date with technology.
If all your business rules are written in a DSL in a completely separate part of your project, you can more easily swap the runtime part: for example, replace your application interface technology (say, from Swing to JavaFX), or even create three different runtimes for different platforms (Windows, Linux, cloud, Android, etc.).
* A huge open-source ecosystem (the Eclipse Foundation, with projects such as EMF and all its satellite projects).
* It is a really robust approach used in thousands of industrial projects at massive companies such as Thales, Airbus, NASA and many more. This methodology has been alive for a long time.
Of course, creating the correct DSL and runtime might be more costly than directly creating the application, and sometimes the runtime implementation is not easy at all. But if you think about really complex applications (dozens, hundreds or thousands of components) and the lifespan of the software (long-living systems such as bank information systems, space, aircraft, etc.), this methodology really helps you keep your system alive (easing migration and technology evolution) and clean (clear separation of concerns; a living model = the specification).
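To make the "model + runtime" idea above concrete, here is a very small, purely hypothetical Java sketch (invented names, not the tooling from the video): the rules live in a plain data model, and a generic runtime interprets that model instead of generating code from it.

```java
// Hypothetical sketch of the "model + runtime" architecture: the rules are
// plain data, and a generic runtime interprets them; nothing is generated.
import java.util.List;
import java.util.Map;

record Action(String trigger, String effect) {}                 // e.g. "leftClick" -> "reveal"
record GameModel(int rows, int cols, List<Action> actions) {}   // the living specification

class ModelRuntime {
    private final GameModel model;
    private final Map<String, Runnable> effects;   // the technology side (Swing, JavaFX, ...)

    ModelRuntime(GameModel model, Map<String, Runnable> effects) {
        this.model = model;
        this.effects = effects;
    }

    // Interpreted at run time: editing the model changes behaviour immediately,
    // with no regeneration or recompilation step.
    void handle(String trigger) {
        model.actions().stream()
             .filter(a -> a.trigger().equals(trigger))
             .findFirst()
             .map(a -> effects.get(a.effect()))
             .ifPresent(Runnable::run);
    }
}
```

Swapping Swing for JavaFX would then only mean supplying a different effects map; the model itself stays untouched.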
This video was quite the throwback for me; I remember Steffen's lectures on a Thursday afternoon in my first year of undergrad covering this kind of thing. The Eclipse IDE only doubled the nostalgia :P
In my professional experience the problem is not that programming languages are complicated, but rather that the problem domain is ill defined and/or poorly understood with regard to formalization... i.e. the domain experts know what they are doing, but aren't able to describe the domain well enough, in either natural language, math, UML, or any other form formal enough to express in programming.
I don't see how model-driven software would make a difference there. You might as well compile from UML to code, which should theoretically work as soon as someone produces the first UML specific enough to actually fully describe the problem domain correctly.
But if you describe the problem very specifically and do a UML diagram or whatever, you've basically just done 2x the work with zero interaction with a working program... and when you finally start programming, you'll most certainly run into issues where you have to refactor anyway.
Can a semantic generative adversarial network just process natural language into UML that can then be converted to code in specific business domains? I think a language model like bloom can provide the framework for this semantic generative adversarial network
@@marilynlucas5128 that's how you get skynet
@@marilynlucas5128 Theoretically, you could use AI to generate UML or even code directly. But the key issue remains: the initial description (the natural-language input or the manually made UML) is already incomplete or wrong. Thus, the machine learning system would likely not fully understand what it's actually supposed to model/program, and you would still face significant human effort checking the resulting code for issues, edge cases etc.
I reckon it'd make more sense to just replace the entire design process with an AI: The AI effectively gets the same training any human would, learns the model that way, and then might be able to write a program, just like a human would. Unless you have a very general (and thus expensive) AI you still probably won't be able to tell it to change specific rules of the model, instead having to re-do the entire model learning phase.
In conclusion, I think our best bet is to use AI to assist programmers, so that they can focus on the difficult understanding, formalization and design tasks instead of language-specific details. Basically, just improve IDEs with things like recommendations and auto-generation of common code snippets/patterns.
In my firm we use UML to generate a static data model which in turn is used to auto generate code libraries.
This works surprisingly well, but it does still require a “syntactically valid” UML spec, which means the person writing the rules still needs a degree of technical ability (I’m a business analyst, not a developer, and this remains a rare skill amongst my peers, alas).
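For readers wondering what "auto-generating code libraries from a data model" looks like in miniature, here is a toy sketch (entirely hypothetical, not the commenter's actual toolchain): a generator that turns an entity description into Java source.

```java
// Toy sketch: a data-model description is turned into library code by a
// generator, here emitting a tiny value class as a string.
import java.util.Map;

class ClassGenerator {
    static String generate(String entity, Map<String, String> fields) {
        StringBuilder src = new StringBuilder("public class " + entity + " {\n");
        fields.forEach((name, type) ->
                src.append("    public ").append(type).append(' ').append(name).append(";\n"));
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        // "Customer" with two attributes, as a stand-in for a UML class element.
        System.out.println(generate("Customer", Map.of("id", "long", "name", "String")));
    }
}
```

A real pipeline would of course read the model from UML/XMI rather than a hard-coded map, but the principle of model in, library code out is the same.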
I don’t really understand the trick here. I see how once you have your mini language, there is less code to write for something like minesweeper.. but isn’t there at least just as much code to write for the mini-minesweeper language itself?
It's the no-code trend. There are lots of well funded startups trying to simplify development through strategies similar to this.
It's like object-oriented programming. You are just front-loading the design effort.
I think it's a cool concept and a good way to think about how you implement your code for reusability and maintainability. Personally, I don't see the advantage of making a purpose-built language to implement it; I feel like a programmer could do the same thing with encapsulation and abstraction.
Every programming language has a certain level of expressivity. Sure, they might be Turing complete (i.e. able to express any expressible program), but in order to achieve what the video suggests, your abstraction will still be beholden to the language's syntax. In a sufficiently complex system, that in itself is enough to steer you away from too many awkward abstractions. (Although I do see this happening way too often in the industry and it drives me bonkers.)
If that's not gnarly enough, consider the fact that there generally are no zero-cost abstractions. (Take the example of virtual dispatch in C++ - not zero cost. For more info see the excellent CppCon talk `There Are No Zero-cost Abstractions`.)
Of course, writing a language for the sole purpose of expressing Minesweeper is obviously unscalable, but we can think of slightly more focused programming languages: a systems programming language (Rust, C), or a purpose-built game programming language used to express game-y things (Jai), where the language syntax and memory/CPU utilisation lend themselves to the necessary abstraction driven by the 'model'.
We're already kind of in that territory, although there's still a general trend of adding unnecessary complexity to languages in order to call them 'general purpose'.
Thanks for coming to my TED talk.
He addresses this at the end; it's about reducing the risk of shooting yourself in the foot.
Also, think further: it's a lot easier to communicate with business and analysts as that higher level language is perfect for those users to understand. And it's perfect to auto-generate automated tests to verify your code.
OOP isn't the be-all and end-all; you also have data-driven programming, amongst others.
@@Kheos100 Why not communicate with them in words, a language they can 100% understand? Some marketing manager at Blinking Lights Ltd. is interested in how the light blinking is perceived by the customer, not in the abstract schematic of the blinking-light design. The abstract schematic is only going to confuse the manager and very likely cause them to misunderstand some core functionality of the product.
I love that this research is happening. We had model-driven software engineering in the late 80s through late 90s, especially for business apps. Describe the rules, it generates the code. The tools we used (my fave was called Synon, for old IBM midrange computers) are still maintained. But they went out of favour as low-level, multi-platform, declarative languages (e.g. Java, C++) became the norm. Since then, I kept wondering when the idea of model-driven software would come back. Very satisfying to see where people like Dr. Zschaler have taken it.
Never heard about model driven engineering until my internship. Took me a few weeks to really dig deep and understand how it works, but it's extremely easy to put an app together.
Model-driven development was a nice experiment which failed. Even highly conservative industry branches like automotive or railroad tried it, but dropped it after a decade of issues. The main problem was that the generated code was not maintainable/understandable, and if the source-code generator had an issue you were dependent on the generator supplier. In most cases it was not possible to fix it without support, and most of the time went into finding generator issues instead of debugging the real issue. Furthermore, the generated code had significant drawbacks with regard to resource and timing constraints, and in the end everything with tight constraints was developed manually anyway. Companies at some point also realized that in a lot of cases the problem sits in front of the screen, as somebody with no real programming experience was clicking stuff together without any knowledge of the consequences. Now it is back to C or C++ and everything works smoothly. Invest in good developers instead of eye-candy tools.
Model-driven development is definitely still used in some of those "conservative" industry branches you mentioned.
That's more a warning about 3rd party dependencies than model driven development.
People who want and can program already know a general-purpose language. For them, model-driven development means needing to learn a DSL and feeling restricted.
People who don't want to or can't program will also not program in a DSL.
As others have pointed out this approach can only be implemented for very well understood problem domains. For everything else, the classic book "Domain Driven Design" by Eric Evans tells you how to slowly work your way through a less well understood problem domain to come up with a software design that contains a core domain component that is as descriptive and valuable as the game example here. But it usually takes a long time to achieve such a result.
Well, Eric Evans actually created a DSL to express information structures. His DSL's language elements are things like classes, attributes, association types, root types, etc., plus the rules (syntax) for how you can combine those elements. Since he did this in the early 2000s, he did not create a formal (external) DSL with a syntax and an editor like the one demonstrated in this video, but defined patterns in Java to represent the elements of his DSL.
DSL frameworks like Xtext for Eclipse allow you to create a formal, textual syntax for Evans' language in an afternoon and then generate a parser, editor, outline, etc from that syntax at the press of a button. Within a few days, you'll implement validations and code generators for any target language you like. Then you can describe your domain according to "Domain Driven Design" and have the code generated for it!
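To make the "describe the domain, generate the code" step concrete, here is a minimal hand-rolled sketch in plain Java - not Xtext's own API, and all class names are invented for illustration. Xtext/EMF would hand you the parsed model and the editor; the generator you then write is conceptually just a walk over model objects that emits source text:

```java
import java.util.List;

// Hand-rolled illustration of the model-to-code step (hypothetical classes).
// Real MDE tooling parses your DSL into a model like this for you; the
// generator you write then walks the model and emits source text.
public class TinyGenerator {

    record Attribute(String name, String type) {}
    record Entity(String name, List<Attribute> attributes) {}

    static String generate(Entity e) {
        StringBuilder src = new StringBuilder("public class " + e.name() + " {\n");
        for (Attribute a : e.attributes()) {
            src.append("    private ").append(a.type()).append(' ')
               .append(a.name()).append(";\n");
        }
        return src.append("}\n").toString();
    }

    public static void main(String[] args) {
        Entity cell = new Entity("Cell", List.of(
                new Attribute("mined", "boolean"),
                new Attribute("flagged", "boolean")));
        System.out.println(generate(cell)); // emits a small Java class derived from the model
    }
}
```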
There is a reason why 4GL fell out of favor for 3GL. The fallacy is that someone non-technical can effectively use a model language. You'll just end up with a tech that no developer wants to touch.
Time and time again it turns out you need smart people to solve complex problems.
Ok, I'll buy, provided:
- I can create any kind of grid-based board game (Snakes and Ladders, Chess, Draughts/Checkers, Reversi/Othello, Scrabble, etc., not just Minesweeper) using that configuration / DSL / modelling language;
- I shouldn't have to modify the runtime engine to create new games;
- The game-specific code should be orders of magnitude less complicated than the runtime engine. It should not blow up and become another monster. (Apache Web Server configuration files come to mind.)
To be sure, each game will have its own images to be displayed in the grid, and these will be provided as part of the game-specific code.
So what this video is basically saying is that I should write a parser for a custom language that simplifies further programming of that specific program, right?
this video just shows off a toy this person has made and is proud of
Basically the definition of a DSL :D
Seems like it
DSL
Or just write a library/package, or a DSL
It simply is one abstraction layer too far for 99% of cases that are not games. For games, having a scripting language is pretty handy. I wrote a couple of those for fun. They are particularly necessary for adventure games, but for anything else, this concept turns into spaghetti in no time.
It is an illusion that you can perfectly capture any domain 100% in an abstraction language. At best 80% perhaps, but when you hit that 20% part, your abstraction has to become twice as complicated because you need an API on both sides that can talk to each other. The software that I use for interactive panoramas suffers from this a lot. The amount of code needed to stitch together the domain-specific scripts and the custom stuff is enormous.
I believe that the Model Driven approach is more useful in domain and business driven solutions than generic solutions. Based on my experience working in my organisation, I believe that this framework should be implemented, as it could save significant time in developing new solutions and maintaining existing dev work.
Hello Aditya, I have a project related to the Model Driven approach. Would you mind helping me? I'll pay.
Reminds me of hardware description languages like Verilog and VHDL, the way to design the hardware modules as a part of a circuit is model driven.
more like the auto-router for PCB tracing in EDA software, which is hopeless for anything worthy of production...
This is declarative programming in combination with domain-specific-languages. The model aspect of it is really just the application of the general paradigm to separate function and data of your program.
As someone who works in numerical simulation, implementing a feature like this would be game changing in the sense that it would allow those unfamiliar with the low level functionally of the code to define complex systems with relative ease which is nontrivial. Great video!
Always happy to chat ;-)
I'm not sure what you have in mind exactly, but the Julia SciML project has an interesting take on all of this. Because Julia is homoiconic, they've been able to build up some domain-specific languages for simulation and modeling. Now, learning Julia and an embedded DSL at the same time turns out to be kind of a pain - at least for me - but I like the overall direction they're going with this approach. There are some videos on ModelingToolkit.jl and JuliaSim that may pique your interest.
4th generation languages are great to put yourself into a corner
Also, flags in Minesweeper mean that you are pretty sure that there is a mine there; the question mark means that you're not sure.
Smalltalk and TDD allow you to achieve something similar, and in a much easier way. When you are developing, the model makes itself clear naturally. Smalltalk is the language which is closest to how the mind works, so it allows you to always think in terms of the problem you are trying to solve, and not the technology which you are using to do so (i.e. it minimizes the accidental complexity).
That zoom-in at the system malfunction at 8:05 killed me.
Another small step toward programming with natural languages
I used Eclipse's GEF + EMF to create model-driven software in my studies more than a decade ago. It works, but there are 2 big trade-offs:
1) The generated source code is mostly far from being human-readable, hard to understand and sub-optimal in a lot of places.
2) When you have a bug in the application but your model is correct, the generator (or source-to-source / S2S compiler) is the one that produced the malfunctioning source code. You need to fix the generated source code by hand when you can't change the generator (e.g. it's proprietary closed-source software, or the generator is too complex to understand).
The advantages:
- changes are very easy to do
- you can use another generator to generate the source code in another programming language
IMHO it's better to use a widely used general-purpose programming language and implement a DSL (Domain Specific Language) on it, like Groovy and Kotlin Script with Gradle (see the sketch after this comment).
Scripting-language interpreters are the other approach, but an interpreted script language is usually slower than something compiled to fit the hardware.
PS: JVM languages (like Java) are compiled into bytecode (or, like C#, into IL) that is fairly close to machine code, so it can run on a simplified compiler or interpreter. Bytecode is, AFAIK, compiled into machine code or interpreted on the fly by the JVM implementation.
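For what it's worth, here is a minimal sketch (in plain Java, with made-up names) of what that embedded-DSL route can look like: the game rules read almost like a little rules language, but they are just method calls and lambdas, so the whole host language stays available around them:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// Hypothetical embedded ("internal") DSL sketch: rules are plain Java objects.
class GameRules<S> {
    private record Rule<T>(String name, Predicate<T> when, UnaryOperator<T> then) {}

    private final List<Rule<S>> rules = new ArrayList<>();

    // Fluent entry point: "when this holds, do that".
    GameRules<S> rule(String name, Predicate<S> when, UnaryOperator<S> then) {
        rules.add(new Rule<>(name, when, then));
        return this;
    }

    // Apply every matching rule once, in declaration order.
    S apply(S state) {
        for (Rule<S> r : rules) {
            if (r.when().test(state)) state = r.then().apply(state);
        }
        return state;
    }
}

class MinesweeperRulesDemo {
    record Cell(boolean mined, boolean revealed, boolean exploded) {}

    public static void main(String[] args) {
        GameRules<Cell> rules = new GameRules<Cell>()
            .rule("reveal safe cell",
                  c -> !c.mined() && !c.revealed(),
                  c -> new Cell(false, true, false))
            .rule("reveal mined cell",
                  c -> c.mined() && !c.revealed(),
                  c -> new Cell(true, true, true));

        System.out.println(rules.apply(new Cell(true, false, false))); // exploded = true
    }
}
```

The trade-off discussed in the video still applies: you get the host language for free, but you give up the dedicated editor, validation and multi-target generation that an external DSL toolchain provides.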
There are better alternatives to GMF + GEF nowadays, like Sirius. The issue around producing readable code is a longstanding one, although there are facilities in current languages to do things like automated formatting of the code, or managing imports to be more natural.
Sometimes the issue is that rather than a straight up model-to-text transformation, you may really want a transformation to a second model closer to the code, which could be even put through some optimisation, and allow you to write a simpler generator. (Basically, split up the complexity across multiple steps as you would do in any program.)
Another approach is to generate code against a runtime library that has been written with specific components in mind. What the modeling approach brings then is a more approachable notation to produce those solutions.
And yes, one more approach is that you write an interpreter for that notation yourself, without getting to code generation. MDE is not just to ultimately generate code: you can use the models for validation, simulation, visualisation, documentation, etc.
"you can use another generator to generate the source code in another programming language" I like this point.
But it seems to be very hard work to maintain and stabilise a large range of them.
I used EMF this year for my studies and I can say that the experience has not changed. Actually they have changed close to nothing in the last 10 years, with most EMF git repos being dormant or abandoned
But I still think generating the parser/lexer, or using a parser combinator, is a good idea. The effort needed to write a parser without it is just unnecessary for business DSLs.
@@Volxislapute yes, there has to be a business case for supporting all those alternatives. For instance, think about the code generators inside OpenAPI. Those are not using the tooling shown in this video, but the basic idea is that you describe something at a higher level and then reuse it across multiple target languages.
This is the way embedded systems are taught at UK universities. It's the best way to develop advanced systems that need to be able to deal with rapid feedback for improvement. The restriction keeps the error probability low, which is critical in industries such as semiconductors and defence. There are many more advantages, such as code readability/collaboration, sensor-based design and AI.
Interesting ideas, reminds me of frameworks like TLA+. My worry is that the number of states in a complex program increases exponentially, so I find it hard to believe that such a framework can be used to write a complete piece of software. I'd love to be proven wrong of course! Some components could certainly be written like that. And generally, aspiring to isolate state so that the aforementioned state explosion does not happen is a good rule of thumb in traditional software too. In any case, it will be interesting to see how such frameworks might be used in production in the future.
Well, this is agent-based programming. There is still a need for an "engine" to execute the state diagram and step the simulation for each agent. What I see here is just a hard decoupling between the game logic and the input/output subsystem. So basically, instead of writing a single-purpose program (Minesweeper) we write a configuration (Minesweeper logic) and an engine able to run such a configuration (input/output subsystem). But you see, at some point you have to write the routine to display a grid and the routine to accept user input. That's part of the engine. The engine is still somehow coupled to the domain of the original problem (grid, cells and mouse clicks), although it is not tied to the very specific Minesweeper game logic. 3D engines have been using this technique since 1996.
The lines of code "saved" by this approach are actually moved into the executing engine.
I must admit it may be one of the most efficient ways of coding, since each engine function is orthogonal to every other (e.g. "display the grid", "detect mouse click", "change text" etc.).
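A toy sketch of that split, with made-up names: the engine owns the grid and the click dispatch, and a particular game is just the set of handlers it is configured with - which is exactly where the "saved" lines of code end up living:

```java
import java.util.HashMap;
import java.util.Map;

// Toy engine/configuration split (hypothetical names, console only).
// The engine knows about cells and clicks; it knows nothing about Minesweeper.
class GridEngine {
    interface CellHandler { void onClick(int x, int y); }

    private final Map<Long, CellHandler> handlers = new HashMap<>();

    void configure(int x, int y, CellHandler handler) {
        handlers.put(key(x, y), handler);
    }

    // In a real engine this would be driven by mouse events from the I/O subsystem.
    void click(int x, int y) {
        CellHandler h = handlers.get(key(x, y));
        if (h != null) h.onClick(x, y);
    }

    private static long key(int x, int y) { return ((long) x << 32) | (y & 0xffffffffL); }
}

class MinesweeperConfigDemo {
    public static void main(String[] args) {
        GridEngine engine = new GridEngine();
        // The "Minesweeper logic" is pure configuration:
        engine.configure(0, 0, (x, y) -> System.out.println("safe cell revealed"));
        engine.configure(1, 1, (x, y) -> System.out.println("boom"));
        engine.click(1, 1); // prints "boom"
    }
}
```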
This is a great overview, and thank you for your work in the field, Dr. Zschaler. I'm not sure I necessarily agree that a generalized modeling language will *reduce* lines of code unless it's optimized to look for patterns and automatically refactor, but I like where this research is going. I'm pretty convinced that there is a "height" restriction on programming-language abstraction - i.e. how high-level a programming language can be before it starts losing precision. I feel like Python is about there right now.
For general purpose or more fundamentally basic applications, this is great because you don't necessarily need to know any details. But when you do need the details, well, you need a mechanism to achieve that. Inline assembly exists for a reason.
I've been interested in this type of stuff ever since learning UML and looking at different automated code generation applications based on UML designs. I really hope that research finds an optimal "hybrid" model where programmers can design programs from fundamental, easily understood concepts, but then also dive into the inner-workings of their application seamlessly. I think you're going in that direction, and I like it.
What is being described here is DSLs, which is a technique dating back to the 80s or 70s (see/read SICP)... What the industry today sells as model-driven / model-based engineering is complete and utter BS. I'm a big fan of DSLs. In the end, from customer requirements down to the machine code, it is all a big chain of translations from one language to another.
Exactly. The video's title should be changed because I know all about the DSLs craze and thought this was going to be about TLA+ etc.
To me this feels like being able to think in OOP principles without the extra mental overhead of being distracted by implementation details, issues, possibilities.
Planning out required classes in a UML diagram is always easier than actually implementing it in code!
At the risk of sounding elitist:
Do you WANT coding to be easier? Hear me out: in software engineering, it's not just about a small abstraction and then you have a finished program and all of that is great. Like, yeah, you want to reach the goal of fully developed software, but it helps you in absolutely no way to simplify the process as much as possible. Your goal is also to understand how the program works in case it bugs out (which is something I can see this do a LOT). And also, you wouldn't be aware of all the fine details like security risks in your code.
What you essentially did here is make the program write specific boilerplate, which is fun and all but ultimately not all that useful.
TL;DR General Purpose Languages >>>>> DSLs!?!
Great to finally see this topic being covered on this channel! I did my PhD in this area and still nobody outside of academia and a few niche companies has heard of it. "Low code" is the practical application in industry. By the way, anyone who's interested in practical tools, check out Epsilon :)
I can't find anything besides the Greek letter and a construction company. Could you be more specific?
@@MaxDiscere you might be looking for "eclipse epsilon"
I've just finished a course on this and literally all of my classmates absolutely hated it, myself included.
What did you find disagreeable / unpleasant / painful about it?
Isn't this just what lisp does? Creating domain-specific languages for everything?
pretty much yeah
Most progress in programming languages has just been replicating Lisp features in imperative languages, in a way.
Steffen was basically defining a domain specific modeling language, which is a form of DSL. DSMLs come with some advantages, like having pre-existing tooling to help you create your own editing facilities, validate the models people create, and transform them to other things (e.g. other models, or text).
Yes. Unfortunately, the ancient art of Lisp hacking has mostly been forgotten now, so programmers have to re-invent its many aspects piece-by-piece, mostly in an unordered ad-hoc way...
Back in the 90's we were using Rational Rose w/Booch diagrams (precursor to UML) to build models based on GoF design patterns and round tripping the model/code.
declarative vs imperative
Interesting concept, creating programs from a model created in a higher-level language. It seems similar to Behaviour-Driven Development, except that the latter is used for testing and the former for creating whole new pieces of software. Not quite sure I would use it; I still have to understand in more depth what I gain by using it.
Interesting idea, but lots of information is missing. Like, what about:
- performance of the generated code
- dealing with bugs in the code generator
- it's also not clear whether the mini language is declarative or imperative or whatever
- how would this compare to manually written code that has all the rules properly separated from the presentation?
The more bespoke & speedier the language, the harder the job interview to get to use the language.
Quick question: would you be able to make a video about intra frames when recording video? Where each frame on, say, the Sony a7S III, when selected, is encoded individually - and then, if you wanted to edit that and render it in software, whether the software changes that intra frame and so defeats the object of filming in intra-frame mode, unless there's a specific setting in some random video editor. Perhaps using a Sony editor as an example.
(First of all, these questions are less my job than my passion, so I might get animated, but please don't take it into account or put it in that context.)
I see several problems in this approach even if I understand the need it tries to address.
You say that we code by only applying instructions to the machine. But the whole core business of software engineering is to create an interface between the machine and the human in order to meet BOTH these constraints.
Indeed, this approach perfectly answers the problem of the intelligibility of the code (and this is certainly the most important point of all, but I will come back to it later). But what about the maintainability of the code? The more complex the rules of your program become, the more complicated the maintainability of your state machine will be. At some point it will be extremely difficult to know whether adding or changing a rule will conflict with another, for the simple reason that mixing the semantic and the technical in one and the same model is a problem that cannot be solved.
So the best argument you could give me at this point is that testing is the answer, and that it's the tests' job to make sure that the components of your application, and the interactions between them, respect the rules and the semantics that you want. But when you say that the rules of the code are hard to read, it is not true! The rules of the code are described in the tests. They are the ones that explain what we code for. But at no time should that be the code's job. The code's job is to have an architecture that is intelligible and easily maintainable. So when you say that to change a rule (set a flag) you have to change several files and that this can be complicated, I seriously question your development methodology. Because it is precisely the heart of our job to know how to create and design a clear and intelligible architecture.
TDD and BDD are approaches that better address your problem than having to create a new programming paradigm...
If the goal is only to try to create a new paradigm even higher than the Object approach, I don't think it can be possible (and this is a very personal opinion that commits no one but me). The thing is, the Object approach has something magical: its scalability. For it is clearly the mind's representation of what a concept is, all laid down on paper.
I may be wrong, but you are probably right and can revolutionize the business. In any case, that is all I wish for you.
In any case, thank you for sharing the same passion and for trying to make things happen. Because there is still a lot to be done in our field and it is difficult to imagine what the state of the art of our profession will be in 10 years.
Having worked on an MDE / MDSD tool many years ago, I feel like the video misses one important part. MDE / MDSD is not so much about creating a DSL but about being able to model and reason about software and enforce rules. E.g. we were able to simulate approximate run-time behaviour, compare architectures, validate that specific attack vectors that are inherent to specific components have been guarded against using security patterns along the call chain, ensure that data flow was following certain rules (e.g. legislation on PII or health data), etc. That, in my eyes, is the value of MDSD / MDE.
The next level is implementing that concept in mechanical engineering, aka MBSE - Model-Based Systems Engineering.
Given that a mechanical system has more than just signal inputs/outputs => signal, material and energy flows, MBSE gets way more complex^^
I can confirm that the complexity increases. Especially for CAD models it's a challenge to abstract away the technicalities to just inputs/outputs. Creating robust and re-usable parametric models is challenging.
I'm still waiting for a device that can read my mind and then build what I want.
Depends at what level of abstraction you're working. If you're the programmer that has to write the model parser and code generator then you're writing a lot of code.
I'm using MDE for my masters, and the most important aspect is the tool integration: Sirius for instantiating models using a GUI, and integration with other MDE tools and the Java ecosystem.
For the Minesweeper example each model object has a lot of values, so this wouldn't apply.
In our models we have recursivity of model objects so having a GUI to visualise boxes and links between them is simpler than reading DSL text that says some component A references some component B.
The largest problem encountered is that the way to extend (plugin style) EMF (eclipse modeling framework) metamodels is very bad but at least it works.
I like the intent of the additional layer of abstraction to simplify solving problems. I feel like this particular implementation of MBSE still falls short for someone who does not have a software education. The syntax is still difficult to follow for a non-programmer.
in the real world it's hard to convince them upfront design is worth anything, so I'll wait until April before I even suggest this
"There are things you can do in assembler that you cannot do in a higher level language"
Yes, but there are also things you can do in a higher level language that you CAN'T do in assembler. Such as target different architectures using the same source code.
Sure, you can probably pragma a lot of stuff in an ASM file but that would be silly at some point.
If you needed to do something you can ONLY do in ASM then you would use ASM only where you need it and ship one version of each ASM trick for each architecture while letting the compiler deal with the rest.
And that is the beauty of high level languages, you aren't entirely restricted to one language. You can go down the rabbit hole and do things "the hard way" when you need to.
And this is long before you start writing code that is meant to alter the behavior of already compiled code, which is a thing that happens a lot in game modifications... But also what happens with malware.
Injecting code is always possible unless the language you are using is so daft that it simply doesn't compile what you needed. In which case, the language isn't likely going to get adopted.
If you wrote your first code such that turning off flagging took changes in several files rather than one line, I'd say that was just being a very bad programmer. I don't see how this is different from the many code-generating frameworks that allow you to develop things at a higher level but with massive restrictions and efficiency losses.
It's pretty much the same thing, with the caveat that you build what you need, instead of using something someone else built.
I would push back on the efficiency loss though, because humans are kind of lazy, and generated code can be a lot more consistent about doing the Right Thing than any of us can, so the resulting code is often more efficient.
You're absolutely right about toggling the flag though, minesweeper is kind of a weak example - but this technique kind of shines in complex domains, so I can't really think of a better example 🤷🏼♀️
Which just proves to me the problem with all these fads. They make it very easy to generate very easy programs. But once you have to dive deeper they become an obstruction, sometimes a breaking obstruction.
Fantastic video!
Reading through the comments here, it seems to add complications when the generator doesn't do its thing right. If the benefit of that practice is having accessible business logic, why not go for BDD or TDD (or both)? It's a real question I'm asking here, not a statement 💡
Interesting topic! Exciting area to explore for sure
Minesweeper and solitaire taught me how to use a mouse back in 1993. I still prefer the command line, though.
Do I know what minesweeper is? Well, I actually clicked the vid specifically because I saw minesweeper in the thumbnail XD
The right questions are asked at the end. A DSL is cool for writing, but for anyone newly assigned to the project it can be a nightmare to read the DSL code. And there is no Stack Overflow to explain this separate DSL's quirks.
This looks interesting, but I'd like to better understand how this differs from other DSLs (Domain-Specific Languages) - I guess the scale at which it's used? Usually, a DSL is used as part of a bigger program, but in this case, the DSL models the application as a whole?
Sorry for only responding now. Yes, in effect this is about driving as much as possible from models rather than just using some configuration files.
This looks like an intersection between DDD and DSL
Software people stop inventing new layers of abstraction challenge (IMPOSSIBLE)
This seems like the next step after "behavior driven development" BDD which extended the field of "test driven development" TDD.
Sounds like a Carl Barks idea about the future in Duckburg..., where the toys play with themselves and the kids just watch...
Some of this reminds me of functional programming.
Now I wanna teach the game of Life to play mine sweeper.
Jr. engineers always ask me why we do not use code generation from models. My short answer to that question is: because we know how to code. I have never seen a single benefit of code generation (I mean from models, not the special case of boilerplate code).
Visual Basic has been able to do this exact thing for a decade.
This reminds me of the AWS Workflow, for example. It abstracts things away so you don't need to know the true ins and outs of how AWS works to implement cloud functions. Or something like a WYSIWYG web editor such as Squarespace.
I kind of see the abstraction chain as MDE - OOP - High level language - Assembly - hardware. (Yes I know this is very flawed).
You can do more things the lower down the abstraction chain you go but you need more knowledge and time (therefore money)
This feels like it's really far along the Configuration Complexity Clock.
My thoughts exactly. It's a sneaky trap and knowing when to invent a DSL is where the skill lies.
Great demo. However, I would love to learn how Dr Zschaler designed this one language grammar that can represent all such games. What was his thought process?
Yeah, this feels like a missing bonus video to me - how do you make a new language that, as in the example, turns descriptions of game-states into real Java code? The concept seems like it could be very useful, but now I'm struggling to google all the right terms to just even get started with tools and tutorials. :)
@@AileTheAlien it's called DSL or Domain specific language
As usual with these, you always wonder how much could really be implemented with that limited DSL. For example, I suppose you should show how to make some completely different game using the same thing. Solitaire would be a good example, since it's the other game that came with Windows.
Also the real benefit of this approach is that when you finally want to ditch Swing, you don't have to rewrite all your games, you just rewrite the engine to run on Compose instead and all the games still run.
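Sketching that idea in plain Java (interface and class names are made up): if the games only ever talk to an abstract renderer, swapping Swing for another toolkit means writing one new renderer rather than touching the games:

```java
import javax.swing.JLabel;

// Hypothetical renderer abstraction: game code depends only on BoardRenderer,
// so the UI toolkit can be replaced by supplying a different implementation.
interface BoardRenderer {
    void showCell(int x, int y, String symbol);
}

class SwingRenderer implements BoardRenderer {
    @Override
    public void showCell(int x, int y, String symbol) {
        // A real engine would place this label in a JFrame grid; printing keeps
        // the sketch self-contained and runnable.
        JLabel label = new JLabel(symbol);
        System.out.println("Swing cell (" + x + "," + y + "): " + label.getText());
    }
}

class FlagDemo {
    public static void main(String[] args) {
        BoardRenderer renderer = new SwingRenderer(); // swap in a different toolkit here
        renderer.showCell(2, 3, "F");                 // the game never imports Swing itself
    }
}
```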
Excellent video, but how are you translating or generating the Java code from the modelling language?
My only issue with this approach is using such a common term as "model" in the name. Please indicate what type of model you are using. "Semantic/conceptual"? "Logical"?
Domain Driven Design + Data Oriented Design + high level, general purpose OOP languages == Model Driven software engineering?
Read the dwm source code. Its config.h is kinda its own language: you just implement some function and then put one line inside dwm's array of Keys, which binds the function call to the sequence of buttons pressed.
There must be about a million JavaScript libraries which can do that for you. So what? So nothing. This is like me giving you thirty tubes of oil paint, a canvas and three brushes and telling you to paint the Mona Lisa.
I always like to start with a JSON model of the problem. Think: how can I turn this section of the project into a configuration problem rather than a coding problem? Didn't know it was called model-driven software engineering, though. Tools like Swagger also take this approach.
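A small hedged example of that "configuration problem" framing: the board layout lives in a JSON model and the program just reads it. The field names are invented for illustration, and Jackson is simply one common JSON library, not something the video prescribes:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// The "model" is plain data; changing the game means editing JSON, not code.
public class BoardConfigLoader {
    public static void main(String[] args) throws Exception {
        String json = """
                { "rows": 9, "cols": 9, "mines": 10 }
                """;
        JsonNode model = new ObjectMapper().readTree(json);
        int rows  = model.get("rows").asInt();
        int cols  = model.get("cols").asInt();
        int mines = model.get("mines").asInt();
        System.out.printf("Building a %dx%d board with %d mines%n", rows, cols, mines);
    }
}
```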
Gerald Jay Sussman talks about it in his recent book Software Design for Flexibility.
MDE sounds a lot like a framework.
very refreshing approach. great advice
To me it seems that low-code and no-code solutions are what you call Model Driven Software.
It looks like the video was cut off and a part 2 seems pending.
So, domain-driven design?
Suggestion: package managers dependency solver
I'm not seeing it as being as revolutionary as he's describing it.
Not trying to be a critic, but can't we do the same with OOP? Given that we spend more time and effort creating very dynamic and well-abstracted objects, then let them interact with each other based on the program state and some rules.
To my humble understanding from this video, MDE is OOP with a bit more restrictive mindset and different terminology.
In case I'm wrong, that means we need more MDE videos Sean ;) thanks as always
How do you handle the problem of state explosion? Hierarchical state machines?
Higher-abstraction languages reduce the number of ways to shoot yourself in the foot. True that, I say. I can overrun the bounds of an array in C++ but it is impossible to do that in Mathematica. C++ will happily smash stacks or otherwise overrun array boundaries, while Mathematica will complain "Part::partw: Part of does not exist".
One look at this and I know that the MinesweeperFrame warning is that the guid isn't reimplemented :D
Think outside the box. Go to notepad and use dots or digits as if they were actual squares themselves. Imagination, please.
P = N x 4 - 4
P = Square or Pixel
Start with 1 and you realize that 1 represents 0 simultaneously. Now, use odd numbers if you want a shortcut. Hint: odd numbers are shortcuts. Odd numbers should be treated as the i = imaginary number of squares added to the previous set in order to make a bigger box.
Let's apply this to quantum and how you can skip or fine-tune the end results in order to go from binary to quantum to whatever the heck we'll use after quantum [which will probably be octa-processing instead of binary processing or quantum processing].
When someone figures out how to use this, you can thank me later for dumbing down math to a simple equation.
Does this (MDE) language have a mechanism to input intentions?
Once you have a tree of intentions for the application, you follow a process of deducing the required contract-language to define in the MDE.
This initial step was discussed
"Build a website.". Is that enough of an intention description to build an empty HTML document? Sure. How do you get from that to $100 billion in revenue like Facebook? See the problem? :-)
Can a semantic generative adversarial network just process natural language into UML that can then be converted to code in specific business domains? I think a language model like BLOOM can provide the framework for this semantic generative adversarial network.
So, I can write a simpler program in a new language that I have to learn as well. Not much difference with a framework, except for the new language, right?
The idea is that you're the "tool smith" creating the language for someone else to use. Think map editors in games, for instance: someone created that tooling so the creative people could do their job more easily.
You're writing a meta-language specific to your needs.
Isn't this just object-oriented programming? He could have done this abstraction using one of the core features of Java, and even exported it as a library for game developers to use. There's no need to invent a new language, and doing so unnecessarily doesn't make this a "new interesting way of developing software".
One of the sickest videos I've ever seen! This guy is spitting straight gold
Isn't this a different way of describing a DSL (domain-specific language)?
Did a huge MDA project in the past, for more than 3 years.
The project failed... why?
It failed because no one wants to model the logic. It's tedious, error-prone and no fun at all.
Programmers want to program in their favourite language, not model abstract stuff...
It kind of depends on the type of system you are making. It does require some upfront work, so it tends to be more effective at "product line" type of work where you notice you're repeatedly doing the same thing (so automating it will save time), or at safety-critical work where you may want to automate things that are error prone (say, wiring sensors to pins on a chip) or support formal methods.
Also, modeling is a bit different from traditional programming, and it does take some time to design good abstractions. One aspect to be careful about is when you need to change your abstractions over time and you have models on the old version of the abstraction, for instance. In the good and the bad ways, it's pretty much like being the maintainer of your own language.
That's why people don't like doing testing :D
'No one wants to model the logic' except programmers. In my opinion, programmers are just builders; the stakeholders should be responsible for modelling the logic, but they are too busy in the real world.
In some languages DSLs are used in a very similar manner, but using shallow embeddings which lower to the host language rather than code generation. So one-off languages are fine if they are cheap enough to implement, but code generation is probably too expensive for that?
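Roughly, that distinction looks like this in Java (names invented for illustration): a shallow embedding lowers a rule straight to a host-language function, while a deep embedding keeps it as data that a generator or analyser could consume later:

```java
import java.util.function.IntPredicate;

public class EmbeddingDemo {

    // Shallow embedding: the DSL "term" is just an executable host-language function.
    static IntPredicate adjacentMinesAtLeast(int n) { return count -> count >= n; }

    // Deep embedding: the same rule kept as a value (a one-node "AST").
    record AdjacentMinesAtLeast(int n) {
        boolean eval(int count) { return count >= n; }
        String toPseudoCode()   { return "adjacentMines >= " + n; }
    }

    public static void main(String[] args) {
        IntPredicate shallow = adjacentMinesAtLeast(2);
        AdjacentMinesAtLeast deep = new AdjacentMinesAtLeast(2);

        System.out.println(shallow.test(3));     // true - runs directly in the host language
        System.out.println(deep.eval(3));        // true - interpreted from the model
        System.out.println(deep.toPseudoCode()); // the data form could also feed a code generator
    }
}
```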
This is an example of object-oriented programming with engines like those used for video games (graphics, collision, AI/NPC movement, etc.). Another version is MVVM (Model, View, ViewModel), commonly used in Xamarin and MAUI, in which the display, the storage, and the way they interact are separated into different files.
The video.
I built a poorly designed project with a bad file structure.
I fixed it with a better state management system.