From another angle, I think server components and HTMX make web programming way more accessible to people like me who come from a non-web background. Unlike React and many other frameworks, HTMX just makes sense. There is no nonsense. It simply works.
I also really love how it is just a single-file UMD build. No build system needed; you just include the file in whatever way you want and it works. There is no extra build-step complexity, and there are no extra tools you need to learn. I like bonsai.css for the same reason as HTMX (very simple deployment, no friction getting started even as a complete beginner). The repo hasn't been updated in three years, but because it is just trivially deployed CSS it doesn't really age. I'm considering making my own classless CSS framework as a fork of bonsai's utilities, since it is MIT licensed.
I do kinda think HTMX is a little too big for really snappy loading times. Perhaps someone could make a tool that scans your HTML for the htmx attributes you actually use and strips the unused features from the htmx.js file.
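No such stripper exists as far as I know, but the scanning half of the idea is easy to sketch. Everything below is invented for illustration, and a real tool would parse the HTML properly rather than using a regex:

```javascript
// Hypothetical sketch: collect the hx-* attributes a project actually uses,
// so an (imaginary) build step could strip unused feature code from htmx.js.
function usedHtmxAttributes(htmlSource) {
  // Match attribute names like hx-get, hx-target, hx-swap-oob, ...
  const matches = htmlSource.match(/\bhx-[a-z-]+(?==)/g) || [];
  return [...new Set(matches)].sort();
}

const page = `<button hx-get="/like" hx-target="#count" hx-swap="innerHTML">Like</button>`;
console.log(usedHtmxAttributes(page)); // [ 'hx-get', 'hx-swap', 'hx-target' ]
```

The hard part would be mapping each attribute to the chunk of htmx.js that implements it, which is why this stays a sketch.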
I think those "full circles" are just the result of junior developers' lack of vision. First we got punch cards. Then we got the 80-character terminal (the 80-character width is derived from the punch card) and we dialed up with modems: the server drew an ASCII view and the client sent commands to the server. Then we got microcomputers to build "rich" client applications, to minimize dial-up time, lower latency, and add graphics and GUIs. Separating views, controllers, and models comes from the 70s. Because of phone bills, it was a good idea to have the server export files into a package that was delivered to the client; the client opened that package, used the application offline, then exported its data into a package that was sent back to the server. The analogy is playing chess by physical mail. We got email games and message-board software that worked offline. The MVC pattern was very fundamental; it makes sense to separate UI from logic, like having command-line programs or daemons do background tasks while a GUI program sits in front. After GUIs we got HTML and server-rendered views, which were basically the same as terminal software, but with graphics. "Rich" web apps came later, running Java/JavaScript/Flash/ActiveX on the client side. They were made for the very same purpose as the first microcomputer software: to minimize latency. Of course they allowed mixing, so that some parts were server-rendered and some parts ran client side. "Offline" web apps were also made mostly for the same purpose as offline applications: to save costs. They save costs on servers, but they also lower latency, and the application is more robust because it doesn't need a network connection all the time. And how about building a backend and making services? That is also a very ancient pattern. "Services" existed before computers: "architects" planned how organizations of humans performed tasks, but the idea of having services in computers is the very fundamental concept of automating human tasks. What is the purpose of services in Rails or some PHP framework? To save costs.
Of course we got a de facto standard for writing services on Unix that has existed for decades. The idea of scripting languages is that they run in the same process as the HTTP server to keep requests fast even when the script itself is slow, because starting a new process for every request was the bottleneck. Of course, "bigger" tasks or scheduled tasks should go outside the scripting language's scope. Junior programmers just didn't understand how the OS worked, and we saw some interesting scripting as a result. How about things today? Nothing has really changed in how architecture should be made: 1. Services are made to automate tasks. Before writing any code, we need to know how the business works. Things should work on pencil and paper first. 2. There is software, or the business itself, that is either accessible (using the current standard) or low latency (has client-side automation, but is less accessible). 3. Physical mail, messages, phone numbers to call, etc. were the standard; then we got electronic services using character terminals, and HTML is the current way to create accessible services. Services performed by humans are kind of deprecated, used as exceptions and where humans are needed. Character terminals were switched to HTML. 4. Low-latency software runs code on the client side; this is fundamental when creating an _application_, because applications are tools. Latency matters for productivity, but optimizing it is a trade-off against accessibility. A developer should know the accessibility requirements, the latency requirements, and how it should work without a connection. Latency requirements haven't changed in at least 100,000 years and they are well studied. Accessibility is based on the current standard and how our society works. 5. An offline application has always been better for server infrastructure, or when there is no always-on network connection. Always-accessible networks started to become common in the late 90s outside of intranets; but don't assume that if the application is used in the military, at sea, or in space, for example.
So in my opinion, the character-based UI was kind of the first electronic automation, and server-side rendered HTML was the accessible standard that followed it. Client-side code depends on requirements. Junior developers fucked up object-oriented programming back in the 90s, and they fucked up web development later with dumb patterns for the very same reason: lack of vision. Seniors didn't have those circles. They avoided them. It can be summed up in an awful truth: most developers today don't even understand how computers work.
This is a common pattern in software and elsewhere. The trend shifts from one extreme to the polar opposite, until finally balance is established. Exactly where on the spectrum the answer lies, always depends on the context.
There’s a reason we switched to rendering on the front end: monolithic server architecture is still expensive if it’s not self-hosted. And most don’t have the resources to offer a 99.9% uptime SLA on self-hosted.
@@jamess.2491 I don't get it. Are you talking about an offline mode (the ability to keep working when the server is down)? Because I don't see that, in general, we compute so much on the front end that it eases the backend's work in handling queries.
@@jamess.2491 How is this any different if you're using a JSON api, which also has to run on a server somewhere? There are also plenty of services for running your backend, 99% don't need huge resources to start out.
As an old guy who wrote his first Web page and cgi-bin programs in 1995, I'm tickled pink to see the trend back towards doing more stuff on the server. Huzzah!
Lovely thing about HTMX is that you can point the requests at any server that returns HTML, it doesn't care. Really frees up the front end, and means you can use whatever server technology makes sense for your app.
@@BradySie-wo6vf Except with JSON and non-hypermedia formats you now also need logic to display that data in a meaningful and usable way, and to determine what actions you can take with it. Which means you need to know exactly what that endpoint returns, what you can do with the data, and have extremely intimate knowledge of what it represents. And if the endpoint ever decides to change anything, your website will probably stop working. Whereas with HTML you literally just tell the browser to display the stuff you've been given. Not a single line of code on your part, or a single fuck to give about what you've actually received. That's the power of a universal interface and hypermedia.
@@mkwpaul Now that I think about it, I can actually see the simplicity, with the frontend just acting as the view layer and cutting out the middleman contract.
@@geomorillo Native PHP and JavaScript are the only thing I use. I haven't touched a framework since I left the corporate world. I never liked "framework of the week" fads anyway.
htmx is jQuery on steroids, after all. When I was using jQuery, for example to implement a like button, I returned a piece of HTML as the result. jQuery made the call, PHP generated the HTML, and then jQuery replaced the existing HTML piece with the one generated by the server. It was a simple AJAX call. Certainly htmx does it better, but the principle is the same. JSON was used internally, for server-to-server calls; it was never sent to the client. But you know, I skipped many revolutions, and I predicted many failures. I never used an ORM, always thought Scrum was shit, that most unit tests are a waste of time, and that TDD is rubbish. People are starting to get it now, 20 years later.
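For illustration, the pattern described in this comment can be sketched in plain JavaScript (the original used PHP on the server and jQuery on the client; every name below is invented, and the "swap" is modeled on a plain string so the example is self-contained):

```javascript
// "Server side": render the updated like button as a ready-to-use HTML
// fragment instead of JSON.
function renderLikeButton(likeCount) {
  return `<button id="like">Likes: ${likeCount}</button>`;
}

// "Client side": a stand-in for what $("#like").replaceWith(fragment)
// did in jQuery (and what hx-swap does in htmx).
function swapFragment(page, id, fragment) {
  return page.replace(new RegExp(`<button id="${id}">[^<]*</button>`), fragment);
}

const before = `<div>${renderLikeButton(41)}</div>`;
const after = swapFragment(before, "like", renderLikeButton(42));
console.log(after); // <div><button id="like">Likes: 42</button></div>
```

The client never inspects the data; it only knows where the fragment goes, which is the whole point being made above.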
Exactly. We did this in the early 2000s. It was a horrible clunky user experience even with Ajax. The problem is working with data on the client is orders of magnitude more nimble and faster than generating the HTML on the server and passing it to the client. HTTP requests are expensive. I thought we learned this lesson 20 years ago.
I've been a software engineer since 2008. We used to build applications much like how htmx advocates. The most popular pattern was MVC, a pattern the industry largely moved away from because it deeply tied the implementation of the backend to the frontend and vice-versa. For small projects, this approach is fine, but for complex projects or larger teams, this pattern became frustrating as even small changes to the frontend could have implications for how controllers and models (data and logic) were implemented. It was difficult to iterate on the frontend as it meant iterating on the server as a whole. I would be willing to bet a vast majority of engineers trying out htmx are either as old as me and really liked MVC - a minority of mostly backend engineers I'd imagine, and young people who never experienced the bad old days of MVC. In any case, I think a lot of people are going to find out. It's going to be interesting to watch them run into issues similar to the ones we did and figure out solutions. These issues are what drove us to use json communication and to separate our view logic from the server in the first place.
@@JohnHoliver Actually, I'm not on the HTMX hype train, I was only saying MVC is a horrible pattern for modern web app development, so many pitfalls. I just wanted to clear the record.
You can have your HTML SSR builder call the same JSON endpoint for data, though. So you still stay secure and you don't tie your templates to your backend.
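A minimal sketch of that arrangement, with every name hypothetical: one data function backs both the JSON endpoint (for mobile or third parties) and the HTML renderer, so the templates never reach into the backend's internals.

```javascript
// Single source of truth for the data (stand-in for a real DB/API call).
async function getUser(id) {
  return { id, name: "Ada" };
}

// JSON endpoint handler: returns raw data for non-browser clients.
async function userJsonHandler(id) {
  return JSON.stringify(await getUser(id));
}

// SSR handler: calls the same data function, but returns an HTML fragment.
async function userHtmlHandler(id) {
  const user = await getUser(id);
  return `<div class="user">${user.name}</div>`;
}

userHtmlHandler(1).then(html => console.log(html)); // <div class="user">Ada</div>
```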
Yes, please do a HATEOAS video. Also, GraphQL APIs are not going away if you need to offer public API support for backend services. So even if your CRUD site is built totally "old school" with htmx, if you offer any backend services it will need APIs for customers. At least now you can have a separation of concerns and use a "more secure" public API where it makes sense. In the end, just use the right tools for the right job. There is no one-size-fits-all framework or programming language. Use what works for you, gets the job done, and gives the least amount of headache.
Those enormous JS and JSON payloads were never a hard requirement. They are merely consequences of the so-called “JavaScript ecosystem”, which made it way too easy to saddle your project with thousands of dependencies-of-dependencies you never knew existed. Then somehow it became “best practice” and... well, you know the rest. It was always possible to build a good interactive UX with real-time refreshes of the relevant bits. We devs have only ourselves to blame for overcomplicating that goal.
I was waiting for this moment! Those who entered the programming field 15-20 years ago never understood the shit show that was happening on the front-end side of things.
Haha, reminds me of kids growing up: they hit a point where they know everything, no discussion, no reasoning, they just go for the shiny new "plant food". Then again, not all that glitters is gold 😊
There was a shit show before that in frontend, too. The idea of object-oriented programming was totally misunderstood and we got bloated desktop applications. Writing the frontend as a native application using RPC/HTTP requests was OK if it wasn't intended to be accessible. Writing the frontend with server-rendered HTML was OK, writing it in server-rendered ASCII was OK, and writing the application in React was also OK if it wasn't intended to be as accessible as server-rendered HTML. There was of course a middle ground: accessible software using server-rendered HTML, with some frontend components running client side where necessary. That started with Java applets/ActiveX, but jQuery was the library that truly started the JavaScript revolution, and it served the same purpose: writing rich components where they were necessary. Single-page web applications truly started with Angular, but that was too slow for mobile. From an architecture point of view, they were no different from native desktop applications, except they didn't need installation because they were standards-based. React was kind of the first tool for making a client application for both desktop and mobile without installation, using standards-based technology, so we didn't need separate native mobile apps if the browser did the job. So, I see that jQuery AJAX components replaced Java applets and ActiveX components, and htmx replaces jQuery. Single-page web applications and WebAssembly replaced native applications. Also, I really don't understand those who write server-side HTML -> rewrite in Angular/React -> rewrite in htmx. That is just crazy. Those developers have no vision of UI architecture.
I love your youthful enthusiasm, but you should probably go and learn some more web development history, as a comment like this will make you look a little silly in front of more experienced colleagues. There has been an ebb and flow between client-side and server-side rendering, always a moving arrow with emphasis on one or the other, ever since client-side scripting emerged as a potential alternative to server-side processing. I'm old enough to remember completely (and I mean completely) static sites, and the early scripting days: coding the back end in cgi-bin and the client-side code *twice*, in both JS and VBScript, to get the best possible browser coverage. The pendulum continues to move, and will continue to do so as the technologies and standards change. The points the pendulum moves between shift based on server processing capability, client processing capability, and average connection speeds between the two, and despite all the grumbling, all of us developers absolutely love the constant change, because it means a steady stream of interesting technologies to build applications with!
@@chillyspoon I never got on that pendulum. For me it was: the server handles rendering/static, server rendering with some client-side rendered components where it makes sense, or rendering completely on the client. That is an architectural choice, and it doesn't change that much. Also, JavaScript was mostly useless before 2009 or so. It was often disabled in browsers to avoid security issues. Early scripting was a waste of time on public web sites.
I got my first GraphQL job a couple of weeks ago. I successfully told them it's not a good idea for ongoing security; a simple mistake by a future developer exposes your internals to the world. So we are doing things properly now :)
I think there is value in both. I remember the old days when everything was done on the server side. I found it to be a pain to develop compared to desktop applications. One of the great things that happened with the SPA frameworks is they made web development feel like desktop development. That is a major reason why these frameworks were adopted. I do not think that is something that we need to drop, but we do need to recognize in a web environment that may not always be the best solution. That is where something like HTMX comes into play. From what I have seen HTMX looks like a huge improvement over the old way of doing things on the server side. So I think the right approach would be to start your development off with a focus on the client side, and then when appropriate you add HTMX, or something similar, into the mix.
The problem with returning HTML (and not JSON) from your server is that you'd probably also like to serve mobile apps and don't want to duplicate your API
You can have a central piece of software for your backend that connects to adaptors via gRPC or whatever, and those are the ones that send the HTMX/parse your components, or in the case of mobile go from gRPC to an HTTP API. I have been doing that even outside of HTMX, and it makes sense, since that way you can have this be a service or a CLI or whatever from a main core... Model-View-Controller exists for a reason.
I would argue that most apps are useless shit that nobody really wants to install. There is a decline in mobile apps for that reason. We don’t want 10 million apps for minimal stuff. Only the stuff that matters gets used as an app.
Design your server-side software properly and it's not much extra work to expose an API for an app *if necessary*. Most applications never get to that stage anyway.
I've been having a lot of fun working with simpler SSG's like astro and using htmx + partials. Something about it just feels so right compared to the SPA approach. It's not a magic bullet for everyone, but it is for me and it might be for a lot of others too. Then throw on Tailwind and you've got rocket fuel 😁😁
I thought the idea behind single page applications was instead of sending the whole page including all the html and css back and forth, we just need to send the data... what happened to that? Why are we talking about giant payloads?
PHP is way messier than JS, both in terms of the actual language and the ecosystem as well, what are you on about lol. Even Laravel relies too much on magic like RoR did which is why everyone is moving away from it.
Basically Conway's law in effect: the software architecture reflects the communication structure. PHP was all the rage back then for building server-side apps. It also assumes the server is secure, so it basically moves the security model to the backend. We are still in a world of "frontend" versus "backend" versus "database administrator", and that's where Conway's law comes into the picture, while security cuts across the stack.
I mean, no one is forcing anyone to send "massive" json, often people are sending data that is not needed for the current operations - graphql _is_ a way to slice that, as is jsonapi for that matter.
I am also wondering: since we are moving towards IoT, will IoT devices need HTMX? Or do they just need to send and consume data as needed? Not all software is web apps!
This is a subject I think a lot about, and honestly no matter where you put your code, if you have a lot of features or data it’s going to suck especially if you are trying to rush. Personally I prefer server side code because there is less state to muddle through
I never stopped doing this. I never "got" into the idea of sending back JSON then faffing about with it then putting it on the screen. Always seemed a round about way to do something but it was painful to do. HTMX just filled in the gaps I had and made some great fast sites now with very little additional JS. Funny that the world always comes back around. "We told you but you didn't listen"
For big companies it won't make sense to "go back". If you have a website and a mobile app, plus several different business/customer-service tools, the data can come from the same APIs with different front ends for different purposes. Meaning there's one backend team serving up microservices for many frontend teams. With React and React Native and other such stacks, it makes sense, as code and developers can be shared. No chance my FTSE 100 company is adopting HTMX any time soon. It's nice for my side projects, though.
@@gaiusjcaesar09 Oh, I would never expect huge companies to throw it all away for HTMX; that would be insanity. HTMX will never be something for established code bases.
@@gaiusjcaesar09 Web developers, for some reason, seem to think that the browser is the only frontend that exists. If that were true, of course, mapping JSON to HTML would seem like redundant work.
A company will not drop a JS framework in favor of htmx, but for a new product you could have a back-end team working on a single API, which is used by both apps (web and mobile), and a couple of people specifically using these APIs to generate HTML pages and partials for the web app. The same company may work on different products. @@gaiusjcaesar09
I think the better solution is upgrading frontend devs to fullstack devs and allowing them to customize the API/DB schemas to meet their needs. Next.js is currently not a great choice (slow HMR, buggy caching, major breaking changes, bad SSR performance). Static sites with Astro, combined with SPAs using Solid.js/Svelte/React for the dynamic parts, plus something like tRPC/react-query, is a much better solution.
But the problem with Astro is that with every page reload or navigation to another page you need to redownload the entire UI framework you have chosen even if you have one single dynamic button that needs to ship JS. I feel Astro is great to use as its intended purpose: pure MPA with maybe a particular page serving as an entry point for a SPA.
I converted my Django based static website into a reactive SPA using HTMX. It was doable in less than a day, had I known HTMX a little bit better. Still, it only took me a couple of days of tinkering and it works beautifully
Being able to delegate a maximum of logic to the client is a clear advantage; separation of concerns is more powerful than the SSR monolith. You basically download your site from a CDN and rely solely on a well-made API to have a working client.
I'm a middleware developer, so my bread and butter is making APIs. I'm starting to see the appeal of server components, though. I've dealt with teams asking for a field to be renamed before. I've seen a request asking to move some of their app logic into the API to take care of "tech debt" (they didn't want their sister app to reimplement part of the contract they defined for themselves -_-). Great video Theo! It connected a lot of dots for me.
@@buza_me I'm trying to learn a front-end framework this year. I haven't fully understood server components because that's what an API is for my world. However, I see now that server components are the front-end's way of connecting directly to a datasource with the flexibility of defining their own contract.
@@josephgonzalez9342 I see. It is not exactly that; server components are just rendered on a server, serialized to binary, and sent to a browser. That's (very, very) basically an HTML template like Handlebars or whatever else, with a new approach to serialization and delivery, as I understand it. Call the DB directly from your HTML template, no need for APIs, etc. That is what Theo is preaching. It may be a working approach for a very small team, idk, but to me it smells. You can define actual API routes in Next and call them from your templates, or from server actions. What Theo is preaching is that user-facing API servers should not exist: the only way for a user to get anything from your app should be through HTML, no fetches from the browser during the user session, etc. That is what I got from this video. Maybe I misinterpreted something, but yeah.
@susiebaka3388 Yeah, I have my own small software company, all this time we've been doing basically PHP + Javascript for cute rendering on the side and we somehow survived. Now I'm listening I've missed the whole full circle of an epoch 😀
And that is all you need, for the vast majority of websites. Certainly if you are creating Figma or Canvas, then it is a different story. @@guestimator121
@@guestimator121 yeah thats so funny man hahahaha. glad to hear you've stuck it through! tbf the hype is that it's "on the edge" now so ideally time to first byte is lower. i think the modern hypetrain frontend stack is meant to get react devs to the edge faster, in more ways than one in if you catch my drift
Doesn't matter. Anyone who makes good, productive software is doing the job of a whole team if it is anything more than a couple of files or involves multiple technology stacks.
I'd really like to see a real world example of a full stack complex application built with htmx. I've seen a lot of hype around this but yet to see a working example out there, in the public. I'm not hating on this, just need to understand the practicality of this new tech.
Most web apps can be migrated to htmx with the entirety of their features intact. There are numerous examples of using htmx for CRUD apps out there; adjust your searches.
The best thing about htmx is that it is not really a new tech/new concept. It has been re-branded, but the concept that it is built around has been with us almost since the beginning of WWW. Just a few other tech examples built on the same idea are AJAX with PHP, JSP, JSF or ... and there are many many complex enterprise applications around the world built on top of these technologies and they can scale very well if designed with modules in mind.
htmx sucks (compared to React) for complex web-app state, because it does not have HMR (hot module reload). You lose all your frontend state on a reload, so you have to click/open the state you want again (if it's not saved in the URL). Do that 100x a day, and it's 10x easier to switch to Vite + React/Svelte/Solid.js for proper HMR.
@@davidsiewert8649 You can have HMR with HTMX too; whether you get hot reloading, and how easy it is to configure, does not depend on HTMX itself but on the technology you use to render the htmx pages, because that's where the state is meant to be managed. If hot reloading is important to you, you can choose a language and framework that supports it. And that's the best part: you can choose your language, and you are not bound to JS.
I'm not sure running blindly into server components and htmx is the solution. For very large and complex apps this is totally legit, but the majority of web apps are totally fine with an API, in my opinion.
@@evancombs5159 Yes, but the client is dumb. You can change the response and that's all; no problems with what data the API returns. No need to manage state and side effects, which gets messy really quickly. It is just a lot simpler: you write simple templates, return the content, and that's all.
Next.js-like codegen is a third solution for this, JS-only for now though (you mix backend procedures with frontend code, and it gets split and extracted into RPC automatically). In theory there is also multi-target Kotlin, which could do JVM backend and native/JS frontend file splitting. In the future it's reasonable to expect the ability to mix calls between different imperative languages in a single file, at least for JS, Go, and Python.
This is so funny. I left frontend dev in like 2011, now I'm back, and I feel like I haven't missed a beat coz returning html from the server is popular again! Woot!
Aren't server components just API churn in UI disguise, though? The backend still has to implement one "endpoint" per screen. Granted, as a full-stack developer serving just web clients, it's great.
It's easy to rag on the client-side solution because its flaws are observable at surface level. Tbh, I agree with the approach htmx takes for certain scenarios, but relying on the server for every little interaction seems very ideological and downright insane.
@@jameshickman5401 For most applications you only have a few places where heavy interaction happens. It is perfectly valid to use Solid, Svelte, Preact or even React for those cases. As you said, there's no need for everything to use them.
Incorrect. The API churn effects are massively amplified by having to perform complex state-management routines to try to ensure consistent changes, which eventually fail to deliver as requirements change: the shape of JSON objects shifts, APIs get fat, and API responsibility expands from just a program's contract into an incomplete state-management validator for the front end. That doesn't even account for protocols like HTTP/1.1 and HTTP/2, which magnify the problem of fat payloads, with large responses to the user and the round-trip requests needed to fulfill the frontend devs' and clients' needs, increasing network latency at the end user's expense. I haven't even addressed DB issues in terms of distributing read and write transactions. This is mainly a system-architecture problem, and if you as a dev aren't addressing architectural approaches that facilitate the design of your backend services, DB, and frontend systems, then you are just transferring inefficiencies to end users.
So this is just to say the UI team should not be the ones with the authority or flexibility to change the queries, but must instead make continuous requests to the backend team to modify the API (the original complaint from the UI team in the article), so that the tuning is done via a secure proxy. GraphQL is the solution that lets the UI define the query, eliminating this pipeline overhead, but it comes with the requisite of defining access and authorization at the field level of the schema definitions.
I think it comes down to keeping your data and code close together. If your UI is heavily dependent on database queries, run it on the server. If your UI is highly interactive and mostly needs session state (that maybe gets persisted to a backend periodically, à la Excalidraw or TradingView), rendering on the client makes more sense. I think the flexibility React Server Components offer really gives devs the tools to satisfy both of these situations. Sure, it's not a perfect solution, but nothing is, and there will be rough edges no matter what tech you go with.
I've been playing around with htmx for a few evenings so far. I generally like the direction it's heading in. A still-open question for me is how it handles e2e tests, CSS tooling, and the like. In my preliminary search it all seems possible; I just haven't yet formed an opinion on whether I like the available approaches as much as current front-end tooling (allowing for a new-to-me skills issue).
Unlike React, HTMX really is just a library (and not a framework). This means that you can make any other decisions you like for the rest of the stack. You can combine it with anything. For example, I've used it with Astro. Nothing stopped me.
Depends on your use case. What if you need offline mode, optimistic UI updates, or multi-user conflict resolution? If the whole set of data that's visible to a client is relatively small, you might be better off synchronizing a complete data bundle with delta updates. Solves the same security and API churn problems, but provides instant cold starts, offline access, and the appearance of zero latency for interactions. This is more like what multiplayer video games do. A big chunk of devs out there could stand to think about their networked apps a little more like "How would I build this if it was a realtime game?" Either way, SSR or data replication are better options than a broad API, especially one that's basically a generic data-access layer like GraphQL or OData.
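The bundle-plus-deltas idea above can be roughly sketched as follows. The delta shape here is invented for illustration; real systems (game netcode, CRDT libraries) use far more compact and conflict-aware encodings:

```javascript
// Client keeps the full data bundle locally; the server only ships changes.
function applyDelta(state, delta) {
  const next = { ...state };
  // Apply upserts, then removals, from the (hypothetical) delta format.
  for (const [key, value] of Object.entries(delta.set || {})) next[key] = value;
  for (const key of delta.remove || []) delete next[key];
  return next;
}

let bundle = { "doc:1": { title: "Draft" }, "doc:2": { title: "Notes" } };
bundle = applyDelta(bundle, { set: { "doc:1": { title: "Final" } }, remove: ["doc:2"] });
console.log(bundle); // { 'doc:1': { title: 'Final' } }
```

Because all reads hit the local bundle, interactions feel zero-latency and keep working offline; only the delta exchange needs the network.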
@@modernkennnern But why? Does it make sense to use sockets to send small changes? Do you validate forms on the server? There is logic that makes sense on the server and logic that makes sense on the client. It just needs to be simple and efficient to move the logic and communicate between both. Which we still have not achieved.
@@EduardsRuzga > do you validate forms on server? You could; you have to do it anyways for security reasons, so why write the same logic twice? (client-side for UX and server-side for security)
@@modernkennnern Yeah, I guess I picked a slightly wrong example. I was thinking of immediate responses as the user types; it seems like a complete waste to go to the server and back for that. For forms, you do need to validate on the server, you are right. But you do not need to write it twice; that's the point of server components. You share the code. Make it easy to move code between server and client to do better. Moving all code to the server, or all code to the client, for simplicity's sake is a false goal. Both are worse for costs and product quality. The real issue is that moving code on a whim between server and client is still challenging.
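The "write the validation once" idea from this thread can be sketched as a single pure function that runs in both places. All names and rules below are invented for illustration:

```javascript
// One validation function, shared between browser and server bundles.
function validateSignup(form) {
  const errors = [];
  if (!/^[^@\s]+@[^@\s]+$/.test(form.email || "")) errors.push("invalid email");
  if ((form.password || "").length < 8) errors.push("password too short");
  return errors;
}

// Client side: run on each keystroke for instant feedback.
console.log(validateSignup({ email: "a@b.com", password: "longenough" })); // []

// Server side: run again on submit, because the client can't be trusted.
console.log(validateSignup({ email: "not-an-email", password: "123" }));
// [ 'invalid email', 'password too short' ]
```

The UX check and the security check are the same code, which is exactly the duplication server components (or any shared-code setup) are meant to remove.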
The lesson from more than half a century ago (on Multics) is that the API is the real user interface. The client-side app is more like a skin. The big concept is factoring the problem to ensure the API is the security surface. The additional benefit is that others riding on the API can be your friends instead of being seen as attackers. This should be taught in a good CS curriculum. Should be ...
I have not worked on it, but HTMX sounds like PHP making a comeback. It does not solve access control issues, it just moves them to the server side and now you have to generate the right HTML for each user rather than JSON.
No, it's totally unrelated. React looks more like PHP to me, because of stuffing code and CSS into templates, and even business logic in the weirdest places. And if you mean server-side rendering, then sorry, but you don't understand how HTML works.
For me, a problem is the lack of fullstack developers. If a developer needs an additional endpoint, they should be able to make it themselves without asking anyone else. Also, I see that many developers don't distinguish between internal and external APIs. An external API is for people outside your company to integrate with your app, and you don't know what they will do with it. An internal API is when multiple parts of the same app communicate with each other (like the frontend calling the backend API), so you know exactly what it is used for and what data is needed. In projects I do by myself, I typically have a codebase divided into modules (bigger sets of features), and each module contains code for both the backend and the frontend. To most people that probably sounds like heresy, but it is very practical in most (not all) projects. I also made my own templating engine, so I can have the same view rendered both server-side and client-side.
Maybe someone can help me understand this... Isn't the View component of MVC literally what they were referring to? like the intermediate layer between the API and the html that is eventually served to the client?
I feel like reactive frameworks have lost sight of their initial values; Deliver a responsive and reactive experience in a *simple* way. I think the downfall started when people became so obsessed with puting compute time on the client side, that they are willing to forgo the simplicity.
You think that because you don't know what the point of them was. It wasn't to make frontend web development simpler. The point of these frameworks was to provide a way of rendering views client-side and to enable the separation of presentation logic from the backend: to move the V in MVC to the browser. These frameworks were invented because we had a lot of trouble with separation of concerns back when MVC was king. Trust me, I lived through it. MVC was the only game in town when I started, and I don't look back fondly on those days. That said, this is a common misunderstanding, and I think it stems from how these frontend frameworks marketed themselves. When "easier" and "simpler" were thrown around, once we got into reactivity, it was in comparison to earlier attempts to move views to the client, such as frameworks like Backbone. Not to say they are easier than serving HTML snippets from the server.
This is fine, as long as you don’t need to go with multi client architecture. If you use the same api for web and iOS/Android apps you can’t just deliver rendered HTML.
If you aren't using io functionality that isn't built into the browser, arguably it should be a web app not a mobile app anyways. But there are multiple mobile development options that utilize html.
There are some obvious pros and cons here. SSR makes sense if the needs of the UI are not super intricate or interactive, the additional requests and server-side work are not an issue for your backend infra, the SSR approach or framework lets you write clean and clear markup rather than ad-libbed spaghetti, and you don't have any requirement for that backend service to be a data API rather than an HTML API. There is your decision. Personally, I'm a big believer in the frontend/backend bifurcation, like in every other form of software (imagine if your game server or your mail server were in the business of sending UI), and the idea of HTML APIs makes me want to gag. But I also understand the pushback against the overly complex, constantly changing frontend landscape nightmare (ahem, React) and the desire and willingness to seek alternatives.
The problem with where front-end APIs ended up, and the move to GraphQL, and the complexities atop, wasn't in the move to GraphQL, it was before that. The APIs churned because they were allowed to diverge per page, in the first place. So now, instead of having authenticated and authorized users with roles, accessing parts of data models, based on those permissions (that *need* to be there at any / every level, regardless), you have per-page models that drive every subview of every page... I've seen them get as bad as requiring a back and forth to change sorting order, or to remove items in a filter... which just guarantees an API rewrite as soon as the design team makes a page refresh, or the marketing team wants to add a new banner CTA. The failure there is equivalent to redownloading game updates through Steam every time you make a settings menu change.
I’ve never had a problem creating performant user interfaces with general-purpose APIs. You can add sorting and filtering parameters to an API. You can make APIs that selectively denormalize. And you secure each endpoint by role. Thinking you need to create APIs that generate data in exactly the format of the UI is just a premature optimization. Client-side JS is fast. You can radically transform 1000 rows of server data in a few milliseconds, and should do this rather than creating yet another endpoint.
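As a rough illustration of that client-side transform (the row shape and field names are invented for the example):

```javascript
// Filter and sort ~1000 rows entirely in the browser. For data
// of this size the work takes on the order of milliseconds, so
// no bespoke per-view endpoint is needed. Row shape is made up.
function presentRows(rows, { search = "", sortKey = "name" } = {}) {
  const needle = search.toLowerCase();
  return rows
    .filter((r) => r.name.toLowerCase().includes(needle))
    .sort((a, b) => String(a[sortKey]).localeCompare(String(b[sortKey])));
}
```

The server keeps serving one generic list; the sort order and filter live purely in the client and never trigger a round trip.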
Returning html is only consumable by browsers. JSON payloads are versatile. Each has their limitations/issues/applications. That’s our job, to strategize and architect without using one approach as be all end all just because there are pain points.
The issue with HATEOAS is the same as with every other attempt to use your backend as the single source of truth for every state: it goes against physics at its very core. Lots of tiny details are frontend app-state related that you don't want to communicate to your backend because it's inefficient, and even worse, the communication can break down (think CAP theorem: you are building a distributed app/db), so if you expect consistency, you will be disappointed. Second, and even worse, the customer, the person using your frontend, is part of your system. This is inevitable, because otherwise you haven't really built a coherent narrative of conveying information. So you have three different agents, the BE, the FE, and the human, and if only one of them is expected to dictate state to the other two, that guarantees suffering and bad UX.
You have no idea how much I had to scroll to find this comment. IMHO Hasura is THE easy-to-use solution for all your API needs. And it feels like nobody here knows its existence.
I think every piece of large enterprise software will sooner or later have to create a flexible query language that is accessible to clients, because they either need to provide an interface to third-party software or have a highly user-configurable UI, like dashboards, generated reports, complex search filters, etc. But even when you don't, I'm not convinced HTML is a good client-server protocol. If the goal is to provide a stable backend API and then have the frontend team write a frontend API on top of it, you can just as easily do it with JSON or Protobufs or whatever your favorite format is. My problem with sending HTML (outside of live-server solutions, which have other problems) is that I lose all the flexibility and ease that comes with functional reactive programming. ui = f(state), instead of directly manipulating the DOM, was a huge step forward, and under no circumstances do I want to give that up.
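The ui = f(state) idea in miniature: a pure function from state to markup, re-run on every state change. This is a framework-free sketch; real reactive frameworks add diffing and event wiring on top, and the todo shape is invented.

```javascript
// The whole view is a pure function of state. To update the UI
// you change state and re-run f; you never poke the DOM directly.
// (Real frameworks diff the output instead of re-rendering wholesale.)
function f(state) {
  const items = state.todos
    .map((t) => `<li>${t.done ? "[x]" : "[ ]"} ${t.text}</li>`)
    .join("");
  return `<ul>${items}</ul>`;
}
```

Because the markup is derived, not mutated, the UI can never drift out of sync with the state that produced it.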
What about mobile apps? It’s great if the only clients you have are web apps. Love it. But when you also must support other non-HTML clients, the calculus changes somewhat, no?
htmx sends an "HX-Request: true" header on each request it makes; you can check that and the other HX-* headers if needed to decide whether to return HTML or JSON. But IMO, at that point you might as well just do all JSON, or even better, Protobuf.
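A sketch of that header check as a plain function. The HX-Request header is real htmx behavior; the handler shape and the payload are invented and not tied to any particular server framework.

```javascript
// Content negotiation on the HX-Request header: htmx sets
// "HX-Request: true" on the requests it issues, so those get an
// HTML fragment while everything else gets JSON from the same
// endpoint. The user payload is illustrative only.
function respond(headers, user) {
  if (headers["hx-request"] === "true") {
    return { contentType: "text/html", body: `<span>${user.name}</span>` };
  }
  return { contentType: "application/json", body: JSON.stringify(user) };
}
```

This lets one endpoint serve both an htmx page and a programmatic client, at the cost of the endpoint knowing about both formats.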
"Endpoint that serve HTML" LOL, we did this in early 2000s. it was called server-side HTML processing or in other words, PHP/ASP.NET/JRE etc. The problem with this model is that HTTP requests are costly and many of the front-end functionality relies on cached data. You get a list of 20 json objects from your endpoint and you apply filtering to those objects on the client side in any way you wish. Trying to do this on the server will result in a ton more HTTP requests. This was the whole reason we moved dynamic HTML generation to the client.
7:00 With Server Components, now your frontend devs are writing SQL directly to the database. Which, ironically, makes them do the job of the backend dev team. They become full stack devs. If you want that, that's cool. But if you have a separate backend team who are the experts in everything backend, like writing and optimizing SQL queries, then it might not be what you want.
Server Components don't mean separation of concerns isn't a thing anymore. With an SPA, your backend probably already has layers of abstraction for controllers, services, repositories, entities, etc. (substitute any synonyms you like). The vast majority of that doesn't change. The only difference is that instead of your user-facing layer returning json, it uses Server Components to return HTML and JS. In other words, front-end is still front-end, backend is still backend. Only difference is the API between the two is made up of public functions rather than JSON.
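A deliberately simplified sketch of that "API of public functions" idea, with no real React Server Components machinery; `findOrders` stands in for an existing service layer, and all names are invented.

```javascript
// The backend layers (services, repositories, entities) stay as
// they are. Only the user-facing layer changes: instead of
// serializing to JSON for a client-side renderer, it calls the
// same service function directly and returns HTML.
async function findOrders(userId) {
  // stand-in for your existing service/repository layer
  return [{ id: 1, total: 42 }, { id: 2, total: 7 }];
}

async function OrderList({ userId }) {
  const orders = await findOrders(userId); // plain function call, no HTTP
  const items = orders.map((o) => `<li>#${o.id}: $${o.total}</li>`).join("");
  return `<ul>${items}</ul>`;
}
```

The frontend/backend boundary still exists; it has just moved from a serialized wire format to an ordinary function signature in one codebase.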
@@etiennelevesque6015 you could have front-end devs call some backend functions in RSC's, instead of writing SQL directly in RSC's... but haven't you then just shifted the API churn one step backwards? Now they will ask for tailor made functions which back-end devs will make (with numerous bespoke SQL queries)..
@@magne6049 It's true churn will never fully go away, but that's true of any system. There's a balancing act here as well between abstractions (which imply some amount of churn) and bespoke, straightforward solutions (which imply repetition). On the one hand, it takes more work to design and maintain abstractions, but heavy logical repetition makes code harder to change as well and increases the risk of errors and inconsistencies. Also, there's still value in RSCs because they reduce the number of abstractions (and therefore the work required to maintain them). Even if you keep a full architectural separation between FE and BE, it's much easier to change a method in a single codebase than a web API. No more building entities to map between the API and your domain representation, no more (de)serialization and external input validation, no more backwards compatibility (and the accompanying 3-step process to remove stuff), etc. To be honest, I personally believe that outside of highly specialized needs (deep technical problems, very high scale, etc.) fullstack devs are better suited to work on the majority of apps out there anyway. My opinion may therefore be biased, but I think RSCs still bring big benefits to teams that prefer to specialize on one side.
@@magne6049 Also, shifting the API churn one step backwards sounds like a great thing to me! That's the basis of the whole shift-left movement (be it applied to security, agility, testing, whatever). The earlier you can hit these kinds of frictions, the better, because you've invested much less time and money into something before realising it's not quite what you need.
People gravitate towards faces, according to some research. It's more natural for people to recognize other hoomans than graphics alone. Hence it's clickbaity and favors the algorithm, whether we cringe or not.
It's such a double-edged sword for creators. It looks really childish and cringe to people who are regular followers looking for this kind of content. However, they are more likely to get random clicks from people drawn towards the thumbnail because "haha funny face". So it's kind of a toss-up between annoying your loyals or losing some new viewers.
@@Never-t4y This is so true, actually. While I personally don't mind the face, there will be a percentage of viewers who may or may not be comfortable with it. As for the numbers, I can't really say; I don't know how your YT newsfeed looks, but mine indeed has a lot of faces, so I've probably acclimatized to it. It's actually good to hear a different opinion, since the YT algo has probably desensitized me over time.
I feel people are shilling HTMX a little too hard, we have already been through this. We will all go back to basically what was PHP development 10 years ago and end up back at frameworks.
The whole frontend saga just made me realize it takes about 10 years to become a competent senior software engineer. After server components, I can confidently predict the next big trend is going to be SOA.
@@bossgd100 Service Oriented Architecture. The problem described in the video, of a client needing access to multiple endpoints/services, won't get solved by running code on the server. Surface issues like access control might be solved easily, but the main issue is API design. If people keep requesting that you add more endpoints to meet UI needs, the same rules will apply on the server side. Bloated APIs will be replaced with bloated RPC contracts that are impossible to maintain. Then people will try solving that issue with SOA/SOAP before rediscovering REST.
I started a TypeScript/React -> Clojure/Hiccup/HTMX conversion recently. It took about 30 minutes of spiking HTMX usage to learn 80% of everything I would need on the frontend. Let's see if I can escape the JS ecosystem's grip... I'm using CSS modules with Vite, and I export the CSS class names to a JSON file for Clojure's consumption. Similarly, I export i18next localizations to JSON.
I think a better solution to API churn/security issue would be to just ditch JSON in favor of Protobuf. Not only is it faster but it's easy to implement a system that guarantees your API is consistent.
Revolution! This is how the web worked in the 90s. It all circles back, but at every turn it gets refined. Having state on the client is a bad idea; it always has been. Servers nowadays are sitting at 1% CPU doing nothing but being proxies to a database. If they use any CPU, it is because they are parsing huge JSON docs to be transformed into other JSON docs. We could use a little of that CPU so apps are fast on the client.
"this is not a new revolution" It definitely isn't. Entire languages were based on this idea (such as php and jsp). It's not a new revolution, it's an old paradigm that was purposely left behind.
Honestly I think I’m ready to accept that I’m turning into one of those “but that’s how we always did it and it’s never let us down” type of devs and stick to pages-directory + tRPC. I think I’ll check out the server-first stuff when Remix, Solid, or TanStack release their versions.
I'm with you, but HTMX is not hard to learn, and if you want to keep coding in JS/TS I suggest looking into the Bun + Hono + HTMX stack, which is basically Node + Express + HTMX, but better.
But what if I want an IoT device or a robot to pull the data, and not just display a pretty picture? I think this is one of those cycle things, where in a few years there is a framework on top of HTMX to get JSON and render it on the client side. But what makes sense to me is to have an expressive API for the backend, while the frontend also lives on the server and only sends the final result to the client as HTML. So, basically, PHP.
It's so weird that writing server-side code is treated like a "new" or "scary" thing in JavaScript land. I'm a PHP dev, so server-side stuff is just the norm. There's no talk of "server components" or of creating HTML server-side; it's just the normal way to do things. Or maybe I just don't know what the PHP community is talking about. I see so much JavaScript commentary.
I still don't see how you can use this to build a web app, a mobile app, and support third-party applications. Our company used to do that with a normal JSON REST API. I can't see supporting that with htmx or server components.
We are going full circle. When I started web development, pretty much everything was done on the server. Back then we didn't even have jQuery, and AJAX was pretty new.
In my opinion, as someone who will as a hobby write little tools for myself to use, I would be fine with this as long as there were APIs left available with the intent of tools being developed targeting them. I COULD use JQuery and parse datafields and be annoyed at it myself, and it ends up creating more work for everyone. I am pretty new to development overall (only a few years of hobby coding before I got to uni) but I would be disappointed if the route I took to be more interested in coding became more closed off.
The problem is devs with no understanding of projects with more than two devs. A customer expects that it's easy to change the label of a value, because there should be one location to change it, and the customer will not be happy when he has to create 4 tickets because the code is copied 4 times. The customer goes from unhappy to angry, and loses trust, when such little changes create other bugs: when the same copied code is changed by different devs, it drifts apart and develops its own problems. It also happens that integrated products need a version update. If the interface only gets a different signature you can be happy, but if it also changes its behavior, and about ten different frontend calls use those methods directly, you get bugs that are quite hard to reproduce, and of course everything is coupled and a nightmare to change or bugfix. These are only a few of the problems that will appear.
I don't think APIs will be killed, because we still need JSON to exchange information between servers. What I see here is APIs can still be used and then given to the HTML that will be returned from the server. I guess you get my point.
I like watching, from time to time, videos of web developers running in circles, while I have been using ASP.NET Framework and ASP.NET Core for ages, which have just stuck to server rendering most of the time. You can still use JavaScript in addition; no one is stopping you. And you can combine IndexedDB and local storage as well for things that don't need security.
You'll have a back-end API which will send a generic JSON to a Nextjs server. Nextjs server will pluck the needed data out and narrow it down to some sort of shape suitable for charts, for instance. Then you call that route handler with server action for web front-end, and with and API call from your native front-end. Much simpler, isn't it? (hah)
@@buza_meAh, so this "simpler approach we used in the good old days" I keep hearing about means using 2-3 servers (one for JSON, one for HTMX, one Next.js) to serve your content instead of one. Great, what a joke. I'll just continue using JSON API backends and SPA frontends as if nothing noteworthy was made, because it wasn't.
HTMX is cool, but with the server doing all the work you also need a bigger server ($), when the client could do some of the work for free and very fast. I'm not sold, tbh.
I think this movement back to the server is really just a byproduct of the maturing of tools for constructing data-driven UIs across many different client domains. 15 years ago you could not (easily) create a mono-repo of shared UI components across mobile, web, and server, for instance. APIs help facilitate communication across systems that do different things with the results, so as far as server-to-server APIs are concerned, those aren't going away. But the client-server sharing of UI components and the client-side hydration tooling that is coming out is definitely something crazy new and interesting.
@@patiencebear I wonder that too . I mean you can use the same data models not a problem when you use JS or C# same goes for APIs which give you those data. Sending UI components across the internet is always a security as well a performance issues but this is the price you pay for a hyperdynamic webpage and easy UI updates. React Nativ and MAUI you don't see something like this except when you include a browser in your app.
@@Fiercesoulking I thought about something like Ionic or Cordova; there it would be possible, but that's not the same. The cross-platform experience is rather "meh" between desktop and mobile, in my experience, and server components are not possible AFAIK.
1. Don't use GraphQL for API churn. 2. Don't just throw out whatever API your frontend designers want on a whim. 3. Go ahead and shift your API churn over to backend spaghetti-markup churn, if that's what you like.
Endpoints to display pages, then proxy in any heavy lifting from the backend. Job done. Honestly though, I've been using this approach since 2018. I wrote a small vanilla DOM-manipulation abstraction script and router that do this in sub-3k, and I'm no genius by any measure. It just seems like a bit of common sense to me.
I think GraphQL has its place and can benefit everyone if done right. You can, for example, implement authorization based on GraphQL directives. Server components do remove a lot of the need for this, but they can't be used for everything. Especially with more of the frontend frameworks on mobile moving towards a declarative approach similar to React (Compose on Android, SwiftUI on iOS, Flutter, React Native), it has its place. It may still be possible to do something similar to server components there, but it is much harder, and if your application is anything other than a JSON pretty-printer you won't get far with server-driven UI.
A client does not always display HTML. It can be a mobile client, or even some fancy client that displays the whole state of the system with a set of LEDs. One [RESTlike] endpoint that emits JSON (or XML, or another form of raw data) can cover many types of clients. An endpoint that emits an HTML layout can be effectively used just to display HTML. What's the point?..
@@TapetBart Isn't plain PHP enough to serve just HTML? Or we have to use HTMX at first since we don't need app support; then we have to add GraphQL since we decided to support some apps; then we have to rework once again and add REST since we have to support some external integration, and so on and so forth... Why not just use an approach that _works_ well and _scales_ well?
From another angle, I think server components and HTMX make web programming way more accessible to people like me who come from a non-web background. Unlike React and many other frameworks, HTMX just makes sense. There is no nonsense. It simply works.
I also really love how it is just a single file UMD. No build system needed, you just include the file in whatever way you want and it just works. There is no extra build step complexity, there are no extra tools you need to learn.
I like bonsai.css for the same reason as HTMX (very simple deployment, no friction in getting started, even as a complete beginner). The repo hasn't been updated in three years, but because it is just trivially deployed CSS it doesn't really age. I'm considering making my own classless CSS framework with a fork of bonsai's utilities, since it is MIT licensed.
@@BosonCollidershare a link to the repo you're working on
Seems interesting
@@BosonCollider yeah. Sounds cool. I have mainly been doing tailwind or raw css for my styling. But trying something else seems fun.
I do kinda think HTMX is a little too big for really snappy loading times. Perhaps you could make a tool that scans your HTML for htmx attributes you don't use and strips the corresponding code from the htmx.js file.
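A rough sketch of the scanning half of that idea: inventory which hx-* attributes a set of HTML sources actually uses. The stripping half would need a mapping from attributes to htmx internals, which is out of scope here; everything in this snippet is an assumption about how such a tool might start.

```javascript
// Collect the hx-* attributes used across some HTML sources.
// A real tree-shaker would map each attribute to the htmx code
// that implements it and strip the unused parts; this sketch
// only does the usage inventory.
function usedHtmxAttributes(htmlSources) {
  const found = new Set();
  for (const html of htmlSources) {
    for (const m of html.matchAll(/\bhx-[a-z-]+(?==)/g)) {
      found.add(m[0]);
    }
  }
  return [...found].sort();
}
```

Running it over a project's templates would tell you which fraction of htmx's feature surface you actually depend on.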
I agree. It feels elegant and much cleaner than these monolith frameworks
As someone who has been doing a lot of Rails dev since Rails 3... I love seeing everything come back full circle to server rendered html.
I think those "full circles" are just the result of junior developers' lack of vision.
First we got punch cards. Then we got the 80-character terminal (80 chars is derived from the punch card), and we just dialed up with a modem: the server drew an ASCII view and the client sent commands to the server. Then we got microcomputers to build "rich" client applications, to minimize dial-up time, lower latency, and to get graphics and GUIs. Separating views, controllers, and models comes from the 70s.
Because of phone bills, it was a good idea for the server side to export files into a package that was delivered to the client; the client opened that package, used the application offline, then exported its data into a package that was sent back to the server. The analogy is playing chess by physical mail. We got email games and message-board software that worked offline.
The MVC pattern was very fundamental: it makes sense to separate UI from logic, like having command-line programs or daemons do background tasks and a GUI program on the frontend.
After GUIs, we got HTML and server-rendered views, which were basically the same as terminal software but with graphics. "Rich" web apps came later, running Java/JavaScript/Flash/ActiveX on the client side. They were made for the very same purpose as the first software for microcomputers: to minimize latency. Of course, they allowed mixing, so that some parts were server-rendered and some parts worked client-side.
"Offline" web apps are also made mostly for the same purpose as offline applications: to save costs. They save costs on servers, but they also lower latency, and the application is more robust because it doesn't need a network connection all the time.
And how about building a backend and making services? That is also a very ancient pattern. "Services" existed before computers: "architects" planned how organizations of humans performed tasks, and the idea of having services in computers is the very fundamental concept of automating human tasks. What is the purpose of services in Rails or some PHP framework? To save costs. Of course, we got a de facto standard for writing services in Unix that has existed for decades. The idea of scripting languages is that they run in the same process as the HTTP server to keep requests fast even if running the script is slow, because starting a new process for every request was the bottleneck. Of course, writing "bigger" tasks or scheduled tasks should go outside the scripting language's scope. Junior programmers just didn't understand how the OS worked, and we saw some interesting scripting.
How about things today?
Nothing has really changed in how architecture should be made:
1. Services are made to automate tasks. Before writing any code, we need to know how the business works. Things should work on pencil and paper first.
2. There is software (or the business itself) that is either accessible (using the current standard) or low latency (with client-side automation, but less accessible).
3. Physical mail, messages, phone numbers to call, etc. were the standard; then we got electronic services using character terminals, and HTML is the current way to create accessible services. Services performed by humans are kind of deprecated, used only in exceptional cases and when humans are needed. Character terminals were switched to HTML.
4. Low-latency software runs code on the client side. This is fundamental when creating an _application_; applications are tools. Latency matters for productivity, but optimizing it is a trade-off against accessibility. A developer should know the accessibility requirements, the latency requirements, and how the application should work without a connection. Latency requirements haven't changed in at least 100,000 years and are well studied. Accessibility is based on the current standard and on how our society works.
5. Offline applications have always been better for server infrastructure, or when there is no always-on network connection. Always-accessible networks started to become common in the late 90s outside of intranets, but don't assume that if the application is used in military, marine, or space settings, for example.
So in my opinion, the character-based UI was kind of the first electronic automation, and server-side rendered HTML was the accessible standard that replaced it. Client-side code depends on requirements. Junior developers fucked up object-oriented programming back in the 90s, and they fucked up web development too later, with dumb patterns, for the very same reason: lack of vision. Seniors didn't have those circles. They avoided them.
It can be summed up in the awful truth that most developers today don't even understand how computers work.
IKR? LOL
@@jofftiquezCYE?
I used JSP pages (Java servlets) to render pages on the server 20 years ago.
I can still see use cases for client side rendering, but in most cases server side is superior.
This is a common pattern in software and elsewhere. The trend shifts from one extreme to the polar opposite, until finally balance is established. Exactly where on the spectrum the answer lies, always depends on the context.
Based on that we will end up with HTMX + web components. Which will be nice.
There’s a reason we switched to rendering on the front-end: monolithic server architecture is still expensive if it’s not self-hosted. And most don’t have the resources to offer a 99.9% uptime SLA on self-hosted.
@@jamess.2491 I don't get it. Are you talking about an offline mode (the ability to work when the server is down)? Because I don't see that, in general, we compute so much on the frontend that it eases the backend's work handling queries.
there is no middle just like american politics
@@jamess.2491 How is this any different if you're using a JSON api, which also has to run on a server somewhere? There are also plenty of services for running your backend, 99% don't need huge resources to start out.
As an old guy who wrote his first Web page and cgi-bin programs in 1995, I'm tickled pink to see the trend back towards doing more stuff on the server. Huzzah!
Lovely thing about HTMX is that you can point the requests at any server that returns HTML, it doesn't care. Really frees up the front end, and means you can use whatever server technology makes sense for your app.
literally the same with any standard format including json
As a format, yes, but you also need to care deeply about the shape of that JSON; that's not freedom. @@BradySie-wo6vf
@@BradySie-wo6vf you can make a reactive website with just static html, json, and a backend? you should share that tech
@@BradySie-wo6vf Except with json and non-hypermedia formats you now also need logic to display that data in a meaningful and usable way, and what actions you can do with that data. Which means you need to know exactly what that endpoint returns, what you can do with that data, and extremely intimate knowledge of what that data represents. And if the endpoint ever decides to change anything your website will probably stop working.
Whereas with HTML you literally just tell the browser to display the stuff you've been given. Not a single line of code on your part, or a single fuck to give about what you've actually received. That's the power of a universal interface and hypermedia.
@@mkwpaul Now that I think about it, I can actually see the simplicity w the frontend just acting as the view layer and cutting out the middleman contract
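To make the hypermedia point above concrete, a minimal sketch (the endpoint path, target ids, and contact fields are all invented): the htmx attributes on the client only say where to fetch and where to put the result, and any server technology can answer with a ready-to-display fragment.

```javascript
// Client side: purely declarative markup. hx-get says where to
// fetch, hx-target/hx-swap say where the returned HTML goes.
const page = `
  <button hx-get="/contacts/1" hx-target="#detail" hx-swap="innerHTML">
    Show contact
  </button>
  <div id="detail"></div>`;

// Server side: any stack works, because the whole contract is
// "return an HTML fragment". No client code interprets the data.
function contactFragment(contact) {
  return `<p>${contact.name} &lt;${contact.email}&gt;</p>`;
}
```

If the server later adds a field to the fragment, the client needs no change at all; the browser just displays the new markup.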
Let us invent PHP at last!
I was already doing that with PHP 😂😂
@@geomorillo Native PHP and JavaScript are the only thing I use. I haven't touched a framework since I left the corporate world. I never liked "framework of the week" fads anyway.
htmx is jQuery on steroids, after all. When I was using jQuery, for example to implement a like button, I was returning a piece of HTML as the result: jQuery made the call, PHP generated the HTML, and then jQuery replaced the existing HTML piece with the one generated by the server. It was a simple AJAX call. Certainly htmx does it better, but the principle is the same. JSON was used internally, for server-to-server calls; it was never sent to the client. But you know, I skipped many revolutions, and I predicted many failures. I never used an ORM, always thought Scrum was shit, most unit tests are a waste of time, and TDD is rubbish. People are starting to get it now, 20 years later.
Livewire!@@geomorillo
Exactly. We did this in the early 2000s. It was a horrible clunky user experience even with Ajax. The problem is working with data on the client is orders of magnitude more nimble and faster than generating the HTML on the server and passing it to the client. HTTP requests are expensive. I thought we learned this lesson 20 years ago.
I've been a software engineer since 2008. We used to build applications much like how htmx advocates. The most popular pattern was MVC, a pattern the industry largely moved away from because it deeply tied the implementation of the backend to the frontend and vice-versa. For small projects, this approach is fine, but for complex projects or larger teams, this pattern became frustrating as even small changes to the frontend could have implications for how controllers and models (data and logic) were implemented. It was difficult to iterate on the frontend as it meant iterating on the server as a whole.
I would be willing to bet a vast majority of engineers trying out htmx are either as old as me and really liked MVC - a minority of mostly backend engineers I'd imagine, and young people who never experienced the bad old days of MVC. In any case, I think a lot of people are going to find out. It's going to be interesting to watch them run into issues similar to the ones we did and figure out solutions. These issues are what drove us to use json communication and to separate our view logic from the server in the first place.
Amen to that!!! I never want to go back to MVC such a nightmare
AMENNNNN, oh wow... this HTMX hype train this year is silly funny.
@@JohnHoliver Actually, I'm not on the HTMX hype train, I was only saying MVC is a horrible pattern for modern web app development, so many pitfalls. I just wanted to clear the record.
@@sasha777tube oh, I wasn't answering u. My amen was agreeing with you all.
you can have your HTML ssr builder call the same JSON endpoint though for data. So you still stay secure and you don't tie your templates to your backend.
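One hedged sketch of that idea, with a made-up endpoint and field names: the HTML-rendering layer calls the same JSON API the mobile app uses, and only the last step differs.

```javascript
// Pure templating step: JSON in, HTML fragment out.
function userCard(user) {
  return `<div class="card"><h3>${user.name}</h3><p>${user.bio}</p></div>`;
}

// In an SSR handler (pseudo-wiring, not a specific framework's API):
//   const user = await fetch("https://api.example.com/user/1").then(r => r.json());
//   res.send(userCard(user)); // web clients get HTML
// Mobile clients call the same /user/1 endpoint and consume the JSON directly.
```

This keeps the templates decoupled from the backend while still serving hypermedia to the browser.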
Can't wait for the next trend to be just using the canvas element to stream the render directly from the server.
I tried that with SVG but in-place editing of text killed it.
Yes, please do a HATEOAS video. Also, GraphQL APIs are not going away if you need to offer public API support for backend services. So even if your CRUD site is built totally the "old school" way with htmx, if you offer any backend services it will need APIs for customers. At least now you can have a separation of concerns and use a "securer" public API where it makes sense. In the end, just use the right tools for the right job. There is no one-size-fits-all framework or programming language. Use what works for you, gets the job done, and gives the least amount of headache.
Those enormous JS and JSON payloads were never a hard requirement. They are merely consequences of the so-called “JavaScript ecosystem” which made it way too easy to saddle up my project with thousands of dependencies-of-dependencies I never knew existed. Then somehow it became “best practice” and .. well you know the rest. It was always possible to build a good interactive UX with realtime refreshment of relevant bits. We devs have only ourselves to blame for over complicating that goal.
Video on HATEOAS would be great, waiting for it! Your channel taught me not to hate frontend development :)
I was waiting for this moment! Those who entered the programming field 15-20 years ago never understood the shit show that was happening on the front-end side of things.
What is different? What changes?
Haha, reminds me of kids growing up: they hit a point where they know everything, no discussion, no reasoning, they just go for the shiny new "plant food". Then again, not all that glitters is gold 😊
There was a shit show before that in frontend too. The idea of object-oriented programming was totally misunderstood, and we saw bloated desktop applications.
Writing the frontend as a native application using RPC/HTTP requests was OK if it was not intended to be accessible. Writing the frontend using server-rendered HTML was OK, writing the frontend in server-rendered ASCII was OK, and writing an application in React was also OK if it was not intended to be as accessible as server-rendered HTML.
There was of course a middle ground: accessible software using server-rendered HTML, but with some frontend components running client-side where necessary. That started with Java applets/ActiveX, but jQuery was the library that truly started the JavaScript revolution. And they served the same purpose: writing a rich component where one was necessary.
Single-page web applications truly started with Angular, but that was too slow for mobile. From an architecture point of view, they were no different from native desktop applications, except they didn't need installation, being standards-based.
React was kind of the first tool for making a client application for both desktop and mobile without installation, using standards-based technology, so we didn't need separate native mobile apps if the browser did the job.
So, I see that jQuery AJAX components replaced Java applets and ActiveX components, and HTMX replaces jQuery. And single-page web applications and WebAssembly replaced native applications.
Also, I really don't understand those who write server-side HTML -> rewrite to Angular/React -> rewrite to htmx. That is just crazy. Those developers don't have a vision of UI architecture.
I love your youthful enthusiasm, but you should probably go and learn some more about web development history, as a comment like this will make you look a little silly in front of more experienced colleagues. There has been an ebb and flow between client-side and server-side rendering, a moving arrow between an emphasis on one or the other, ever since the emergence of client-side scripting as a potential alternative to server-side processing. I'm old enough to remember completely (and I mean completely) static sites, and the early scripting days of coding the back end in cgi-bin and the client-side code *twice*, in both JS and VBScript, to get the best possible browser coverage. The pendulum continues to move, and will continue to do so as the technologies and standards change. The points the pendulum moves between shift based on server processing capability, client processing capability, and average connection speeds between the two, and despite all the grumbling, all of us developers absolutely love the constant change, because it means a steady stream of interesting technologies to build applications with!
@@chillyspoon
I never got on that pendulum. For me it was: the server handles rendering/statics, server rendering with some client-side rendered components where it makes sense, or rendering completely client-side. That is an architectural choice, and it doesn't change that much.
Also, JavaScript was mostly useless before 2009 or so. It was often disabled in browsers to avoid security issues. Early scripting was a waste of time on public web sites.
I got my first GraphQL job a couple of weeks ago. I successfully told them it's not a good idea for ongoing security: a simple mistake by a future developer exposes your internals to the world. So we are doing things properly now :)
I think there is value in both. I remember the old days when everything was done on the server side. I found it to be a pain to develop compared to desktop applications. One of the great things that happened with the SPA frameworks is they made web development feel like desktop development. That is a major reason why these frameworks were adopted. I do not think that is something that we need to drop, but we do need to recognize in a web environment that may not always be the best solution. That is where something like HTMX comes into play. From what I have seen HTMX looks like a huge improvement over the old way of doing things on the server side.
So I think the right approach would be to start your development off with a focus on the client side, and then when appropriate you add HTMX, or something similar, into the mix.
The problem with returning HTML (and not JSON) from your server is that you'd probably also like to serve mobile apps and don't want to duplicate your API
You can have a central piece of software for your backend that connects to adaptors via gRPC or whatever, and those are the ones that send the HTMX/parse your components, or in the case of mobile go from gRPC to an HTTP API.
I have been doing that even outside of HTMX, and it makes sense, since that way you can have this be a service or a CLI or whatever on top of a main core.
Model View Controller exists for a reason
Most businesses don't have mobile apps. People prefer to go to the web than download apps for most things except social media and banking
I would argue that most apps are useless shit that nobody really wants to install. There is a decline in mobile apps for that reason.
We don't want 10 million apps for minimal stuff. Just the stuff that matters gets used as an app.
Design your server-side software properly and it's not much extra work to expose an API for an app *if necessary*. Most applications never get to that stage anyway.
Most mobile apps are just a web view anyway. I agree that some endpoints might still need to serve JSON, but not anything close to the majority.
I've been having a lot of fun working with simpler SSG's like astro and using htmx + partials. Something about it just feels so right compared to the SPA approach. It's not a magic bullet for everyone, but it is for me and it might be for a lot of others too. Then throw on Tailwind and you've got rocket fuel 😁😁
I thought the idea behind single page applications was instead of sending the whole page including all the html and css back and forth, we just need to send the data... what happened to that? Why are we talking about giant payloads?
Because someone needed a fictional scenario to push their library? Or they just didn't know what they were doing in the first place
Because this video is a sham and HTMX will never be used by anybody in production. Stick to MERN/MEAN/JAMStack or whatever and avoid this nonsense.
@@kitsunetantei That fictional scenario has existed since the creation of HTML, and JS devs just messed everything up to avoid learning how it works.
Man I love memeing on PHP and Laravel but when I see how messy the JS ecosystem is, I'm glad we use JS for the frontend only.
PHP is way messier than JS, both in terms of the actual language and the ecosystem as well, what are you on about lol. Even Laravel relies too much on magic like RoR did which is why everyone is moving away from it.
for the frontend only? this is not 2007...
@@okie9025 what is everyone moving to NextJs lol
Basically Conway's law in effect: the software architecture reflects the communication structure. PHP was all the rage back then for building server-side apps. It also assumes the server is secure, so it basically moves the security model to the backend. We are still in a world of "Frontend" versus "Backend" versus "Database Administrator", and that's where Conway's law comes into the picture, while security cuts across the stack.
I mean, no one is forcing anyone to send "massive" json, often people are sending data that is not needed for the current operations - graphql _is_ a way to slice that, as is jsonapi for that matter.
Shhh! Let them sell the new toy in town
I am also wondering: since we are moving towards IoT, will IoT devices need HTMX? Or do they just need to send and consume data as needed? Not all software is web apps!
This is a subject I think a lot about, and honestly no matter where you put your code, if you have a lot of features or data it’s going to suck especially if you are trying to rush. Personally I prefer server side code because there is less state to muddle through
Teaching people about HATEOAS and actually-RESTful (as opposed to FIOHTTP) things is always a good time.
I never stopped doing this. I never "got" into the idea of sending back JSON, then faffing about with it, then putting it on the screen. It always seemed a roundabout, painful way to do something. HTMX just filled in the gaps I had, and now I make some great fast sites with very little additional JS. Funny that the world always comes back around. "We told you but you didn't listen"
For big companies it won't make sense to "go back". If you have a website and mobile app, plus several different business/customer service tools, the data can come from the same APIs with different frontends for different purposes. Meaning there's one backend team serving up microservices for many frontend teams. With React & React Native and other stacks it makes sense, as code and developers can be shared.
No chance my FTSE100 company is adopting HTMX any time soon. It's nice for my side projects though.
@@gaiusjcaesar09 Oh, I would never expect huge companies to throw it all away for HTMX, that is insanity. HTMX will never be something for established code bases.
@@gaiusjcaesar09 Web developers, for some reason, seem to think that the browser is the only frontend that exists. If that were true, of course, mapping JSON to HTML would seem like redundant work.
You could have a backend that takes that JSON and renders it to html. Your html frontend then talks to the html backend.
A company will not drop a JS framework in favor of htmx, but for a new product you could have a back-end team working on a single API, which is used by both apps (web and mobile), and a couple of people specifically using these APIs to generate HTML pages and partials for the web app. The same company may work on different products. @@gaiusjcaesar09
I think the better solution is upgrading frontend devs to fullstack devs and allow them to customize the api/db schemas to meet their needs.
Next.js is currently not a great choice (slow HMR, buggy caching, major breaking changes, bad SSR performance).
Static sites with Astro combined with SPAs using Solid.js/Svelte/React for dynamic parts using something like TRPC/react-query is a much better solution.
Laravel? 😬
at that point, rails and laravel are just... better. In every way.
But the problem with Astro is that with every page reload or navigation to another page you need to redownload the entire UI framework you have chosen even if you have one single dynamic button that needs to ship JS.
I feel Astro is great to use as its intended purpose: pure MPA with maybe a particular page serving as an entry point for a SPA.
Like for Astro, the only thing people should learn in 2024.
if you're set on using only JS, Astro is probably the best. that being said, you should consider not using JS on the server
I converted my Django based static website into a reactive SPA using HTMX. It was doable in less than a day, had I known HTMX a little bit better. Still, it only took me a couple of days of tinkering and it works beautifully
"converted it into a reactive SPA using HTMX" sounds like a hatecrime
Being able to delegate a maximum of logic to the client is a clear advantage; separation of concerns is more powerful than the monolith of SSR. You basically download your site from a CDN and solely use a well-made API to have a working client
I'm a middleware developer. So, my butter and bread is making APIs. I'm starting to see the appeal of server components though. I've dealt with teams asking for a field to be renamed before. I've seen a request asking to move some of their app logic into the API to take care of "tech debt" (they didn't want their sister app to reimplement part of the contract they defined for themselves -_-). Great video Theo! It connected a lot of dots for me.
What dots? What did you get from this video?
@@buza_me I'm trying to learn a front-end framework this year. I haven't fully understood server components because that's what an API is for my world. However, I see now that server components are the front-end's way of connecting directly to a datasource with the flexibility of defining their own contract.
@@josephgonzalez9342 I see. It is not exactly that, server components are just rendered on a server, serialized to binary and sent to a browser. That's (very very) basically an HTML template like Handlebars or whatever else, with a new approach to serialization and delivery, as I get it. Call the DB directly from your HTML template, no need for API's etc. That is what Theo is preaching. It may be a working approach for a very small team idk, but to me it smells. You can define actual API routes in Next, and call them from your templates, or from server actions. What Theo is preaching is that user-facing API servers should not exist. The only way for a user to get anything from your app should be through HTML, no fetches from the browser during user session etc. That is what I got from this video. Maybe I misinterpreted something, but yeah.
History is a pendulum
rather developer hype cycle is a hamster wheel
20 year whiplash ... people in awe of "static websites" ... ... "wow router" .. what year is this
@susiebaka3388 Yeah, I have my own small software company, all this time we've been doing basically PHP + Javascript for cute rendering on the side and we somehow survived. Now I'm listening I've missed the whole full circle of an epoch 😀
And that is all you need, for the vast majority of websites. Certainly if you are creating Figma or Canvas, then it is a different story. @@guestimator121
@@guestimator121 yeah thats so funny man hahahaha. glad to hear you've stuck it through! tbf the hype is that it's "on the edge" now so ideally time to first byte is lower. i think the modern hypetrain frontend stack is meant to get react devs to the edge faster, in more ways than one in if you catch my drift
Calling it the "htmx team" is very generous, it's literally just Carson
over 300 have contributed though.
Doesn't matter. Anyone who makes good productive software is doing the job of a whole team if it is anything more than a couple of files or multiple technology stacks.
I'd really like to see a real world example of a full stack complex application built with htmx. I've seen a lot of hype around this but yet to see a working example out there, in the public. I'm not hating on this, just need to understand the practicality of this new tech.
most web apps can be migrated to htmx with the entirety of their features intact. there are numerous examples of using htmx for crud apps out there, adjust ur searches.
The best thing about htmx is that it is not really a new tech/new concept. It has been re-branded, but the concept that it is built around has been with us almost since the beginning of WWW. Just a few other tech examples built on the same idea are AJAX with PHP, JSP, JSF or ... and there are many many complex enterprise applications around the world built on top of these technologies and they can scale very well if designed with modules in mind.
HTMX sucks (compared to using React) for complex web app state, because it does not have HMR (hot module reload).
You lose all your frontend state on a reload -> so you have to click/open the state you want again (if it's not saved in the URL). Do that 100x a day and it's 10x easier to switch to Vite + React/Svelte/Solid.js for proper HMR.
@@davidsiewert8649 you can have HMR with HTMX too. Whether or not you get hot reloading, or how easy it is to configure, doesn't depend on HTMX itself but on the technology you use to render the htmx pages, because that's where the state is meant to be managed. If hot reloading is important to you, you can choose a language and framework that supports it, and that's the best part: you can choose your language and you are not bound to JS.
Lol
I'm not sure, running blindly into server components and htmx is the solution. For very large and complex apps this is totally legit. But the majority of web apps are totally fine with API in my opinion.
The problem is that using 'just the API' needs much more knowledge than using HTMX. Plus you can use whatever you want for the backend.
@@Fiercesoulking how so? At the end of the day HTMX is still just using an API, it is just the API is returning HTML instead of JSON.
@@evancombs5159 Yes, but the client is dumb. You can change the response and that's all. No problems with what data the API returns.
No need to manage state and side effects. That gets messy really quick.
It is just a lot simpler. You write simple templates, return the content, and that's all.
if you are only a backend dev it is @@evancombs5159
@@Fiercesoulking There are plenty of react.js developers out there, don't worry about that
Next.js-like codegen stuff is a third solution for this, JS-only for now though. (You mix backend procedures with frontend code, and it gets split and extracted to RPC automatically.)
In theory there is also multi-target Kotlin, which would be able to do JVM backend and native/JS frontend file splitting.
In the future it's reasonable to expect the ability to mix different imperative language calls in a single file, at least for JS, Go and Python.
This is so funny. I left frontend dev in like 2011, now I'm back, and I feel like I haven't missed a beat coz returning html from the server is popular again! Woot!
Aren't server components just api churn in UI disguise though? The backend still has to implement screen number of "endpoints", granted as a full stack developer and for just web clients it's great.
it's easy to rag on the client-side solution because its flaw is observable at surface level. Tbh, I agree with the approach htmx takes for certain scenarios, but relying on the server for every little interaction seems very ideological and downright insane.
With RSC the UI is tied to the data, you always need to modify both.
@@3_smh_3 That's the thing, you don't have to rely on it for everything.
@@jameshickman5401 For most applications you only have a few places where heavy interaction happens. It is perfectly valid to use Solid, Svelte, Preact or even React for those cases. As you said, there's no need for everything to use them.
Incorrect. The API churn effects are massively amplified as a result of having to perform complex state management routines to ensure consistent changes, which eventually fail to deliver due to requirement changes on the shape of JSON objects, fat APIs, and API responsibility expanding from just a program's contract to an incomplete state management validator for the frontend.
This doesn't even account for protocols like HTTP/1.1 and HTTP/2, which magnify the problem of fat payloads with large responses and the round-trip request/responses needed to fulfill the frontend devs' and clients' needs, increasing network latency at the end user's expense. I haven't even addressed DB issues in terms of distribution of read and write transactions. This is mainly a system architecture problem, and if you as a dev are not addressing architecture approaches that facilitate the design of your backend services, DB and frontend systems, then you are just transferring inefficiencies to end users.
We got clickbaited into watching more server components again and I'm loving it
So this is just to say the UI team should not be the ones with the authority or flexibility to change the queries, but must make continuous requests to modify the API via the backend team (the original complaint from the UI team in the article), such that the tuning is done via a secure proxy.
GraphQL is the solution to enable UI define the query, eliminating this pipeline overhead, but comes with the requisite of needing to define access and authorization on the field level of the schema definitions
I think it comes down to keeping your data and code close together. If your UI is heavily dependent on database queries, run it on the server. If your UI is highly interactive and mostly needs session state (that maybe then gets persisted to a backend periodically, aka excalidraw, tradeview), rendering on the client makes more sense. I think the flexibility React Server Components offer really gives devs the tools to satisfy both these types of situations. Sure, it's not a perfect solution, but nothing is, and there will be rough edges no matter what tech you go with.
Great video. Makes me feel like my instinct was right, first when I kept developing server-side, and then when I decided to start studying htmx.
I've been playing around with htmx for a few evenings so far. I generally like the direction it's heading in. A still-open question for me is how it handles e2es, CSS tooling, and the like. In my preliminary search it all seems possible; I just haven't yet formed an opinion on whether I like the available approaches as much as current front-end tooling (allowing for a new-to-me skills issue).
Unlike React, HTMX really is just a library (and not a framework). This means that you can make any other decisions you like for the rest of the stack. You can combine it with anything. For example, I've used it with Astro. Nothing stopped me.
@@DryBones111 react is a lib & u can combine it with Astro or even vue if u wanted
@@DryBones111 O.o Isn't that overkill?
Depends on your use case. What if you need offline mode, optimistic UI updates, or multi-user conflict resolution? If the whole set of data that's visible to a client is relatively small, you might be better off synchronizing a complete data bundle with delta updates. Solves the same security and API churn problems, but provides instant cold starts, offline access, and the appearance of zero latency for interactions.
This is more like what multiplayer video games do. A big chunk of devs out there could stand to think about their networked apps a little more like "How would I build this if it was a realtime game?"
Either way, SSR or data replication are better options than a broad API, especially one that's basically a generic data-access layer like GraphQL or OData.
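The delta-sync idea above can be sketched as a small reducer; the `upsert`/`remove` delta shape here is an assumption for illustration, not any particular protocol:

```javascript
// Keep a full local bundle keyed by id and apply small server patches,
// the way a game client applies incremental state updates.
function applyDelta(bundle, delta) {
  const next = { ...bundle };
  // Insert or overwrite changed records.
  for (const [id, value] of Object.entries(delta.upsert || {})) next[id] = value;
  // Drop records the server says are gone.
  for (const id of delta.remove || []) delete next[id];
  return next;
}
```

The client renders entirely from `bundle`, so interactions feel zero-latency and keep working offline; the network only carries small deltas.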
Looking forward to whenever server components can start being interactive again
one could argue that blazor server is exactly that - interactivity through web sockets.
@@modernkennnern But why? Does it make sense to use sockets to send small changes?
Do you validate forms on the server?
There is logic that makes sense on the server and logic that makes sense on the client. It just needs to be simple and efficient to move the logic and communicate between both. Which we still have not achieved.
@@EduardsRuzga > do you validate forms on server?
You could; you have to do it anyways for security reasons, so why write the same logic twice? (client-side for UX and server-side for security)
@@modernkennnern Yeah, I guess I picked a slightly wrong example. I was thinking of immediate responses as the user types; it seems like a complete waste to go to the server and back for that. In the case of forms you're right, you do need to validate on the server.
But you do not need to write it twice. That's the point of server components: you share the code. Make it easy to move the code between server and client and you can do better.
Moving all code to the server or all code to the client for simplicity's sake is a false choice. Both are worse for costs and product quality.
The real issue is that moving code on a whim between server and client is still challenging.
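A minimal sketch of the "write it once, run it on both sides" idea from this thread; the field names and rules are hypothetical:

```javascript
// One validation function, imported by both the browser bundle (instant
// feedback as the user types) and the server (because clients can't be
// trusted). Define the rule once instead of twice.
function validateSignup(form) {
  const errors = {};
  if (!/^[^@\s]+@[^@\s]+$/.test(form.email || "")) errors.email = "invalid email";
  if ((form.password || "").length < 8) errors.password = "too short";
  return errors; // empty object means the form is valid
}
```

The client runs it on every keystroke for UX; the server runs the same function on submit for security, so the two can never disagree.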
The lesson from more than half a century ago (on Multics) is that the API is the real user interface. The client-side app is more like a skin. The big concept is factoring the problem to ensure the API is the security surface. The additional benefit is that others building on the API can be your friends instead of being seen as attackers. This should be taught in a good CS curriculum. Should be ...
I have not worked with it, but HTMX sounds like PHP making a comeback. It does not solve access control issues, it just moves them to the server side, and now you have to generate the right HTML for each user rather than JSON.
No, it's totally unrelated. React looks more like PHP to me, because of stuffing code and CSS into templates and even business logic in the weirdest places.
And if you mean 'server-side rendering', then sorry, but you don't understand how HTML works.
For me, a problem is the lack of fullstack developers. If a developer needs an additional endpoint, they should be able to make it themselves without asking anyone else.
Also, I see that many developers don't distinguish between internal and external APIs. An external API is for people outside your company to integrate with your app, and you don't know what they will do with it. Internal is when multiple parts of the same app communicate with each other (like the frontend calling the backend API), so you know exactly what it is used for and what data is needed.
In projects I do by myself, I typically have a codebase divided into modules (bigger sets of features), and each module contains code for both the backend and the frontend. To most people that probably sounds like heresy, but it is very practical in most (not all) projects. I also made my own templating engine, so I can have the same view rendered both server-side and client-side.
Maybe someone can help me understand this... Isn't the View component of MVC literally what they were referring to? Like the intermediate layer between the API and the HTML that is eventually served to the client?
Yes. MVC is slow; it re-renders at the whole-UI level. This works more at the element level.
@@sergenalishiwa9097 Thank you!
depends which MVC you mean, because it has a totally different meaning for frontend devs.
@@alexd7466Tell me what it means to Front-end devs. Thanks :)
I feel like reactive frameworks have lost sight of their initial values: deliver a responsive and reactive experience in a *simple* way. I think the downfall started when people became so obsessed with putting compute time on the client side that they were willing to forgo the simplicity.
You think that because you don't know what the point of them was. It wasn't to make frontend web development simpler.
The point of these frameworks was to provide a way of rendering views clientside and enable the separation of presentation logic from the backend - to move the V in MVC to the browser. These frameworks were invented because we had a lot of trouble with separation of concerns back when MVC was king. Trust me, I lived through it. MVC was the only game in town when I started, and I don't look back fondly on those days.
That said, this is a common misunderstanding, and I think it stems from how these frontend frameworks marketed themselves. When "easier" and "simpler" were thrown around, it was once we got into reactivity: "simpler" and "easier" were in comparison to earlier attempts to move views to the client, such as frameworks like Backbone. Not to say they are easier than serving HTML snippets from the server.
This is fine, as long as you don't need to go with a multi-client architecture.
If you use the same API for web and iOS/Android apps, you can't just deliver rendered HTML.
this is all cool for web development but what about mobile development? you need either rest or graphql apis for that
If you aren't using io functionality that isn't built into the browser, arguably it should be a web app not a mobile app anyways. But there are multiple mobile development options that utilize html.
You will still have your old endpoints for this.
There are some obvious pros and cons here. SSR makes sense if the needs of the UI are not super intricate or interactive, the additional requests and work on the server side are not an issue for your backend infra, the SSR approach or framework lets you write clean and clear markup rather than ad-libbed spaghetti, and you don't have any requirement for that backend service to be a data API rather than an HTML API. There is your decision. Personally I'm a big believer in the front/back-end bifurcation, like every other form of software (imagine if your game server or your mail server etc. was in the business of sending UI), and the idea of HTML APIs makes me want to gag. But I also understand the pushback against the overly complex, constantly changing front-end landscape nightmare (ahem, React) and the desire and willingness to seek alternatives.
Yes, in-depth HATEOAS video please!! 🙏
The problem with where front-end APIs ended up, and the move to GraphQL and the complexities on top of it, wasn't the move to GraphQL itself; it started before that. The APIs churned because they were allowed to diverge per page in the first place. So now, instead of having authenticated and authorized users with roles, accessing parts of data models based on those permissions (which *need* to be there at any/every level, regardless), you have per-page models that drive every subview of every page... I've seen them get as bad as requiring a server round trip to change sort order, or to remove items from a filter... which just guarantees an API rewrite as soon as the design team refreshes a page, or the marketing team wants to add a new banner CTA.
The failure there is equivalent to redownloading game updates through Steam every time you make a settings menu change.
I've never had a problem creating performant user interfaces with general-purpose APIs. You can add sorting and filtering parameters to an API. You can make APIs that selectively denormalize. And you secure each endpoint by role.
Thinking you need to create APIs that generate data in exactly the format of the UI is just a premature optimization. Client side JS is fast. You can radically transform 1000 rows of server data in a few milliseconds, and should do this, rather than creating yet another endpoint.
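For illustration, a minimal sketch of the kind of client-side transform described above; the `Row` shape and the numbers are made up:

```typescript
// Hypothetical row shape; any fetched JSON array works the same way.
type Row = { name: string; price: number; inStock: boolean };

// Pure, client-side transform: filter then sort, no extra endpoint needed.
function viewRows(rows: Row[], maxPrice: number): Row[] {
  return rows
    .filter((r) => r.inStock && r.price <= maxPrice)
    .sort((a, b) => a.price - b.price);
}

// Even for ~1000 rows this runs in well under a millisecond on modern hardware.
const rows: Row[] = Array.from({ length: 1000 }, (_, i) => ({
  name: `item-${i}`,
  price: i % 100,
  inStock: i % 2 === 0,
}));
const visible = viewRows(rows, 10);
```

Changing the sort order or the filter here is a pure function call, not a new endpoint.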
Returning html is only consumable by browsers. JSON payloads are versatile.
Each has their limitations/issues/applications. That’s our job, to strategize and architect without using one approach as be all end all just because there are pain points.
Curious about HATEOAS, so if you feel like making a video about it, I'll definitely watch it!
Ever heard of Rails (and Turbo)?
The issue with HATEOAS is the same as with every other attempt to use your backend as the single source of truth for every piece of state: it goes against physics at its very core. Lots of tiny details are frontend app state that you don't want to communicate to your backend because it's inefficient, and even worse, if the communication breaks down (think CAP theorem: you are building a distributed app/db), then if you expect consistency, you will be disappointed.
Second, and even worse, the customer (the person using your frontend) is part of your system. This is inevitable, because otherwise you haven't really built a coherent narrative of conveying information. So you have three different agents: the BE, the FE, and the human. If only one of them is expected to dictate state to the other two, that guarantees suffering and bad UX.
Hasura has solved the GraphQL security problem. It can control access down to a single field in a database table.
You have no idea how much I had to scroll to find this comment.
IMHO Hasura is THE easy-to-use solution for all your API needs. And it feels like nobody here knows its existence.
I think every large enterprise software system will sooner or later have to create a flexible query language that is accessible to clients, because they either need to provide an interface to third-party software or need a highly user-configurable UI: dashboards, generated reports, complex search filters, etc.
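As a rough sketch of what such a client-accessible query language might look like (the field names and the tiny grammar are entirely hypothetical), the server can accept a restricted filter object and validate it against a whitelist rather than exposing anything like raw SQL:

```typescript
// Hypothetical, minimal "query language": a whitelist-validated filter object.
type Filter = { field: string; op: "eq" | "lt" | "gt"; value: string | number };

// Only these fields may ever be queried by clients (illustrative names).
const ALLOWED_FIELDS = new Set(["status", "createdAt", "total"]);

// Validate client-supplied filters before they ever touch the data layer.
function validateFilters(filters: Filter[]): Filter[] {
  for (const f of filters) {
    if (!ALLOWED_FIELDS.has(f.field)) {
      throw new Error(`field not queryable: ${f.field}`);
    }
  }
  return filters;
}
```

The validated structure can then be translated into whatever the data layer speaks, which keeps the flexibility without handing clients the schema.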
But even when you aren't, I'm not convinced HTML is a good client-server protocol. If the goal is to provide a stable backend API and then have the frontend team write a frontend API on top of it, you can just as easily do it with JSON or Protobufs or whatever your favorite format is. My problem with sending HTML (outside of live-server solutions, which have other problems) is that I lose all the flexibility and ease that comes with functional reactive programming. ui = f(state), instead of directly manipulating the DOM, was a huge step forward, and under no circumstances do I want to give that up.
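The ui = f(state) idea that comment refers to can be sketched in a few lines. The counter state is hypothetical, and real frameworks diff the output of f(state) against the DOM rather than returning strings:

```typescript
// ui = f(state): the view is a pure function of application state.
type State = { count: number };

function view(state: State): string {
  return `<button>Clicked ${state.count} times</button>`;
}

// State changes produce a new state; the view is simply re-derived,
// with no direct DOM manipulation in application code.
let state: State = { count: 0 };
state = { ...state, count: state.count + 1 };
const html = view(state);
```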
What about mobile apps? It’s great if the only clients you have are web apps. Love it. But when you also must support other non-HTML clients, the calculus changes somewhat, no?
Wait a minute, there are actual real and using their brain cells developers and not just bot-repeaters and fan boys in here? Mind blown!
htmx sends an "HX-Request: true" header on each htmx request; you can check that and other headers if needed to determine whether to return JSON instead, but IMO at that point you might as well just do all JSON, or even better, Protobuf.
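As a sketch, the content negotiation that comment describes can be a pure function of the header value. The `HX-Request: true` header is what htmx actually sends; the `user` shape here is made up:

```typescript
type Payload = { contentType: string; body: string };

// Decide the representation from htmx's HX-Request header (Node lowercases
// header names, so in practice you'd read req.headers["hx-request"]).
function render(user: { name: string }, hxRequest: string | undefined): Payload {
  if (hxRequest === "true") {
    // htmx caller: return an HTML fragment it can swap into the page.
    return { contentType: "text/html", body: `<span class="user">${user.name}</span>` };
  }
  // Everyone else: plain JSON.
  return { contentType: "application/json", body: JSON.stringify(user) };
}
```

Wiring this into an HTTP handler is then just reading the header and setting Content-Type accordingly.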
I would love to see you talk more about HATEOAS
Regarding HTMX, have you seen the comments from a video called: "the case against htmx" by Mark Jivko
About time devs are warned about using HTMX on large projects.
"Endpoints that serve HTML"? LOL, we did this in the early 2000s. It was called server-side HTML processing, or in other words PHP/ASP.NET/JRE, etc. The problem with this model is that HTTP requests are costly and much of the front-end functionality relies on cached data. You get a list of 20 JSON objects from your endpoint and you apply filtering to those objects on the client side in any way you wish. Trying to do this on the server will result in a ton more HTTP requests. This was the whole reason we moved dynamic HTML generation to the client.
The next thing they will "invent" is a full-stack framework for Go. Call it Go on Rails and market it as the newest and best thing around...
I feel like we spent a bunch of time moving away from this, so odd to be coming back.
It's easy to get caught abstracting when it comes to software development, it's important to understand WHEN you should.
7:00 With Server Components, now your frontend devs are writing SQL directly to the database. Which, ironically, makes them do the job of the backend dev team. They become full stack devs. If you want that, that's cool. But if you have a separate backend team who are the experts in everything backend, like writing and optimizing SQL queries, then it might not be what you want.
Server Components don't mean separation of concerns isn't a thing anymore. With an SPA, your backend probably already has layers of abstraction for controllers, services, repositories, entities, etc. (substitute any synonyms you like). The vast majority of that doesn't change. The only difference is that instead of your user-facing layer returning json, it uses Server Components to return HTML and JS.
In other words, front-end is still front-end, backend is still backend. Only difference is the API between the two is made up of public functions rather than JSON.
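A rough sketch of that contrast, with entirely illustrative names (this is not the actual RSC API, just the shape of the idea):

```typescript
// Hypothetical names throughout.
type User = { id: number; name: string };

// SPA style: the contract between FE and BE is a JSON endpoint, e.g.
//   GET /api/users/42  ->  { "id": 42, "name": "Ada" }

// Server Component style: the contract is a plain async function that the
// component calls directly, in the same codebase.
async function getUser(id: number): Promise<User> {
  return { id, name: "Ada" }; // stand-in for a real repository/database call
}

async function UserCard(props: { id: number }): Promise<string> {
  const user = await getUser(props.id);
  return `<div class="card">${user.name}</div>`;
}
```

The layering (controller, service, repository) can stay exactly the same; only the boundary's wire format changes.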
Yep, and I don't see how that's more secure; even with fullstack experts, the line is thin in metaframeworks like Next.
@@etiennelevesque6015 You could have front-end devs call some backend functions in RSCs instead of writing SQL directly in RSCs... but haven't you then just shifted the API churn one step backwards? Now they will ask for tailor-made functions, which back-end devs will write (with numerous bespoke SQL queries)...
@@magne6049 It's true churn will never fully go away, but that's true of any system. There's a balancing act here as well: abstractions (which imply some amount of churn) versus bespoke, straightforward solutions (which imply repetition). On the one hand, it takes more work to design and maintain abstractions, but heavy logical repetition makes code harder to change as well and increases the risk of errors and inconsistencies.
Also, there's still value in RSCs because they reduce the amount of abstractions (and therefore the work required to maintain them). Even if you keep a full architectural separation between FE and BE, it's much easier to change a method in a single code base than a web API. No more building entities to map between the API and your domain representation, no more (de)serialization and external input validation, no more backwards compatibility (and the accompanying 3-step process to remove stuff), etc.
To be honest, I personally believe that outside of highly specialized needs (deep technical problems, very high scale, etc.) fullstack devs are better suited to work on the majority of apps out there anyway. My opinion may therefore be biased, but I think RSCs still bring big benefits to teams that prefer to specialize on one side.
@@magne6049 Also, shifting the API churn one step backwards sounds like a great thing to me! That's the basis of the whole shift-left movement (be it applied to security, agility, testing, whatever). The earlier you can hit these kinds of frictions, the better, because you've invested much less time and money into something before realising it's not quite what you need.
A friendly suggestion: stop it with the over-exaggerated faces in the thumbnails
He’ll never stop because “UA-cam algorithm”
It’s why I click
People gravitate towards faces, according to some research. It's more natural for people to recognize other hoomans than graphics alone. Hence it's clickbaity and favors the algorithm, whether we cringe or not.
It's such a double-edged sword for creators. It looks really childish and cringe to regular followers who are looking for this kind of content.
However, they are more likely to get random clicks from people drawn towards the thumbnail because `haha funny face'.
So it's kind of a toss-up between annoying your loyals or losing some new viewers.
@@Never-t4y This is so true, actually. While my own opinion is that I don't mind the face, there will be a percentage of viewers who may or may not be comfortable with it. As for the numbers, I can't really say; I don't know how your YT newsfeed looks, but mine indeed has a lot of faces, so I've probably acclimatized to it. It's actually good to hear a different opinion about it, since the YT algo has probably desensitized me over time.
I feel people are shilling HTMX a little too hard; we have already been through this. We will all go back to basically what PHP development was 10 years ago and end up back at frameworks.
The whole frontend saga just made me realize it takes about 10 years to become a competent senior software engineer.
After server components, I can confidently predict the next big trend is going to be SOAs.
SOAs ?
I really think Tipa is going to take off
@@bossgd100 Service Oriented Architecture.
The problem described in the video, of a client needing access to multiple endpoints/services, won't get solved by running code on the server. Surface issues like access control might be solved easily, but the main issue is API design.
If people keep requesting that you add more endpoints to meet UI needs, then the same rules will apply on the server side. Bloated APIs will be replaced with bloated RPC contracts that are impossible to maintain.
Then people will try solving that issue with SOA/SOAP before rediscovering REST
@@moneymaker7307 lol the end of your message is funny.
Thank you for your answer, I learned something today :)
I would love a longer video on HATEOAS
I started a TypeScript/React -> Clojure/Hiccup/HTMX conversion recently. It took about 30 minutes of spiking HTMX usage to learn 80% of everything I would need on the frontend. Let's see if I can escape the JS ecosystem's grip... I'm using CSS modules with Vite, and I export the CSS class names to a JSON file for Clojure's consumption. Similarly, I export i18next localizations to JSON.
vomited irl processing this comment
The idea of allowing SQL queries to be crafted on the client side makes me queasy.
I think a better solution to the API churn/security issue would be to just ditch JSON in favor of Protobuf. Not only is it faster, but it's easy to implement a system that guarantees your API stays consistent.
Revolution! This is how the web worked in the '90s. It all circles back, but at every turn it gets refined.
Having state on the client is a bad idea, it always has been.
Servers nowadays are sitting at 1% CPU, doing nothing but being proxies to a database. If they use any CPU, it is because they are parsing huge JSON docs to be transformed into other JSON docs. We could use a little of that CPU so apps are fast on the client.
"this is not a new revolution"
It definitely isn't. Entire languages were based on this idea (such as php and jsp). It's not a new revolution, it's an old paradigm that was purposely left behind.
Thanks for the video. If you did the HATEOAS video, I'd love it if it went into the pros and cons.
I'd be interested in a video on HATEOAS!
Honestly I think I’m ready to accept that I’m turning into one of those “but that’s how we always did it and it’s never let us down” type of devs and stick to pages-directory + tRPC. I think I’ll check out the server-first stuff when Remix, Solid, or TanStack release their versions.
I'm with you, but HTMX is not hard to learn, and if you want to keep coding in JS/TS I suggest looking into the Bun + Hono + HTMX stack, which is basically Node + Express + HTMX, but better.
But what if I want an IoT device or a robot to pull the data, and not just display a pretty picture? I think this is one of those cycle things, where in a few years there will be a framework on top of HTMX to fetch JSON and render it on the client side. What makes sense to me is to have an expressive API for the backend, with the frontend also on the server, sending only the final result to the client as HTML; so basically PHP.
I’m interested in hearing more about the HATEOAS thing 🤔
You mean PHP was already doing this?
Exactly. You are mentioning something that will trigger a lot of people 😂
It's so weird that writing server-side code is like a "new" or "scary" thing in JavaScript land.
I'm a PHP dev, so server-side stuff is just the norm. There are no "server components" or talk around creating HTML server-side. It's just the normal way to do things.
Or maybe I just don't know what the PHP community is talking about; I see so much JavaScript commentary.
I still don't see how you can use this to build a webapp, mobile app, and support third-party applications. Our company used to do that with a normal JSON REST API. I can't see supporting that with HTMX or server components.
HTMX is more client-side, but you can use any web server to render SSR (including mixing it with your web server API).
And I can't believe we're tricked into considering salary a "private" field, so that companies can do whatever the hell they want to their employees!
We are going full circle. When I started web development, everything was done on the server, pretty much. Back then we didn't even have jQuery, and AJAX was pretty new.
In my opinion, as someone who writes little tools for myself as a hobby, I would be fine with this as long as there were APIs left available with the intent of tools being developed against them. I COULD use jQuery and parse data fields and be annoyed at it myself, but that ends up creating more work for everyone.
I am pretty new to development overall (only a few years of hobby coding before I got to uni) but I would be disappointed if the route I took to be more interested in coding became more closed off.
The problem is devs with no understanding of projects with more than two developers. A customer expects it to be easy to change the label of a value, because there should be one location to change it, and will not be happy when he has to create 4 tickets because the code is copied 4 times. The customer goes from unhappy to angry and loses trust if such little changes create other bugs, because when the same copied code is changed by different devs it drifts apart and develops its own problems. It also happens that integrated products need a version update. If the interface only gets a different signature you can count yourself lucky, but if it also changes its behavior, and about ten different frontend calls use that method directly, you get bugs that are quite hard to reproduce; and of course everything is coupled and a nightmare to change or bugfix. These are only a few of the problems that will appear.
I don't think APIs will be killed, because we still need JSON to exchange information between servers. What I see here is that APIs can still be used, with their data fed into the HTML that is returned from the server. I guess you get my point.
I like watching, from time to time, videos of web developers running in circles, while I have been using ASP.NET Framework and ASP.NET Core for ages, which have just stuck to server rendering most of the time. You can still use JavaScript in addition; no one is stopping you from that. And you can combine IndexedDB and local storage as well for things that don't need security.
So what about native applications? Are we going to server render everything?
You'll have a back-end API which will send generic JSON to a Next.js server. The Next.js server will pluck the needed data out and narrow it down to some shape suitable for charts, for instance. Then you call that route handler with a server action for the web front-end, and with an API call from your native front-end. Much simpler, isn't it? (hah)
@@buza_me Ah, so this "simpler approach we used in the good old days" I keep hearing about means using 2-3 servers (one for JSON, one for HTMX, one for Next.js) to serve your content instead of one. Great, what a joke. I'll just continue using JSON API backends and SPA frontends as if nothing noteworthy was made, because it wasn't.
HTMX is cool, but the server doing all the work means you need a bigger server ($), when the client could do some of the work for free and very fast. I'm not sold, tbh.
I have argued against "API churn" and the overzealous swerve to front-end for years. I'll keep doing it. I can't be stopped.
I think this movement back to the server is really just a byproduct of the maturing of tools for constructing data-driven UIs across many different client domains.
15 years ago you could not (easily) create a mono-repo of shared UI components across mobile, web, and server, for instance. APIs help facilitate communication across systems that do different things with the results, so as far as s2s APIs are concerned, those aren't going away. But the client-server sharing of UI components and the client-side hydration tooling that is coming out definitely is something crazy new and interesting.
How do you share UI components across mobile, web and server?
@@patiencebear I wonder that too . I mean you can use the same data models not a problem when you use JS or C# same goes for APIs which give you those data. Sending UI components across the internet is always a security as well a performance issues but this is the price you pay for a hyperdynamic webpage and easy UI updates. React Nativ and MAUI you don't see something like this except when you include a browser in your app.
@@Fiercesoulking I thought about something like Ionic or Cordova; there it would be possible, but that's not the same.
The cross-platform experience is rather "Meh" between desktop and mobile, in my experience.
And server components are not possible, AFAIK.
Bravo. Amazing explanation.
Every video that Theo releases makes me realize the benefits of RSC more and more
1. Don't use GraphQL for API churn.
2. Don't just throw out whatever API your front-end designers want on a whim.
3. Go ahead and shift your API churn over to backend spaghetti markup churn, if that's what you like.
Endpoints to display pages, then proxy in any heavy lifting from the backend. Job done.
Honestly though, I've been using this approach since 2018. I wrote a small vanilla DOM-manipulation abstraction script and router that do this in under 3 KB, and I'm no genius by any measure. It just seems like a bit of common sense to me.
I think GraphQL has its place and can benefit everyone if done right. You can, for example, implement authorization based on GraphQL directives.
Server components do remove a lot of the need for this but they can't be used for everything.
Especially with more of the frontend frameworks on mobile moving towards a declarative approach similar to React's (Compose on Android, SwiftUI on iOS, Flutter, React Native), it has its place. And it may still be possible to do something similar to server components there, but it is much harder to do, and if your application is anything other than a JSON pretty-printer, you won't get far with server-driven UI.
A client does not always display HTML. It can be a mobile client, or even some fancy client that displays the whole state of the system with a set of LEDs. One [RESTlike] endpoint that emits JSON (or XML, or another form of raw data) can cover many types of clients. An endpoint that emits an HTML layout can effectively be used only to display HTML. What's the point?
@@TapetBart Isn't plain PHP enough to serve just HTML? Or do we have to use HTMX at first, since we don't need app support; then add GraphQL, since we decided to support some apps; then rework once again and add REST, since we have to support some external integration, and so on and so forth... Why not just use an approach that _works_ well and _scales_ well?
@@TapetBart I'm not a fan of PHP either, but yes, it worked very well for serving HTML.
HATEOAS sounds autological - I hate it. Would love a vid of you defending that pattern.