This Release Makes Me Want To Leave React...
- Published May 12, 2024
- Seriously. Elixir has always been my guilty pleasure and LiveView makes me rethink everything. Aaaaaaaaaa. So cool. Phoenix is dope and makes functional programming in a rails-like env fun
SOURCE
phoenixframework.org/blog/pho...
Check out my Twitch, Twitter, Discord and more at t3.gg
S/O Ph4se0n3 for the awesome edit 🙏
Elixir/Phoenix has been tempting me for years, might have to finally take the Elixir pill.
silly, you drink an elixir, you dont take it in pill form
It's awesome, but sooo different .. I highly recommend Pragmatic Studio's Elixir & OTP or Codegnome's Elixir courses. I was blown away by some features like the omnipresent pattern matching, etc.
😂 very true!
every elixir is a love elixir when you love elixirs :)
i took it and i ended up wanting more and more only to realise there's 5 companies in the whole country using it
It is crazy how Elixir/Erlang is not as popular as it should be.
It is dynamic-typing so it is probably about as popular as it should be. JavaScript, on the other hand, is way more popular than it should be.
JS is forced into popularity.
@@username7763 The dynamic typing isn't nearly as bad as people claim it is. Elixir's pattern matching and type-specific operators prevent those "whole classes of errors" people talk about. They aren't caught at compile time, but your general tests should catch them, that is, tests you would have anyway, NOT type-specific tests (I don't have any of those myself). The whole "let it crash" philosophy makes it more than fine for most systems. All that said, I'm not mad it's getting static typing, but I'm in no rush. I also don't work on HUGE systems, though.
i actually like elixir, sounds great for performance heavy apps, but...
as it "should be"? we got some celestial beings in the yt comments.
elixir has no mature type checking before runtime.
js has multiple issues (amateurishly designed lang, ecosystem rebirths every 8 years) but it has (pseudo) static typing with typescript.
I change the type of a field in my zod types, I get to know about edge cases before pushing, with a proper eslint config.
people choose, man. something better comes along? people switch.
webdev via wasm+some lang has yet to be proven as a good setup to switch out of js.
I write a web app with node+some framework, only learn 1 language for client and server side.
guess what happens when you are 1 year into building a really cool elixir app. oh, you need to learn js to deal with all the edge cases: the frankenstein app begins.
the guy is writing html inside a string for fuck's sake. i don't care if it has good highlighting, remember how nice it was writing jsx with typing? so you get to know if you messed up the name or value of an attribute? gone
idk what else to say man.
@@username7763 bullshit
Loved this video Theo - as someone who hasn't written Elixir or Erlang since 2014 where I built a custom Unity3D Component Serialisation System for Realtime Networking, so happy to see Elixir coverage.
Doing Elixir now for 7 years and it is getting better and better every year. My biggest concern with it is that devs are rare as unicorns. However, once you've assembled the right team, the possibilities with Elixir are endless!
I once had a recruiter from Samsung contact me about an elixir position there!
@@grimm_gen how did they find you?
That's not a big deal since it's quite easy to onboard pretty much any FP fanboy (Haskell, OCaml, even Scala)
Is work in Elixir quite rare? I feel like it's a small niche.
Devs being rare is maybe a pro, because you get less competition.
let me tell you as someone who found elixir a few months ago after severe JS burnout: elixir phoenix is the way and is a beautiful piece of tech. come to this side, it is in fact filled with greener grass
As someone who tried it out of interest a year or two ago, I couldn't make the mix command work even while following a tutorial step by step. Imagine not being able to get a working project after running npx create-next-app
He used to program in Elixir when he was at twitch
now we will make phoenix for gleam and call it thunderbird
Is elixir blazor but not Microsoft?
We've never heard anything like this before
Crazy Good! And such a well written post to be able to capture the complex nuance in short sentences. Thanks.
Using elixir for the last 6 years full time and very happy (moved from ruby). Thanks to the mature core (OTP/BEAM) we have a lot of instruments for live debugging that help with day-to-day debugging and dev. But I want to warn about hot-code reloading (swapping) - it's very hard and almost nobody in the elixir community uses it for release processes. It just increases release process complexity to an unacceptable level. It only sounds great; in reality state management and swapping are very hard. And when we are talking about web apps, where stateless (the HTTP nature) is the natural approach, it's much simpler to go with regular blue/green releases.
However, during development, hot-code reloading (especially on test servers) is a game changer.
I moved from React to Phoenix and LiveView about a year ago. Never going back.
Create a HTML MPA. Rewrite to PHP. Rewrite to Django. Rewrite to Ruby. Rewrite to Next.js. Rewrite to Svelte. Rewrite to Go. Rewrite to Rust. Rewrite to Elixir. We'll never get anything done lol 😂
Because social media developers only care about coding; they really don't care about making a product or solution. Just the fun of making it.
We should have stopped at "create an MPA". This was never hard. All the "requirements" that came after were invented by developers with feelings. I'm half joking, but I'm half serious as well - the large majority of projects would be *just fine* as an MPA with a bit of backend in Whatever. This stuff is as complex as we choose to make it.
What an odd joke/complaint, are you annoyed by variety and innovation?
@@Erandros are you even a developer?
@@professormikeoxlong bruh
absolutely loving elixir and phoenix - feeling much more productive than nextjs
Wow this is incredible I haven’t used phoenix in a long time and I want to go back to it🎉 Amazing
i'm so glad that i'm learning this tech stack for a few months beside my full time job, looking forward to switch to this stack as my full time job
I think this video could do with a follow-up. A lot of people are missing that the BEAM is the real special sauce and how well it complements LiveView. Almost like it's flipped the other way around.
I have moved from Python to Elixir... I won't come back to anything else! Ruby syntax, Erlang ecosystem, beauty!
- rewrite your backend to Ruby on Rails
- rewrite your backend to nodejs
- rewrite your backend to go
- rewrite your backend to rust
- rewrite your backend to elixir
With so much time spent rewriting, it's a real wonder how anything gets done in this fkn industry.
The truth is nobody rewrites except for tech youtubers. The project I work on still uses PHP/backbone/jquery like it did 10 years ago.
A lot of the web still runs on PHP and will for years to come. There are many new projects that still start on PHP, not to mention Java or C#. All this cool new stuff happens only on twitter and youtube.
@@lastrae8129 and noobs
Nobody is rewriting anything lol. This is just the infinite content mill going
Douglas Crockford was right!
The industry needs a whole new alternative to JavaScript itself, not just libraries!
Something to end all those fuckups
Changes of technology
And adopted widely across all browser bases
It's so good! I'm so excited. Thanks Theo, for covering this news!
I can't wait for Elixir to become a fully typed language like TypeScript as fast as possible.
I swear it will be one of the best languages and tech to build APPLICATIONS in general. Web/Mobile/Embedded.......
They have remarkable "base" features no one else truly has
Why does Elixir need static types? Its pattern matching and testing workflow prevent the category of errors you're referring to
19:18 are you referring to The old Turbolinks? Or the current iteration of Turbo under Hotwire? Payloads now operate by either providing a frame tag to replace an existing frame tag (without loading whole template/layout) or specifying a swap of any ID'd DOM element with a small payload via turbo_stream (still without rendering template/layout). I guess the latter supports their argument of requiring additional dev tuning.
Yeah I think Theo is a bit out of date on Rails' frontend solutions but it looks like LiveView blows Turbo out of the water.
LiveView's creator seems to have spent time with React and internalized its advantages whereas I don't think DHH ever gave it any real consideration.
Still I don't think Theo should shit on Turbo and praise HTMX when they're doing fundamentally the same thing.
Making server side cool again? I feel like we've just gone through a loop
Welcome to CS. I'm waiting for the inevitable backlash to cloud providers, and everyone runs back to in-house data centers.
@bruceleeharrison9284 getting away from cloud would be hard these days. But if I was gonna dream up a perfect scenario (imo) I'd say CS devs/engineers create a union that works to protect our rights and experience. But ALSO creates at-cost data centers for union members to utilize.
@@Frostbytedigital It'll have to get a lot worse for developers and engineers to unionize. They're not really the type to. You'd have more luck in specific industries where people have been abused for a long time, like in game development.
@@Frostbytedigital it wouldn't be a reversion to what came before. It would be a new, updated approach that revisits the concept with a modern approach. Probably a way to outright buy capacity in a datacenter such that you "own" the machines. Services would allow web reconfiguration of the setup, some being immediate (since they can be controlled in software) and some having a lead time to physically setup. (e.g. installing a direct network line)
I can see this being so much cheaper than clouds that excess capacity will be bought to ensure little to no lead time for teams requesting hardware. Which will work fine right up until someone decides further excess capacity isn't needed and trims the budget. Then we'll be back to long lead times and cloud will become more appealing again.
Yay, the CompSci pendulum...
The pendulum between client and server has been swinging since the late teletype early dumb-terminal days. We seem to have new names every few years for mostly the same concepts. At least it's a client-vs-server loop... and not recursive... Unless you look at the bare metal/image/package/VM/container progression... that definitely _feels_ recursive.
I feel like that every day I go to work... I miss Elixir so bad
Those saying that PHP already did this don't get it...
PHP already did this
@@DKLHensen You don't get it.
I bet both of you guys are right. In the first half of the video, it's just auto-ajax and auto-reload when the project code changes. (At least for real-time development, PHP Laravel Livewire and Elixir are equivalent; only the final shipped features vary, because browsers are always changing and different web technologies update at different paces against new browser features)
To be fair, the web is a big ecosystem, with a lot of stuff that frameworks and libraries already hide from us... and niche pieces that help here and there. Phoenix is just slow and steady and never uses shortcuts like most javascript developers do, so they think it is magic..... especially those who worked around the facebook era should know that trick, but maybe they never used it, even if they know of it
The innerHTML reminds me of what I built in the pre-js-framework time period, where we used jQuery for compatibility with all browsers. Also, we had to support IE 6.0, where it was easier to update the DOM with innerHTML than do DOM operations, because they were too slow. The only downside is that you also lose the focus of an input field if the HTML containing that field is updated. We did try to reassign the cursor position after an update.
It was much faster in loading the page and also in building the javascript.
just came to make the "so we're back to PHP?" comment without reading up on anything for even 5 minutes
To me I like incredibly thin clients where the server sends HTML and it’s done (an archivable resource); OR, I like thick clients where the user has so much to hand that the app might even run offline or in difficult environments. Even though this is amazing, I have almost no need for something in the middle - an app that is symbiotically tangled with the server and at the whims of my connectivity.
You make a good point about offline; it's one of the reasons I'm not a fan of any form of SSR. If I have a section of an app I want to make offline, going from client side is pretty easy: work out some sync logic / storage and you're done. If all my eggs are in the SSR camp, I'll have a much bigger job on my hands.
Recently picked up Elixir after being in JS for a decade and have to say it feels revolutionary.
This is the best theo. The educator with a great tone and not sounding pompous like in some videos.
I've known LiveView since its inception, but I haven't been using elixir for the last few years, and I really thought it was already 1.0/ready for production a long time ago; really surprised to see that 1.0 was just released now. It shows the care the team takes.
This is still super exciting. I will have to check out a bunch of these technologies. I still think Haskell is the closest syntax to what's needed for modern full stack development. Everything is based off the C imperative syntax, but most of what we do now is functional reactive programming, so it makes sense to have a syntax more optimized for that.
Well, this is definitely the push I needed to check out Elixir/Phoenix. This looks really impressive.
Been using it for 5+ years now. Getting better all the time.
Nice. I don't know how much of this PHP is suited for, but I agree this is presented excellently.
PHP is suited for anything on the server-side. In the end you are just sending some plain html to a client.
Wow this is so interesting, things are getting more and more fun.
Grandpa PHP has that "Told you so" face rn
22:44 I don't know about GraphQL. But if you have multiple services, you don't want them to call the auth server all the time. So you use JWT instead. With that, you just ask the auth server once for its public key, and then you can check every request by verifying the signature of the JWT. If it is valid, you can read the data from the JWT and know which user it is. No need for any additional call.
Btw, with the web sockets solution: either you don't use microservices but a monolithic application - then you don't have multiple services, you just have one monolith that does authentication and all the endpoints - or you have to split the web sockets and basically have 3 separate connections. So you're kind of comparing different things / different architectures with each other.
But what if there is a change in the user permissions? The JWT will be outdated and that may be dangerous, so you need to specify short times to live and re-validate (going all the way to the DB or a centralized auth service). If you're going to do that, why not simply use a traditional server-side in-memory cache? In that case you can use cookies or whatever (that only hold the user id) and check the cache for the permissions. If the cache is short-lived, you are in the same situation as JWT, except that on the specific instance the client is connected to you can edit the cache record directly (considering how load balancers work, in most cases it will be the same server anyway).
@@Robert-zc8hr the privileges rarely change. You can have, for example, 15-minute tokens and a refresh token. If privileges change, you create a new 15-minute token with the changed privileges after the other expires.
The alternative with the cache isn't good, as the auth service needs to be online all the time and each application server has to ask it. So if it is down, everything is down. If you have JWT and the auth server is down for 10 minutes, it's not that big of an issue. Some users can not use the services, but others can. Also you can use stateless logic like CloudFront functions to check JWT tokens.
So we're back to PHP?
Yes, PHP with websockets. Really tho this looks pretty cool.
laravel livewire
@@glebtsoy4139 It is so much more lol. In the first 3 seconds of the video, Theo shows it updating a whole network of servers, their build caches AND pages on clients that are currently connected. Elixir/BEAM/OTP are insanely well designed and battle tested.
@@glebtsoy4139 I'll get on that once php implements it
While I agree that PHP is amazingly well suited to the types of systems it is usually applied to, Elixir/BEAM is five steps ahead of everything in the industry. You shouldn't focus solely on the templating; instead take a wider look at the runtime model, the ability to spawn thousands of processes, to create server networks, use other technologies, have sane throws and sane logging mechanisms, and switch between HTTP APIs, Websockets, RPC, and async APIs without needing half of AWS to achieve it.
17:12 "This is like Qwik, but good and solving real problems" - I really wish you'd elaborate a bit more on this, because to me both libs seem to be solving the same problems, just doing it differently. Both libs identify the need to split into static and dynamic parts, both only send minimal code for interactivity and both do fine-grained updates. What are the "real problems" that Phoenix is solving and why is it "good"?
I believe that a real difference between Qwik and Phoenix is that Qwik has a better user-story when it comes to frontend/browser only components/islands. Phoenix dev-experience would favor the back and forth between server and client over WS (at the moment of writing).
@@bas080 that is (more or less) my understanding as well. Each solution has its pros and cons, but claiming one is simply "good" and "solving real problems" (implying that the other one does neither of those things) is just childish.
Also, to be fair, users being on the existing page and not getting the update till they refresh is not some huge problem. Even massive companies like amazon accept that behavior and won't be changing frameworks to "fix" it.
He has talked about it in a js framework tier list. I think you can boil it down to Qwik's weird syntax that he can't digest.
brother it can scale to 1m websocket connections, it's all REAL-TIME. Qwik does not even compare in the slightest
I remember back in the C++ Borland Builder days, you could change a property on the representation of a UI component and it just updated live. In MS Visual C++ however you had to call a function and say if you wanted to push UI state into your model, or your model into UI state. The latter won out... why was that?
Like MFC's UpdateData(FALSE)
This is awesome news! I used to work with Elixir/Phoenix and LiveView. But I had the choice to work on a questionable product with those technologies, or on a great product with React/Next.js, and went for the latter. But sometimes I miss the elegance of Elixir/Phoenix. I'm not a big fan of Tailwind - it's like a hammer that makes everything look like a nail. But Elixir/Phoenix IS the right nail for it, it's the perfect match and I wouldn't like to use anything other than Tailwind with it.
Let me guess: a gambling site?
@@ironhammer4095 Not that shady, but close 😅
Reminds me a bit of AtoZed IntraWeb, at least the old versions I used a while ago. All the code you wrote was server-side. The dev tools acted like you were writing a desktop GUI application, but it would do all the magic of syncing the GUI from the server. It had its problems: it didn't scale at all and was resource heavy. The abstraction would break down at times. Maybe it's gotten better, I'm not sure. But I like the basic model.
How does this compare to what Blazor does with Websockets?
What do you do with your iOS Android apps? This doesn't make sense when you have native apps and treat the website as just another one of your platforms.
So wait, server side includes were the way to go all along? I was about 11 or 12 when they were the standard way to include dynamic content in static pages so my memory and understanding may be off, but I swear that at least conceptually it's roughly the same idea
Still not a huge fan of elixir syntax, but other than that, this looks fantastic!
I'm not entirely sure how well it would work with highly interactive sites that have animations etc, there might be some interaction delay still, just because network latency. Not talking about updating a progress bar, but things like tooltips, drawers, popups etc (unless I'm understanding this wrong. But I wouldn't want 100ms delay after clicking a button to show a popup).
But if you don't need that stuff and just want a semi-interactive site, it's very much promising. Kudos to the team for pushing this pattern!
Drawers and stuff are things that JS does well. You don't need liveview for that
They ran 60fps animations across the ocean in one of the keynotes and it worked without issue. Some folks even build browser games with LiveView. And usually you'd use css animations for most things anyway
The end result (demoed in the intro) reminds me of how cool it was to work with Meteor ~10 years ago.
Did Vercel... just use "The Conjoined Triangles of Success" for their graphic about blue-green deployments?
HA! Good spot
Thank you. We desperately need more variety in architecture, the json API monoculture needs to be challenged, HTMX and Liveview are two complementary approaches to do this
Why not use SOAP? Why not use XML? We moved away from them to JSON because it's simpler. Sending HTML snippets is a step back in the wrong direction. I'm not saying it's bad in every case. And I agree that we need more variety in architecture. A better solution than React or Phoenix LiveView is, for example, static HTML. If you don't need dynamic content, just put it there as static HTML. You can generate a blog with Hugo and don't need computing, neither on the client nor on the server side. Of course that doesn't work for everything. Where you need a bit of dynamic content, you can provide it as a custom element, for example comments on a blog. But if you have, say, a live ticker on that page, Phoenix LiveView is maybe a good solution for that. Can you add a custom element and serve it with Phoenix LiveView? 🤔
Coming from blazor I am very hesitant to have a server call to update any state. This sucks if you have a slow Internet connection, and makes it unusable if connections drop.
Curious if their solution works better
They're closing the connection once the page has loaded, and then the client components can take over for interactivity etc. at least that's one way to do it.
I'm also using blazor and while I love what microsoft is doing, I'm starting to see the pitfalls... it still has a long long way to go. (Currently using Blazor Hybrid on android and iOS; I love C#, especially the functional parts)
It is not. It will suck with poor network exactly the same way as Blazor.
@@hauleth That doesn't make any sense; you can use Phoenix without keeping the connection alive. Performance with that is equivalent to a normal request, you just get to draw faster
@@hauleth Same goes for blazor btw, and idk why you think blazor sucks lol
@@dahahaka I didn't say that Blazor or LiveView sucks. Just that both technologies will suck when there is a poor connection between server and client.
i started learning phoenix liveview a year ago; haven't seen a complete framework like this. It is a little difficult to learn as there are not many tutorials available, but the books are solid.
It is unfortunately very true. There are a lot of resources out there, sure, but nothing beats working on a real project or seeing someone's real-world codebase. I remember watching Jose Valim use Livebooks on his Advent of Code Twitch stream, and god darn I learned so much from seeing that. I suppose a bit of a lucky thing is that the whole ecosystem, including Elixir itself, is written in Elixir, so we can fairly easily inspect their files and the way they manage projects. There are also a couple of amazing podcasts on Spotify worth listening to.
"might seem great if you are near the servers it is hosted on, but as soon as you go somewhere else your experience sucks" is a perfect description of a specific problem, not a generic one.
You might think that all apps are like email clients or file uploads or streaming, targeting all possible users in the universe, where distances could easily matter and introduce niche problems to those revenue-generating specs, but most apps are fairly local. And most of their problems can be solved horizontally.
I would argue that most MVPs are not worth the effort of fast speeds either. So you are only left with those niche apps - not niche in terms of user volume, but in type of app.
Also keep in mind that distance is not the only factor in speed experience.
So how many flies did those bazookas kill?
imagine someone editing a big form and out of nowhere it changes
Streaming is not just an SSR tech stack thing; I've been using Sockets for years now. Generally speaking, the types of web pages I create are commercial dashboard, data-entry type systems, where the majority of comms are via sockets, and like you pointed out in the video, one advantage here is Auth is only required once. Other advantages are that data can also be sent in binary, and even before HTTP2 you could create a protocol that multiplexed the requests. I still use Rest endpoints, but generally this is for legacy comms or B2B logic. In the long run this takes way less data than sending HTML, mainly because the data can be cached aggressively and invalidated by the server triggering updates. SSR makes streaming easy, but please don't make claims that it takes less bandwidth than client side rendering, because it depends on how you do client side rendering; using REST is just one option that, because of its stateless model, is not the best for performance.
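The "send data in binary" point above is easy to see with a toy comparison: the same update encoded as JSON versus packed into a fixed-width binary frame. The field names and the `<IIH` layout are invented for the example:

```python
# Toy comparison of payload sizes: JSON text vs a fixed-width binary frame.
# The record layout ("<IIH": two uint32s and a uint16, little-endian) is
# made up for illustration, not any real protocol.
import json, struct

update = {"id": 1234, "price_cents": 99950, "qty": 7}

as_json = json.dumps(update).encode()
as_binary = struct.pack("<IIH", update["id"], update["price_cents"], update["qty"])

print(len(as_json), len(as_binary))  # 44 10
```

Over a persistent socket, where both ends already agree on the frame layout, the binary form carries the same information in a fraction of the bytes, which is the bandwidth argument the commenter is making.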
What field is this?
Let's goooo 🚀
15:00 same thing I was thinking about RSC when I first learned of it: for every update, sending the whole json representation of the page. How optimal is that??
Or maybe revalidateTag (sends the respective component's json representation) works differently than revalidatePath (sends all components' json representation), I guess
What drawing tool is Theo using at 21:14?
If the communication between browser and API server goes through HTTP2/3, the many requests might get a boost by running as a kind of batch on the same socket
But not auth, which was his main concern
@@gerritweiermann79 The auth point is moot. If you are using JWTs you don't need to hit the Auth system every time.
I hope no serious application checks back with their auth system on every single request.
I listen to your videos with my 2 year old prior to switching to toddler music in the mornings….. it’s a matter of time before he starts saying f*ck
I don't understand this problem of out-of-sync state. My front end only ever maintains its own state. We solved this decades ago with MVVM (model, view, view model). The frontend is the view, it works off a view model (its own state), and then the model comes from the server. This feels like a specific problem that's being applied to every situation, much like redux. Making your server responsible for UI state means your client is now tightly bound to the backend, making backend changes riskier. Want a simple UI change? Make two changes and two deployments. The benefits of even having an API fall away because every client now has to accept an HTML response. The moment you have a frontend that needs a different HTML structure returned, this architecture becomes a horrible mess. I expect to see "server side UI API responses were a mistake" videos in the future.
It sounds like MVVM can make it expensive to change the server model. Usually the UI is driving changes to the shape of the server model, and development can become hampered by the UI team having to wait 3 to 5 business days for prioritisation, implementation, testing and deployment of server changes.
Stop making sense. If we made web development solved and boring, how could we talk about the new mistak-- err... "technology" of the week?
Just a consequence of trying to force JS on both backend and frontend and then blurring the line between the two of them. The server sends the DB data as JSON or whatever, the frontend uses it to create an UI. If that data changes, the server sends the updated version.
This is not hard at all, it is just made hard as marketing for certain tools.
@@simonhartley9158 MVC and its children are VERY battle tested by now, and no, the UI shouldn't drive server model changes. The UI should display what the app needs to display, and the server should be providing enough information to do that. You don't change the server in response to UI changes; you change BOTH in response to app design changes.
Management paperwork is a separate, unrelated issue. Changing the color of a button should definitely not require a backend change.
I feel every framework is driving towards being a new Visual Basic, which we abandoned for good reasons. JSON is pretty good for me, I can still do rich, dynamic UI without having to fight a renderer to produce the HTML I want the client to use.
@@Leonhart_93 your model doesn't seem to take advantage of the benefits of streaming, nor deal with update granularity vs. client side request waterfalls.
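The MVVM split this sub-thread argues over is compact enough to sketch: server data is the model, UI-only state lives in the view model, and the view is a dumb projection. All class and field names here are invented for illustration:

```python
# Minimal MVVM sketch: server data (Model), client-owned UI state
# (ViewModel), and a View that renders from the ViewModel alone.
from dataclasses import dataclass

@dataclass
class TodoModel:            # "Model": the data the server sends
    items: list[str]

@dataclass
class TodoViewModel:        # "ViewModel": client-owned UI state on top
    model: TodoModel
    filter_text: str = ""   # purely presentational; the server never sees it

    def visible_items(self) -> list[str]:
        return [i for i in self.model.items if self.filter_text in i]

def render(vm: TodoViewModel) -> str:  # "View": dumb projection of the VM
    return "\n".join(f"- {item}" for item in vm.visible_items())

vm = TodoViewModel(TodoModel(items=["buy milk", "learn Elixir"]))
vm.filter_text = "milk"     # a pure UI change: no server round trip, no redeploy
print(render(vm))           # - buy milk
```

This is the separation behind the "changing a button color shouldn't require a backend change" argument: `filter_text` changes freely on the client while `TodoModel` stays whatever the server defines.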
I swear half of these commenters are here just to be upset that their favorite technology isn’t getting promoted by the content mill today
I'm not a full stack guy, but I like to know what is going on. I hadn't heard a walk through of Elixir before but when I saw Theo react at time code 6:18 I immediately thought, I bet this is written in Erlang. Yup, that's what it is. That hot swapping stuff is pretty cool.
Used elixir to deploy a service maybe 4 years ago and it was a blast, but back then it seemed like the elixir ecosystem was a bit stagnant. Might have to consider this again, now that I am bootstrapping a new company
I'm Brazilian, and Elixir was developed by a Brazilian. Coincidence? I have been programming elixir for years
I rather wish for something that is focused on being local (client side) first. So like everything you do is local and synced with the server if available at some point in time, including the updating of the client itself.
How is offline support?
At 23:28 we were talking about Auth and how it would require 3 roundtrips to the server.
Wouldn't it make sense to have the profile and permissions within a JWT or Secure Cookie? If there's an update in one of these you could update the JWT or Cookie.
I know this depends a bunch on the architecture and the server, and whether you have it split into several services...
Secure cookies still require authentication if you use a normal bearer token.
JWTs have an issue with invalidation. If you want real-time JWT revocation, you again need to do authentication for every round trip.
@@chakritlikitkhajorn8730
Yes. But you'd have fewer roundtrips as they are part of the token (correct me if I'm wrong).
Additionally, you can always have in-memory storage for it, right? That way you can blacklist the JWT and it'd be fast.
If you have short-lived JWTs (expiring in an hour or so) you can minimize the window in which an unavailable in-memory storage becomes a problem
I use blazor server at work, which I guess works similarly to liveview. The problems we encounter at work are scaling and connection issues due to the constant need to be connected to the socket.
For example, chrome browser saving features disconnect the socket, and the state of said page is gone forever due to needing to reload the page. If the server is getting crowded, latency becomes a big issue and interactive elements on the page feel sluggish.
I can't say I'm a big fan of needing to be connected at all times to the server. How does Phoenix tackle these problems?
I have no .NET experience, but the BEAM is exceptionally good at handling lots of connections. It favours equal distribution over raw speed. In terms of losing state, I think it's a general misconception with these technologies that you should be storing a lot of ephemeral state. While the initial sell of LiveView was "no JS," it's moved FAR on from that tagline. It encourages doing many things on the client, like opening menus and whatnot, and there are helpers for that. If you have state that needs to survive a refresh, it needs to go in the database or local/sessionStorage. You have the same problem with client-side JS frameworks if you are just storing state in memory.
You don't need the round trip for every interaction, but when you do, the BEAM (VM) handles the connection through lightweight processes that work independently, handling millions of connections without increased latency (unless you have another bottleneck).
Noob, but serious question: 24:55 -> Among Elixir, Go, Rust, Zig, and C#, which ones have similar built-in approaches delivering comparable results in terms of reducing request volume?
Or is it exclusive to Elixir's ecosystem? I'm just trying to understand whether this is something new for Elixir specifically or for the whole scene.
This is new to the web. There have been some similar-ish things in game dev for multiplayer games, but this is an entirely new way to update HTML and manage a user's session over time.
People are gonna come in here and reply "but TurboLink did this before!!!1!". Those people are wrong. Nobody's done a real concept of "long-running per-user sessions on the server" in Ruby.
@@t3dotgg Thank you so much for such a fast reply! I'll definitely take a closer look to understand more about this new feature; it really does seem like a significant milestone!
How big of a project (in concurrent online users) would you say is enough to justify choosing Elixir over Go (for example), if your main concern is reducing financial costs while using cloud services as the main backend? I know this is a broad question that heavily depends on how the code is structured, but as an independent dev living outside the US, that's kind of my main nightmare, and I rarely see people talking about these managerial aspects of development...
Can someone explain to me why people prefer to use k8s and React for all of this?
Thanks Theo, amazing awesome!!!! Gosh darn Theo finally speaking my language,, JSON oh man nuts, gRPC? brew your own?... toss around 60K lines of JSON... each call.. huh? Download the entire SPA just to get started.. huh? RAM is cheap Theo!
Obviously this breaks down for things that need frequent re-renders, like a game for example.
Yep! Server components do as well. IMO it's not a "break down" thing so much as not the solution space server-first tools operate in :)
Even React shits the bed for anything real-time. Canvas with WebGL/WebGPU is the only way to go.
Can we do offline-first PWA apps with LiveView? Seems limiting...
No lol
@@t3dotgg that's a huge load of functionality just boom, poof, gone..
@@t3dotgg Actually yes, there are people (crazy people) who run a local LiveView instance on the device, which powers the frontend and communicates with the backends.
😂
@@t3dotgg liveview-svelte-pwa. I'm not sure whether it counts as an "offline-first PWA app with LiveView"...
It's so fucking cool!
So many vibes. I wonder how long before we get based reactivity updates.
The fact that the web still isn't pub/sub by default in 2024 would have made my blood boil back when I learned programming in 2011.
Oh my, I hate you now.. You make me feel the URGE to come back to Elixir 😂😂
Keeping a real-time WebSocket connection open comes with lots of issues.
There is a fallback to long polling. Curious to hear which other issues you're referring to, because LiveView has been running in production for years.
Not a lot of issues, no, just some issues like anything in tech.
I love Elixir and Phoenix so much
It's exactly like Hotwire for Rails.
I continue using React for React Native and for static sites; I only use Elixir for admin dashboards and GraphQL APIs.
Server-side Blazor has been doing it for a long time
Thank you. I was wondering, does it suffer from the same limitations at scale or when the user's connection sucks? If I understood correctly, this uses WebSockets.
@@wakeuphugo yes, it's the exact same list of issues :)
@wakeuphugo Phoenix falls back to HTTP long poll when WS sucks
@@vrcca I think Blazor Server does the same, since it uses SignalR for the WebSocket part. But I'm not sure if it falls back to long polling after the WS connection is established.
I don't think it does. At least not in my experience. If the connection is lost, the page just stops being interactive until it's reconnected, which only happens after a reload.
Every release makes me want to leave React. Only the food on the table makes me stay.
I'm making great money writing Elixir at work. Just keep an eye out :)
No, thank you. Isn't that what Blazor Server is already doing?
Does Blazor have the superpower of the Erlang-OTP BEAM? I don't think so. LiveView is powered by the most powerful virtual machine on the planet.
Yes, and it comes with a bunch of issues.
I've been using LiveView in production for years and I haven't had any problems with it.
Damn, I guess I'm going back to Elixir... Even suspense is there now...
I have used the .NET version of this, which is called Blazor Server. However, the big problem is the high latency for anyone who doesn't live close to the server.
You mean high latency? Because low latency is good as it's measured in seconds (or milliseconds).
I meant high
28:00 I think a lot of back-end devs are starting to realize that MVC doesn't work and that components are the correct way to abstract. I think Django has them, and Laravel does too.
I guess this is old-school or "old guard" mentality... but I HATE the idea of having the server update, or even know anything about, the front end. In most of the applications I develop, a web frontend is only one option (and most of the time, not the only one being utilized)... This has always been my gripe with React. It was developed by engineers who couldn't figure out how to use the MVC design pattern.
What do you mean by “needing to auth 3 times”? We usually just use a JWT that is signed with a BE secret.
You have to decode the JWT and verify the signature every time, genius. And that's if you're using JWTs. There are other types of authentication, and tbh most services nowadays leave the auth part to an external service, so you'll have to make 3 requests to those external services.
@@upsxace wow, toxic trash talk. No thanks. Just study some more :)
@@AlanPCS saying "genius" is enough to hurt you? wow. Tell me where I'm wrong please, I'm open to it (unironically)
@@upsxace nah… I think you are genius enough to find it yourself :) Have fun!
What app are you working on where you don't need the ability to ban people or change their roles?
JWTs don't magically solve those issues. Besides, a JWT still requires resources on every request: just cryptography instead of a DB lookup.
Just a heads up: when said as a noun, "attribute" is pronounced with stress at the start, like "AT-trib-ute". When it's a verb, it's pronounced as you did, "uh-TRIB-ute". Wiktionary has accurate transcriptions if you can read IPA. This stress shift happens with lots of noun/verb pairs, like "the blue record" and "they record a podcast".
Regardless, awesome video, love hearing you talk about Erlang/Elixir adjacent things. And I’m 100% sure this mistake didn’t hinder anyone’s comprehension, just felt a little off when I heard it.
You need help
Why does he keep looking to the right side of the camera? Is he looking at his captors?
19:14 yay! No more `only: [...]`. That being said, welcome back PHP, and WebSockets in a trenchcoat.
Please make a little course about Phoenix :)
Aha, Ruby monkey patching in real time for a server-rendered frontend application lol. Sounds really cool though.
HTMX ftw. Our JS devs love Go/Templ. No JS, yay! :)
Except that sending rendered UIs to the client is almost always a larger payload than just the underlying JSON, which can sometimes be very small. If one of your stated reasons was "bad internet connection," that's all the more reason to keep data transfer to a minimum.
In practice, payload sizes are very reasonable. In most cases, smaller than the equivalent JSON payload, which sounds counter-intuitive, but you have to realize that LiveView only updates the parts of the page that have changed. So a lot of information about the underlying models is never sent over the wire.
@@DerekKraan I doubt that. The average HTML page UI is 20 KB+, measured from my page. There is no way any JSON transfer ever comes close to that.
Usually it's like an object with 5 keys rendering a whole subsection of a page.
Or no JSON at all, that's what static elements are.
@@Leonhart_93 Note, I am not talking about the initial page render. On that one, we are making a trade-off between megabytes of JS and just sending the HTML. (I think LV still wins here by the way.)
On re-render, though, the payload is often very small. Phoenix LiveView, when it compiles your template, splits it into "dynamic" parts and "static" parts. The static parts never get sent over the wire after initial page render. Only the dynamic parts do. This trick keeps updates tiny for the most part.
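The static/dynamic split described above can be sketched in a few lines. This is a toy model in Python, not LiveView's actual wire format; the template, slot indices, and function names are made up for illustration:

```python
# Toy illustration of the static/dynamic template split.
# Statics are sent once, on the initial render, and cached by the client.
STATICS = ["<p>Hello, ", "! You have ", " messages.</p>"]

def render_dynamics(name: str, count: int) -> dict:
    # Slot index -> current value of each dynamic part of the template.
    return {0: name, 1: str(count)}

def diff(old: dict, new: dict) -> dict:
    # After the first render, only slots whose values changed go over the wire.
    return {i: v for i, v in new.items() if old.get(i) != v}

def patch(statics: list, dynamics: dict) -> str:
    # The client interleaves its cached statics with the latest dynamics.
    parts = []
    for i, s in enumerate(statics):
        parts.append(s)
        if i in dynamics:
            parts.append(dynamics[i])
    return "".join(parts)
```

With `first = render_dynamics("Ana", 3)` and `second = render_dynamics("Ana", 4)`, `diff(first, second)` is just `{1: "4"}`: the name and all the static markup never cross the wire again, which is why update payloads stay tiny.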
@@DerekKraan Yeah, but I could add countless counter-arguments to that. Everyone has a powerful CPU in their pocket these days. Why make my server do extra work when their phones would barely even register the extra processing?
The main bottleneck is always the internet connection, and I have no reason to believe that HTML will ever be smaller than JSON.
That's like creating new problems just because we're bored with the current solutions (which also happen to be a return to much older server-side rendering approaches, which I ditched).
@@Leonhart_93 Not everyone has a powerful CPU in their pocket.
The update payloads _are_ tiny. I explained how they do it, so if you "have no reason to believe", that's on you.
This is not "creating new problems". LiveView eliminates entire layers from your application. If this is not a benefit then I don't know what is.
I have been a happy user for years and will remain one.
You'd be surprised how many people are actually using Rust WASM on the frontend with Leptos and Dioxus; the number is only growing. You can't knock a technology just because it's in its infancy.
What about scalability? Isn't scaling WebSockets super hard?
In the BEAM world, nope, not really. It scales very linearly, and the tooling to scale and share data between multiple servers is baked in and core to Elixir and Erlang. It was literally designed for this from the ground up.
@@EightNineOne thanks for the answer!
Can we use this with a templating language that is not horrible?
Could be worse. It could use JSX 🤢
Is "basically just HTML" terrible?
Isn't HTMX just what you want for this? It solves almost all the issues and is much better than sending JSON around.
Elixir mentioned
ELIXIR MENTIONED
oh