i low key hate that we still call it javascript... there ain't no java going on here only script.. we should just call it what i is i.e ecmascript and oracle and their trade mark (yes they hold javascript for some reason) can go to hell!
After "A new JavaScript framework every day" and "A new runtime every day," let JavaScript introduce you to an even better concept: "A new JavaScript language every day!"
How this sounds for me: - The JS ecosystem is an absurd clusterfuck. - There is a proposal to do X, which should help slow down the enfuckening of things. - But if X happens it will accelerate the enfuckening of other things.
it's litterally the xkcd meme : - There are 14 competing standards... - Oh wow, this is madness, we should have only one standard to unite everything instead of this mess! - There are 15 competing standards...
How I understand it is: We will make 2 languages, current one that would only expand with more capabilities and APIs, but not syntactic stuff, and one that would have new syntactic stuff. Build tools are expected to transform second into first and engines expected to execute first one. Basically, that's looks like what happens now in moat cases, but with clear distinction between JS you write and JS that arrives to browser.
- Yep, absolutely agreed. - Yep, that's the intend. - Yep, that's what'd happen. JavaScript is and likely always will be a hotchpotch of random shit thrown together that somehow sticks and glues ... untill it doesnt.
I have been thinking for a long time that the idea of all web sites being viable forever is ridiculous. I don't see why we don't time capsule current javascript and say all past sites work. And then update new web standards. I don't see why the browsers couldn't manage classic javascript and modern javascript. This way we can stop trying to add all these proposals, and cobble together a language that was written in 15 days originally, and clean up the mess based on what we have learned over the last few decades. The trick would be interoperability like a codebase that can run run both TS and JS. This would also be able to run Modern JS.
If you're compiling and transpiling and uglifying to the point that sourcemaps are something anyone thinks are a good idea, then you should be shipping a BINARY, not code.
@@omkargarde5867 You've misunderstood what WASM is, the exact same way I did when I first heard of it. It's not running JIT code on the host CPU. It's running made up machine code on a javascript VM. It's not about performance, it's about providing a compiler target. (And a shit one at that).
@@Asdayasman Why can't we just use Java VM or Dart VM if that's the case. You know google could've fixed the web if they didn't abandon Dart in the Browser.
@@asandax6 See now here's how I know you're not a JS developer - you're asking sensible questions that would lead to a good solution if investigated. JS developers did the exact opposite at every point possible.
This whole system is beyond ridiculous at this point. General solution : - rework engines so the DOM can be directly (an efficiently) handled by WASM - adapt tooling to efficiently support AssemblyScript - ??? - profit End result -> nice DX in pretty high level language (AssemblyScript) and good performance for end users (WASM)
This would require search engines to have the capablity to index wasm content. Which would be amazing, but requires coordination between too many actors
I like the idea, but for it to work the runtime needs to be simple, like .NET runtime. C# keeps getting new features every year, but most of the time they don't cause any change to the runtime. The high level C# code gets lowered (ex.: pattern matching is lowered to a bunch of if-else statements) and then finally compiled to .NET intermediate language. This strategy works because the language can get new features that usually don't require any change to the runtime, keeping it simple. To be honest, that's what I expected WebAssembly would be when it was first announced.
If all they want is effectively a "bytecode" language to support - why not just mandate that all JS should now compile to WASM instead and just support WASM in the browser?
because whoever tries to make a wasm compiler for javascript will most likely not succeed, javascript is too much of a dynamic language to be wasm-compiled
@@ivan.jeremic thank you for reminding me why i shouldn't use a framework inside a framework with 15 libraries to manage & maintain every 2 weeks ... i guarantee your not mad at me, it's the stress your ecosystem that's causing you to comment like this. i feel bad for newcomers to stumble upon people like you in the comments
I know you're making a joke, but honestly I think that JS2 would be the best thing. The chance to remove a whole lot of old JS that people don't use any more, prioritise the newer syntaxes and clean up a lot of the polluted parts of the language. Then could polyfil the older parts of the language.
19:28 the JS ecosystem has been evolving at an insane pace literally since I started professionally. It's been the case for at least that long that you will learn one tool, and in three months it's obsolete and you need to learn another. Sometimes the new tool is a way to not need to learn the next tool by combining all the tools in one, then that also becomes obsolete. JS devs are, by and large, a group of insane people who need to lay off the fucking methamphetamines for eight minutes. This is not the browsers' nor the users' problem.
Fundamentally there's nothing you can do in the language today that you couldn't do a decade ago, generally these changes to JS amount to just a different 'improved' way of doing something, for example nullish coalescing is a one line way of setting a default value. Its all so tiresome. That said, this constant state of change keeps us all in employment, so meh
If JS becomes a compiled language essentially, it’s worthless. It’s just the world’s slowest compiled language. At that point we may as well look at having JSSugar go to wasm or some other compiled source that can actually be far more performant. JS’s whole thing is that it runs, as is, anywhere. It’s basically Doom.
what you are defining my friend is a "scripting language" , this is what JS was until they started putting typescript on top because the soyboys that wrote C#/Java could not work without types Yes there was babel for making your js compatible with older browsers but still it was scripting
Seems to me that if you need all these features the focus should be on making WASM capable of controlling the DOM so other languages can compile into something usable on the web. If JavaScript can’t be everyone’s ideal language for the web then it should only be the default and WASM can open the door to other options. But also if implementers cant implement either the build or browser side of things fast enough. Then I think our community goes too fast and doesn’t think enough about the consequences of each change to the standard. We can’t even transition to ESM in a timely fashion so maybe slow down a bit?
you can already access the dom apis with rust through web-sys without any javascript glue code. leptos and dioxus are bult on top of web-sys. it should be possible to do this with other languages but no one has yet.
@@bear458ziif-s web-sys does use Javascript glue code, it's just hidden really well. It uses wasm-bindgen, which "is sort of half polyfill for features like the component model proposal and half features for empowering high-level interactions between JS and wasm-compiled code" -- The wasm-bindgen Guide
WASM can control the DOM, either through JS bindings passing messages or sharedArrayBuffer in a proxy, undetermined which would be less latent, the easier is probably good enough. I found it incredibly simple to expose WASM functions and call them from Alpine.js to basically just write reactive-HTML with the logic processed in WASM.
They just need to make web assembly have the same access to browser api that javascript currently do. Then rip javascript out and make it a web assembly package that is packaged with the browser. That way people can use whatever web assembly package of javascript they want or better yet just use another language. JS0 just seems like a worse version of web assembly.
Nobody wants to switch from JavaScript (Typescript) to Web Assembly. People who don't like JS really do struggle to understand why people like JS, don't they.
@@jonhobbssmith You'd still be writing JavaScript.You could have the build tool right inside your browser as a plug-in so that you could REPL and code within the browser. You really lose nothing except having to import your build profile to the browser.
@@jonhobbssmith javascript is not the "ultimate form of programming languages", so yes, people want to switch (or *start*) in languages other than ts(js). it's literally a choice.
Exactly it's just moving the issue to another team. But they kind of also have a valid point: Supporting backwards compatibility means all these small features like Error.isError have to stay in JS forever and pile up as more of these features get added, even if the older ones are obselete. This pile-up issue makes maintaining browsers or runtimes way harder. If they want simplicity then they should remove obselete stuff
@@ramsey2155 Sure. Let's move the compatibility issues to compilers and bundlers so that every older established website needs to recompiles and redeployed every few months because something in the browser got deprecated...
On one level, this just strikes me as an overcomplicated relabeling of what already exists: JS0 is just another name for native vanilla JS and JSSugar labels the existing ecosystem of tooling, frameworks, preprocessors, and syntactical sugar the dev community contributes and propagates on its own. When properly managed, such features, capabilities, and conveniences that emerge from that segment give rise to native functionality - officially recognized and supported in the ongoing ECMA standard and JVM runtimes. On another level, what this REALLY sounds like to me is a political maneuver to drop the existing commitment to maintain and advance core native support. We've lived through such dark times already and I thought we'd all learned the lesson: The more comprehensively state of the art functionality flows into native browser support, the better off ALL stakeholders will be, end-users especially. In my view, if we allow such a break in the continuum between third-party tooling and native browser support, the result will be fragmentation, incompatibility, and inconsistent implementation; leading eventually to latency of innovation and outright atrophy.
I may be off base here, but the main reason why anyone uses transpilers and other build tools is for businesses to support customers who never update their browsers. This proposal would be a nightmare for an enterprise web store to implement. The browser publishers should have room to implement new ECMAScript standards as they have capacity, too, which is the other reason for the transpiler infrastructure. I think you are right that we already have this system in practice, it's just that there are many stakeholders implementing ECMAScript standards. JS has always been about business and capturing marketshare on the web, so I'm also extremely wary of any group proposing a new ecosystem. Who profits? The JS ecosystem is as much about keeping the web browser business equal access for those who create for-profit webpages and web apps as it is for the developers having consistent tooling and ECMAScript standards.
End of the day, its up to google. Considering how much of the market share V8 has as a JS engine, if they decide they don't want to support new language features, they don't have to, and we'll be back to the dark days like when we had to support IE6. That's the problem when you let one company have such a monopoly.
@@KineticFaction While I agree with your insights, I would add Chrome's market dominance is due largely the way Google picked up the torch where Netscape and Mozilla Foundation fell short to advance V8 and put real energy into pushing native browser support not just for Chrome and Chrome-based browsers but also to the Webkit browsers. Maybe you think of them as monopolistic but the fact is someone needs to drive the standard and bring the constituents to the table. If there truly being monopolistic, they would've rolled out Chromescript by now and abandoned all open standards. Microsoft tried that ages ago with VBscript and I still think they intend to do something along those lines with Edge with a direct Typescript VM. If Google abandons that commitment, someone else will fill the void and, if is it Microsoft or Apple, each has an established tendency to go more proprietary, more walled garden, less open.
You can if you want. JavaScript is needed to run workers that invoke the WASM binary code, though. Some tools let you translate other languages to JavaScript or WASM. A compiled WASM is likely harder to debug using the browser's debug features so that's one reason to continue either using JavaScript or transpiling to it.
WASM doesn't even have strings, let alone objects (AKA dictionaries, hashes, or maps depending on who you are asking). JSON can't be natively supported without that. They came up with interface types but it really hasn't gone anywhere after years.
@@brennan123 I think that problem is already solved. WASI and the WASM component model exists already. It has every single type you mention here standardized down to the WASM ABI level. It isn't hard.
First they should be working on a ECMA standardized JS runtime for WASM, like an official version of Porffor also supporting JIT, governed similarly to the way the Rust codebase is developed. THEN split the ECMAScript language at the point of memory allocation, so there's a JS version that can be compiled and run without a runtime, and there's another JS version that needs a runtime.
I've actually created web apps using WASM and the experience was... okay? I mean, until it wasn't. Then it was a hot, miserable mess. Using tooling to make WASM easier, such as Leptos, makes things easier, but not entirely pain-free. I will say, however, that major WASM frameworks should support SSR, so ads and SEO shouldn't be negatively impacted. I think WASM is in the same place that Rust and Wayland are in... It's weighed down by bureaucracy and issues can be stuck in proposal stages for years. If WASM or browsers could agree to expose async/events API, DOM APIs, and networks APIs, then WASM would be a viable replacement to JavaScript. As it stands, you can call into JavaScript for these things, but it's painful.
@@reybontje2375 IAB, ACA and others are trash, we don't have trusted Agency that regulates internet advertisment in a first place, plus Apple just block some features on IOS for good. I don't see any future with WASM in advertisment.
Vanilla JS is far more powerful than most devs think it is. You can use TS to check types and nothing else, or just use check directives. I can see the appeal of this for folks in React (or similar) where the browser is a couple of levels of abstraction down, but for the rest of us I think it's incredibly toxic. Build tools are already a nightmare where you end up fixing bugs in your tooling to try and get the output you could have fixed directly. JS Sugar just looks like Google trying to spend less on their platform team.
the problem i have with this is that it greatly increases the difficulty of getting started with web development. now there will be layers that have functionality that doesn't work unless you have a specialized development environment, which is much harder for a beginner to understand than putting a script tag in a HTML file and having every part of JS work with no build step. i remember messing around with basic HTML when i was like 12, at that time i definitely did not understand using the terminal so using build tools would not have been easy for me
I'd argue that following a tutorial to download an IDE and run your first hello world in any compiled language is not notably harder than following a tutorial explaining how to write your first JS code in the browser console or an HTML file. But I live in compiled-language-land (and like to stay there), my views might be biased...
True, JS has so many users in the first place because of how easy it is to get started with it. No compilers, no build tools and no environment to set up....just a plain text editor and a web browser is all you need to get started. No installation is required. They are killing this simplicity by over engineering it.
@@BillieJoe512Sure, in the case of native land compilers are a necessity but in js we don't need those, browser runtimes are already optimized with all sorts of magic and jit. Why complicate things further if not necessary? You only need those transpilers/build tools for writing js at scale with a good DX. And I feel existing tools are doing great, tc39 staging process is also okay! Th problem is the insane speed we are moving at. The problem is the tribal nature of this community and tech giants wanting to takeover the whole ecosystem. Every language has its own quirks. Like C has existed since forever, and is being used widely but it also has its own problems. C standards are also kinda mess imho but no one in their community yells like the people in the js community. Comparing floating point arithmetic is a problem in most languages take python for example, but no one in their community says python sucks for this and here we have the js community with dedicated pages explaining why js is a weird language with the same examples
@@BillieJoe512 well in that case i still think this is a bad change, if we want compiled code in the browser we should expand the capabilities of WASM or offer another language, not just transpile something to a dynamic language.
It's not the first time JavaScript has seen a split The first one was during ECMAScript 4's development, due to which we got both ECMAScript 5 and ActionScript
This is a concept that has made the JVM and .NET's CLR such powerful language targets. It makes sense for the browser to have a runtime equivalent and use an intermediate language (JS0) just like how we have Java bytecode and MSIL. This also allows full interoperability with libraries written in other high-level languages targeting JS0. Is this a smart move considering we're not starting from square one? I'm not sure.
- Other's have touched on this already, but to me an elephant in the room for this proposal is that the authors give zero reason for JS Sugar to even exist. All the supposed benefits they cite would come from freezing/near freezing JS0, there's no advantage to having an additional "standard" for JS that has to be transpiled by tools in order to work (and arguably some big disadvantages). I think the authors realized that openly proposing a feature freeze on JS would be hugely unpopular, so they added the JS Sugar proposal to make it sound better, even though JS Sugar is basically the worst of both worlds between transpiled languages like typescript and vanilla JS - The chrome team seems to be still drawing the wrong conclusions from their surveys of JS developers and ignoring the evidence from usage statistics (which is funny, because they should have some of the best access to the latter). Most of the web isn't using a framework/"library" like react on the frontend, the most common solution there is jQuery. I don't have direct statistics on this, but based on the aforementioned prevalence of jquery I _strongly_ suspect that most custom javascript is already no-build, or at least close to it.
The argument for JSSugar was quite clearly to still have a common standard for new language features going forward. Otherwise it's the wild west of language features and we're right back to babel configs with 4000 plugins of extension syntax.
@@jakobkristensen7024 The issue is that if I want to use a JSSugar only feature (say for example pattern matching), I *have* to use a build tool with this proposal instead of using hand-written JavaScript. This makes it harder to integrate as e.g. a Java/C#/whatever project now has to also depend on something like node/npm to get the transpiler, add that to the build pipeline, hook up build dependencies, etc. Them there's making sure that editing the file rebuilds the JS0, hot reloading works, etc. You will still have issues where a transpiler written in Java/C#/Rust/whatever may not support every feature, so you would be tied to a specific transpiler. It's shifting the work from the browser to the transpiler. But now you have transpier bugs to care about, what JS/JS0 engines it supports, etc.
@@jakobkristensen7024 You would already have to set up such a config if you wanted no features, so there's no advantage there. Further, not having a ton of config flags to turn on individual language features isn't an advantage of having one _global_ standard for everything that transpiles to JS0, but rather an advantage of having one standard _per language_ that does so. You'd get the same benefit without JS Sugar from people opting for e.g. TypeScript, PureScript, CoffeeScript, Elm, ReasonML, Gleam, Etc. in their projects, with the added advantage of each of those languages being able to evolve faster than JS Sugar could because JS Sugar would need to get buy in from a comparatively broader base of users.
@@jakobkristensen7024 Except you'd already need a build tool and all the complexity that brings, and not having a lot of config options depends on having one compile target and standard PER LANGUAGE, not one language. Other languages like TypeScript, PureScript, CoffeeeScript, Etc. would be able to evolve faster than JS Sugar because JS Sugar would need to get buy in from a much broader group of developers before implementing a feature.
@@jakobkristensen7024 You'd still need a transpiler, so no advantage there. And avoiding the wild west comes from adopting a standard, not from there only being one. There would still be only one e.g. TypeScript.
I like how they say "No evidence suggesting tooling use would decrease." did they see the hype over HTMX, a lot of people want to not use tooling. This is insane! Completely agree with you Theo! 😁
I said that 10 years ago, let the browser be just a wasm runtime and send the the engines you need over the wire, for example (HTML/CSS renderer, JS Runtime, or any other runtime for example JAVA, C# ).
i am very very opposed to this part of the beauty of the web is that every website is """"kinda"""" open source because you can see the code run on your computer by killing vanilla js this just pretty much enforces that web dev no longer has this avallibility i see this a threat to the open web and am thusly opposed
@@AntiAtheismIsUnstoppable you can already nobody and obfuscate your code and even just using TS makes it harder too. And you can do with what i've heard that google does with recaptcha which is make an entire virtual machine running its own compiled programming language all inside of obfuscated javascript and they update it now and again so that it is literally impenetrable. But what i'm talking about is the "default" which is vanilla JS is somewhat readable. This change makes it so that default is now unreadable (if someone wanna make it readable they just put their code on github as open source - but that's basically opt in now instead of opt out) Now the end user really has no idea what is running on their browser while right now on a good portion of smaller websites that don't specify make it difficult - they might be able to at least poke around with it.
Can you imagine if C said "writing compilers is hard so could you just pre-compile your C so we don't have to do as much work?" This is fucking madness. The reason buildtools are so common isn't because they're a great idea, it's because YOU ALREADY AREN'T DOING YOUR JOB. "I really love my build step" - no dev ever.
That is literally exactly how compilers work. All compilers, including the js runtimes, internally perform several lowering steps progressively moving code from the high level source language to the target language. This proposal is basically advocating making a specified intermediate layer that everyone can target and consume instead of everyone reinventing the wheel.
C not only has a build step but also a macro system that reprocesses code before that. Still easier to deal with than older browsers. Oh and the bit "Enterprises should just change"... who do think is paying for most of the webapps being written? The enterprises. They've got 100s each.
The browser makers: "We fucked up web components by not knowing how people build things; we should fuck up JavaScript by not knowing how people build things."
WASM doesn't have good DOM access except through JS which is a pretty huge part of what JS does. WASM almost always fails on what makes the web: accessibility, CSS, etc. Most implementations just render to a Canvas and the canvas isn't HTML...
JS+HTML+CSS is the BASIC of modern computing; it's a free, common starting place for new programmers. Please at least make it possible for some useful code to be developed straight in the browser so we don't lose this feature.
@@jakobkristensen7024 Because you will need to understand JS0 and JSSugar to debug things in browser. JS0 won't run in browsers without a compiler. If you ask me, it's a smart move for Google, they will be the only ones with an updated compiler. Everyone else: Node, Deno, Bun, will need to implement their own updates. In the long run, Google wants to open a new market and sell this "new" compiler.
I don't use a build tool at all. Have never needed one. Each HTML page gets whatever JS it requires for just that page and only that page. So this stupid proposal would be useless to me. I love how a lot of these modern developers forget that Javascript is first and foremost a browser based language.
worse than useless. the fact that js is human readable and writeable is literally it's only redeeming quality. this change threatens to throw that down the drain.
Just move all JS features that can be poly-filled to „stdlb.js“ and „stdlib_legacy.js“, then reduce the JS engine by those features. Implement a logic that imports both of those libraries, if no import statement in the user’s code references one of those libraries, else import as given in the user‘s code. Finally, ship those two library files with every browser download. This should shrink the JS engine‘s code quite a bit.
I have been doing web development professionally for 20 years. This proposal forgets that the web is for everyone. I can't tell you how often I write vanilla JavaScript for things outside the gigantic web application I work on. I have bookmarklets to format a link and title of my MR request to post in Slack. I have a Chrome snippet to create the branch name for my current Jira Ticket. Most of my personal projects are Vanilla Javascript. I don't want us to lose the ability to contribute to the web with just a text editor and browser.
SIMD is not "large amount of data control" lol, simd is instructions on vectors of data. CPU have vector unit so it can batch-process vector if supported operation with just a few or even one clock cycle. for example, there is permutation operation available
Proposal makes sense to me. A few points: 1. We should have a bytecode that can be a compilation target. WASM is too low level but we don't need every feature in JS to be possible in the bytecode. WASM also moves too slow. How many years do you need to implement strings FFS? Shipping actual source code and calling it the "compiled" is weird. It's still source code and not optimal for bandwidth and requires additional parsing by the browser. 2. Garbage collection should be present in the platform. 3. Every type that is valid in JSON should have a native representation in the bytecode. 4. New APIs will constantly need to be added. This regularly happens. Bluetooth, WebMIDI, WebGL, Canvas, heck even the first version of CSS was an add-on. This does not mean we need to update the language or bytecode though, just what APIs are exposed. That should be a bit easier. 5. Look for inspiration from JVM and CLR. 6. No need for classes, spread operators, or any of these high level features. 7. const, var, let (just pick one, we don't need all 3). let is probably fine (immutability can be enforced at a higher level). Willing to have const as well if implementers say they need it for performance reasons but there are ways to infer something is const. Compilers are good for implementing higher level features. You don't need the target platform to support them for them to be usable. JVM has Java, Scala, Lisp, Ruby, Kotlin, Clojure, Groovy, Python, and even Haskell variants. This should be proof enough you don't need every single feature of the language to be implemented in the target. Having a more baseline web compilation target would enable richer and more varied tooling.
If one is going to give up direct authoring then why bother transpiling into anything at all? Just compile to some form of bytecode and be done with it.
There is no form of bytecode that runs on all machines. WASM is not bytecode. You also don't want this future you are advocating for, because it creates a developmental hell where you will always run into weird bugs where you need to understand the target code you are compiling to (like WASM).
@@gamechannel1271 Sorry but your points are ridiculous. 1. You don't need to run on every machine. You target a single VM and can now target every platform where that VM has an implementation. 2. WASM is bytecode. 3. Your argument would also imply that every compiled langauge is bad if you don't understand x86 assembly, ARM assembly, JVM bytecode, Python Bytecode, or even modern JS engines. Compiled languages are cool and good and interpreted languages should die.
This feels really similar to how C# is compiled: it is first 'lowered' into a simpler version of C# before being compiled into MSIL. This makes it much easier to implement certain language features without worrying about implementing it in the runtime (which means it can built faster, but also compatible with existing runtime versions)
Build tools shouldn't be required to use JavaScript. I have at least 2 projects where I hand-write JavaScript without using build tools. Polyfills can be useful for supporting older browsers (and can be usable without build tools), but shouldn't be required to use a new feature as they can introduce performance issues compared to native implementations. I shouldn't need to use a build tool to use syntax features -- if I did I may as well use TypeScript or some other language.
ESM and BigInt are super important on the server. BigInt in particular is also in the category of "making things that were previously impossible, possible". But yeah, in the browser, you are usually dealing with smaller numbers, and important computations where precision is important are often delegated to the server anyway, thus making the regular Number type sufficient. And any kind of module system (be it ESM, CommonJS or one of the older ones like AMD) are taxing on the browser without build tooling that bundles it into the minimum number of files possible. It seems like engine implementers and V8 in particular, are forgetting that their work is important not only in browsers (where they have plenty of data to show how much each feature is used, and how it affects users), but also in servers. The fact Node, Deno and Bun weren't even mentioned in passing on the slides tells me this is very much a blind spot for them... They're lumping them into the "tools" bag, when they're in fact more akin to a browser... Technically, they're a runtime... Not a browser runtime, but a server runtime. Still. Improvements to ECMAScript affect server runtimes even more than they do browsers nowadays... And that in turn allows tooling to improve too, since tools often run on the server runtimes... And tooling improvements in turn allow developers to be more productive, which in turn makes better products for end users. If engine implementers are under staffed, that seems like the real issue here... They very much shouldn't be.
The thing is, browsers _have_ to rely on JS. Devs on the server don't, it's a choice they make. There's plenty of other languages with their own runtimes and frameworks you could use on the server. On the browser, in the end you have to use JS at some point. So I can see how these things are not seen as a priority and are part of tools devs opt for.
It wasn't long ago that you posted a video about Oracle (still) keeping JavaScript under their control. The person at the time was asking Oracle to "leave" JavaScript to the open source community. Well, in my opinion, everything that has been happening with JavaScript (TypeScript, linters, etc., practically every day something new is invented) is due to Oracle's negligence. JavaScript doesn't have a strong group or organization behind it to say "no" to certain things, which is why it has become the mess it is today.
If only we could distribute compiled binaries into stand alone units that performed certain applications then we’d be onto something….hear me out, any language you can think of but compiled into machine code….
I've only heared mysteries of such things. Legends, even. That one would feed a file into this "kompailar" and out comes a whole program, that just runs, and is small... Ah, dreams...
Java has a virtual machine, and a language, with a compiled intermediate, and features are split between them. C# is a syntactic layer over a similar implementation core called .net. This seems to be where JS wants to be going, if I'm hearing this right. The main difference is that the standards body ships both a standard platform, and a standard compiler. You can use alternatives, including compilers for alternate languages, but there is a standard implementation that is shipped. If JS standardized this split and draw the boundary right, interesting things could start happening like "Ruby for Node" and "Python for Browser" that downcompile to JS0, but a lot depends on exactly where they draw the boundary, and them stepping into the tooling space with standard implementations
This is shitting on the whole current JS single language ecosystems. My immediate response is that this fundamentally sounds very much "solving minor inconveniences by arbitrarily partitioning the core ecosystem of the entire goddamn internet."
Sounds so great, let's introduce even more bloated installed packages instead of bloated JS runtimes, and also enjoy more supply chain attack and more left padding, neat.
At least part of this issue can be alleviated with a better standard library(see deno's JSR standard library) however the rest of this is insane to me. If you want to make JIT compilers in browsers less complex then introduce proper typing in JS already.
I'm not following the logic for why in this implementation there would be additional security risk from the tooling perspective. If the runtime is enforced to run JS0, and they spend their resources to ensure that JS0 is 'safe', whatever that's supposed to mean, and if all of the fun stuff is implemented in JSSugar which is compiled down to JS0, why would the JS0 implementation pose any risk if the runtime, hypothetically, is ensuring its safety?
I think the points made in the presentation are roundly quite good, in my opinion. My understanding is that they're positioning JS0 as a more reliably static compilation target rather than a language to be written in. This is all fair, but my question is this: isn't this what WASM aims to do? Isn't this a 'WASMscript' proposal?
Why don't we just have some kind of versioning thing? And make it backwards compatible, such that if a site doesn't specify a version, it uses classic JS, and then new versions are YOLOed.
It's at the point where adding another "over" is actually accurate and a great display of the levels of abstractions, tools, and hoops. It is overoverengineered.
i'm using htmx and raw js files for a web frontend at my workplace :) why would you force anyone to use fancy tooling mantained by third parties who can just... stop doing it???
200 IQ solution: Add macros to vanilla javascript. That way users can use vanilla js to extend the language. C did macros in the 1970s, it's time javascript catches up!
Personally, I feel like the worst thing of this proposal is that these tools will become even more susceptible to feature-creep and we'll end up in a situation where things have constantly be started from scratch because tools saw high adoption and are now afraid to break things for their existing users. Or in other words; this will end up making projects like Webpack go through Webpack's life cycle (aka becoming to bloated and slow with no real intention to clean it up and instead rely on other tools to replace it) in very quick succession.
If some browsers don't want to implement some "syntax sugar" kind of JS-feature just because it's possible to implement with a polyfill; then just make the polyfill built in to the browser.. Kind of how chipset manufacturers sometimes implement complex instructions using microcode instead of silicone. That way it will work in browser without a bundle of polyfills that everyone has to download multiple times from every website everywhere, because the polyfill is already in the browser. But in general polyfills are much slower than a native implementation even if the polyfill is already cached because after all you then have to re parse and interpret a lot of extra code rather than having it run natively, so it's not ideal.
Look at those mentioned bugs. 7:41 BigInts, Class scopes, optimising super, resizable buffers, WeakRef... Probably some of the others too; all of those would _have_ to go in the JS0 part. How would a compiler handle BigInts in any kind of performant way? Also, obviously classes were a mistake; we all knew it, they did them anyway and now were stuck with them.
Truth - evolving the language for its own sake does introduce systemic risk for billions of end users. Freezing browser-native JS language and focusing development on WASM features does feel like a collective win. There has to be a complexity ceiling on our browser scripting language, and chances are we've already reached it.
If think about it in a Java way. Browser would need just implement the JIT interpretater and dev tools will "compile" the sugar and human-readable code into JIT. So, essentially, "just give a compiler for the JS". Browser will implement "JVM" and all sugar will exist only in the compilation time. P.S.: I still miss types in runtime, so it would be nice for JIT to support it as well :D
This won't sail because it will break the whole 25 years (or when the JS came) worth of websites in modern browsers. Also, the Ruby on Rails is going no build. That's a monumental evidence that tooling is decreasing. And seriously, such suggestions need to be done by thinking beyond javascript because other languages use javascript without tooling for enhancement if required on the web without going full crazy on tooling.
Let's just make web assemblier, like web assembly, but it can manipulate the dom, but has no interop with JS. And we can all forget about this. But you know, at the end of the day, this could've happened with BASIC instead. So I'm not sure if we are living on the better or worse timeline right now.
I don't understand. Aren't they proposing exactly what we already have. We have ECMA script which is what is actually run in the browsers. And then we have build tools which can handle transpilation of experimental features or even languages (like typescript and coffee script). And on what planet would it be easier for a build tool to implement features BigInt or async generators?
After watching the video.... Aren't we already doing this with TypeScript? TypeScript is a superset, and ends up transpiling down until it reaches a compatible version. TypeScript enums are an example. Now, there's another thing, syntactic sugar, ends up using less code than the JS0 that it will transpile down to. That{s because transpiled code will likely be mangled and minified.
Its a good idea, but what we REALLY need is a compiler! Remove Javascript from the browser and replace with WASM. Let Javascript itself manage its own features, not when browser decides to implement them in engine. Browser should only build support with calls over WASM and thats it. That way, when a feature wants to get into javascript it gets into the compiler and its there for everyone! No need to wait for browser updates, adoption or certain companies (you know which ones) not wanting to implement something. And there would be no need for any polyfills! If we could go a step further and abstract the render engine to wasm somehow, we could also get compilers for HTML and CSS!
We should come up with a nice, consistent binary DOM API, and port JavaScript to a generic, standardized VM, so we can run JavaScript or any other language in our browsers, without re-engineering the runtime every time somebody wants to add a new tiny change.
They are SOO close to discovering compiled languages. Guys, just make WASM something that can be a replacement for js, and compile your js/ts INTO WASM. (Written by a rust enthusiast)
I would be down for that because then I could avoid rust completely and use Zig LOL (I think zig has the best wasm compiler, I looked into rusts's version and it was a lot harder to implement. I have only written Zig once, and I was able to get some working wasm in a very short time.)
SIMD is pretty much built into the chip and allows you to operate on multiple values at the same time like doing 4 adds at once. It’s a low level operation so it makes sense. AVX is an example of SIMD that’s part of the x86 ISA
So in a situation where a compiler is not needed you're screwed with the more efficient and readable syntax features? I was pretty sure that at least one purpose of compiling something was to make the code run faster and take less resources. It doesn't sound like that would be a thing that much with this. A polyfill might run slower.
Compiler is a generic term for different things, it doesn't have to be an optimizing. Compilers can be used to provide compatibility, optimizations, translations, packaging, not mutually exclusive.
It's an interesting proposal. What's probably worth to point out is it seems to be only saying that new features going forward will only have to be implemented at the tool level and none of this precludes devs who want to continue working on the browser directly in JS0 as they have been, which is important to me since I generally disacree with this modern trend of everyone running all these tools just to take an interpreted language and spit out... the same interpreted language, but worse.
I push back on the notion that javascript "already" requires build tools to be productive. In my view, that was definitely true 5 or 10 years ago. Much less so now. Although build tools do have benefits, a lot of those benefits don't come entirely for free. And you can certainly get by without them. I often get the sense that web devs are so used to piling shit on top of their stack by default that they don't understand what the foundation is capable of.
My idea of an ideal future is that the desugaring step and optimization step are made as independent as possible. Design JS0 as a dead-simple intermediate representation. Think Java bytecode. In fact, it could straight up be WASM. All of the syntactic sugar in JSSugar (and TC39 proposals) would be required to have a polyfill/desugaring implementation. Then, almost completely independently, build tools can coordinate with engine implementations to define annotations/comments for JS0. Any metadata that engine devs need to optimize the runtime would be communicated through those annotations. But you should be able to strip any and all annotations out of a JS0 module without impacting the correctness of the program. That means that a minimal build tool could just use the desugaring implementation of any language feature, without providing any annotations, and still produce something that's correct. Likewise, a minimal engine can opt out of implementing an optimization using those annotations, and still behave correctly so long as it supports core JS0. People seem to love LLVM and Java for taking a similar approach. Why not do it here?
split up into Java and Script
😂😂😂
😂😂Yoo
Well java is a reserved word in a framework called Reality
Based comment
i low key hate that we still call it javascript... there ain't no java going on here only script.. we should just call it what i is i.e ecmascript and oracle and their trade mark (yes they hold javascript for some reason) can go to hell!
After "A new JavaScript framework every day" and "A new runtime every day," let JavaScript introduce you to an even better concept: "A new JavaScript language every day!"
gotta learn rust if this continues.
@@brencancer zig is better
@@igoralmeida9136 go is better
This is getting out of hand! Now there's two of them!
@@igoralmeida9136 True, but Lisp and COBOL are the languages of the future!
How this sounds for me:
- The JS ecosystem is an absurd clusterfuck.
- There is a proposal to do X, which should help slow down the enfuckening of things.
- But if X happens it will accelerate the enfuckening of other things.
This one unexpectedly got me
it's litterally the xkcd meme :
- There are 14 competing standards...
- Oh wow, this is madness, we should have only one standard to unite everything instead of this mess!
- There are 15 competing standards...
clusterfuck^2 is what they want. That's the only way to prevent your job being taken by an AI.
How I understand it is: We will make 2 languages, current one that would only expand with more capabilities and APIs, but not syntactic stuff, and one that would have new syntactic stuff. Build tools are expected to transform second into first and engines expected to execute first one. Basically, that's looks like what happens now in moat cases, but with clear distinction between JS you write and JS that arrives to browser.
- Yep, absolutely agreed.
- Yep, that's the intend.
- Yep, that's what'd happen.
JavaScript is and likely always will be a hotchpotch of random shit thrown together that somehow sticks and glues ... untill it doesnt.
babe wake up, two new javascript based javascript abstractions just dropped
web devs on their way to abstract and wrap things to make things "easier"
Babe: "let me die in my sleep please"
Babe: "What? This sounds is just a normal day in the JS world. *turns away and snores away* "
new frameworks? NO! NEW TWO JAVASCRIPTS!
I have been thinking for a long time that the idea of all web sites being viable forever is ridiculous. I don't see why we don't time capsule current javascript and say all past sites work. And then update new web standards. I don't see why the browsers couldn't manage classic javascript and modern javascript.
This way we can stop trying to add all these proposals, and cobble together a language that was written in 15 days originally, and clean up the mess based on what we have learned over the last few decades.
The trick would be interoperability like a codebase that can run run both TS and JS. This would also be able to run Modern JS.
if build tools is required for javascript to have new feature then why not just use some other lang which complies to javascript
If you're compiling and transpiling and uglifying to the point that sourcemaps are something anyone thinks are a good idea, then you should be shipping a BINARY, not code.
@@Asdayasman I think so too but wasm doesn't seems to be ready or maybe will never be
@@omkargarde5867 You've misunderstood what WASM is, the exact same way I did when I first heard of it.
It's not running JIT code on the host CPU. It's running made up machine code on a javascript VM. It's not about performance, it's about providing a compiler target. (And a shit one at that).
@@Asdayasman Why can't we just use Java VM or Dart VM if that's the case. You know google could've fixed the web if they didn't abandon Dart in the Browser.
@@asandax6 See now here's how I know you're not a JS developer - you're asking sensible questions that would lead to a good solution if investigated. JS developers did the exact opposite at every point possible.
Maybe the community should resolve esm vs commonjs before another fork
The year is 2074 and CommonJS was almost eliminated...
I don’t doubt they introduce another way to handle imports instead of resolving this problem.
I would say that's solved already. If you still depend on something that requires commonjs, you should consider removing that.
That was resolved. One is an ecma standard, the other is not.
@@pxkqd haha, i wish man, try building a VSCode extension and then *just* remove VSCode 😢
This whole system is beyond ridiculous at this point.
General solution :
- rework engines so the DOM can be directly (an efficiently) handled by WASM
- adapt tooling to efficiently support AssemblyScript
- ???
- profit
End result -> nice DX in pretty high level language (AssemblyScript) and good performance for end users (WASM)
So simple and elegant yet noone talks about it
absolutely real
what is that...
simplicity?
efficiency?
a solution with a chance to work?!
ew
This would require search engines to have the capablity to index wasm content. Which would be amazing, but requires coordination between too many actors
@@gabrielus123 do they index JavaScript now? No. Sites are rendered server side specifically for indexing
I like the idea, but for it to work the runtime needs to be simple, like .NET runtime. C# keeps getting new features every year, but most of the time they don't cause any change to the runtime. The high level C# code gets lowered (ex.: pattern matching is lowered to a bunch of if-else statements) and then finally compiled to .NET intermediate language. This strategy works because the language can get new features that usually don't require any change to the runtime, keeping it simple. To be honest, that's what I expected WebAssembly would be when it was first announced.
If all they want is effectively a "bytecode" language to support - why not just mandate that all JS should now compile to WASM instead and just support WASM in the browser?
because whoever tries to make a wasm compiler for javascript will most likely not succeed, javascript is too much of a dynamic language to be wasm-compiled
@the_agent_z then throw out the confusing JavaScript haha
just ditch javascript and use c imo
@@clementpoon120 js and it's consequences has been a disaster for computing.
If JavaScripts so good why didn't they make JavaScript2
If JavaScripts so good why didn't they make 2 JavaScripts*
they already did, it's called JQuery
You mean ECMAScript 4? Well, should be easy to find the history on that mess 😅
@@ivan.jeremic thank you for reminding me why i shouldn't use a framework inside a framework with 15 libraries to manage & maintain every 2 weeks ... i guarantee your not mad at me, it's the stress your ecosystem that's causing you to comment like this.
i feel bad for newcomers to stumble upon people like you in the comments
I know you're making a joke, but honestly I think that JS2 would be the best thing. The chance to remove a whole lot of old JS that people don't use any more, prioritise the newer syntaxes and clean up a lot of the polluted parts of the language. Then could polyfil the older parts of the language.
I QUIT.
I quit 2 years ago. Now I'm working with C.
I feel this in my bones!
We all quit
And you'll improve by doing so 😊
native forever!!
19:28 the JS ecosystem has been evolving at an insane pace literally since I started professionally. It's been the case for at least that long that you will learn one tool, and in three months it's obsolete and you need to learn another. Sometimes the new tool is a way to not need to learn the next tool by combining all the tools in one, then that also becomes obsolete.
JS devs are, by and large, a group of insane people who need to lay off the fucking methamphetamines for eight minutes. This is not the browsers' nor the users' problem.
AMEN! Finally someone talking sense
Fundamentally there's nothing you can do in the language today that you couldn't do a decade ago, generally these changes to JS amount to just a different 'improved' way of doing something, for example nullish coalescing is a one line way of setting a default value. Its all so tiresome. That said, this constant state of change keeps us all in employment, so meh
If JS becomes a compiled language essentially, it’s worthless. It’s just the world’s slowest compiled language. At that point we may as well look at having JSSugar go to wasm or some other compiled source that can actually be far more performant.
JS’s whole thing is that it runs, as is, anywhere. It’s basically Doom.
what you are defining my friend is a "scripting language" , this is what JS was until they started putting typescript on top because the soyboys that wrote C#/Java could not work without types
Yes there was babel for making your js compatible with older browsers but still it was scripting
Hermes JS Runtime:
@@SXsoft99soy devs are one who can't work with types and like vannilaJavaSh*t
@@rohithkumarbandariyes
Well at least it used to...
Seems to me that if you need all these features the focus should be on making WASM capable of controlling the DOM so other languages can compile into something usable on the web. If JavaScript can’t be everyone’s ideal language for the web then it should only be the default and WASM can open the door to other options.
But also if implementers cant implement either the build or browser side of things fast enough. Then I think our community goes too fast and doesn’t think enough about the consequences of each change to the standard. We can’t even transition to ESM in a timely fashion so maybe slow down a bit?
you can already access the dom apis with rust through web-sys without any javascript glue code. leptos and dioxus are bult on top of web-sys. it should be possible to do this with other languages but no one has yet.
@@bear458ziif-s because the js glue code is in web-sys. WASM doesnt have access to DOM API
@@bear458ziif-s web-sys does use Javascript glue code, it's just hidden really well. It uses wasm-bindgen, which "is sort of half polyfill for features like the component model proposal and half features for empowering high-level interactions between JS and wasm-compiled code" -- The wasm-bindgen Guide
WASM can control the DOM, either through JS bindings passing messages or sharedArrayBuffer in a proxy, undetermined which would be less latent, the easier is probably good enough. I found it incredibly simple to expose WASM functions and call them from Alpine.js to basically just write reactive-HTML with the logic processed in WASM.
@@bear458ziif-s there are JS bindings in other languages that target WASM, not just rust.
They just need to make web assembly have the same access to browser api that javascript currently do. Then rip javascript out and make it a web assembly package that is packaged with the browser. That way people can use whatever web assembly package of javascript they want or better yet just use another language.
JS0 just seems like a worse version of web assembly.
This would detach JavaScript development from the browser and move it to a separate project and even let you pick the version you want to run.
I agreed with this
Nobody wants to switch from JavaScript (Typescript) to Web Assembly.
People who don't like JS really do struggle to understand why people like JS, don't they.
@@jonhobbssmith You'd still be writing JavaScript.You could have the build tool right inside your browser as a plug-in so that you could REPL and code within the browser. You really lose nothing except having to import your build profile to the browser.
@@jonhobbssmith javascript is not the "ultimate form of programming languages", so yes, people want to switch (or *start*) in languages other than ts(js).
it's literally a choice.
they're like - implementing new JavaScript features is hard, let's freeze the language and offload the difficulty to transpiler, lol
Compiler driven development was a thing in C/C++ for a long time. Just break the language and add what we want
Exactly it's just moving the issue to another team.
But they kind of also have a valid point: Supporting backwards compatibility means all these small features like Error.isError have to stay in JS forever and pile up as more of these features get added, even if the older ones are obselete. This pile-up issue makes maintaining browsers or runtimes way harder.
If they want simplicity then they should remove obselete stuff
@@ramsey2155 Sure. Let's move the compatibility issues to compilers and bundlers so that every older established website needs to recompiles and redeployed every few months because something in the browser got deprecated...
To break it properly, start with removing „var“ and switching the default to „use strict“.
😂
On one level, this just strikes me as an overcomplicated relabeling of what already exists: JS0 is just another name for native vanilla JS and JSSugar labels the existing ecosystem of tooling, frameworks, preprocessors, and syntactical sugar the dev community contributes and propagates on its own. When properly managed, such features, capabilities, and conveniences that emerge from that segment give rise to native functionality - officially recognized and supported in the ongoing ECMA standard and JVM runtimes.
On another level, what this REALLY sounds like to me is a political maneuver to drop the existing commitment to maintain and advance core native support. We've lived through such dark times already and I thought we'd all learned the lesson: The more comprehensively state of the art functionality flows into native browser support, the better off ALL stakeholders will be, end-users especially.
In my view, if we allow such a break in the continuum between third-party tooling and native browser support, the result will be fragmentation, incompatibility, and inconsistent implementation; leading eventually to latency of innovation and outright atrophy.
I may be off base here, but the main reason why anyone uses transpilers and other build tools is for businesses to support customers who never update their browsers. This proposal would be a nightmare for an enterprise web store to implement.
The browser publishers should have room to implement new ECMAScript standards as they have capacity, too, which is the other reason for the transpiler infrastructure.
I think you are right that we already have this system in practice, it's just that there are many stakeholders implementing ECMAScript standards. JS has always been about business and capturing marketshare on the web, so I'm also extremely wary of any group proposing a new ecosystem. Who profits?
The JS ecosystem is as much about keeping the web browser business equal access for those who create for-profit webpages and web apps as it is for the developers having consistent tooling and ECMAScript standards.
End of the day, its up to google. Considering how much of the market share V8 has as a JS engine, if they decide they don't want to support new language features, they don't have to, and we'll be back to the dark days like when we had to support IE6. That's the problem when you let one company have such a monopoly.
@@KineticFaction While I agree with your insights, I would add Chrome's market dominance is due largely the way Google picked up the torch where Netscape and Mozilla Foundation fell short to advance V8 and put real energy into pushing native browser support not just for Chrome and Chrome-based browsers but also to the Webkit browsers. Maybe you think of them as monopolistic but the fact is someone needs to drive the standard and bring the constituents to the table. If there truly being monopolistic, they would've rolled out Chromescript by now and abandoned all open standards. Microsoft tried that ages ago with VBscript and I still think they intend to do something along those lines with Edge with a direct Typescript VM. If Google abandons that commitment, someone else will fill the void and, if is it Microsoft or Apple, each has an established tendency to go more proprietary, more walled garden, less open.
"We can't let Google have a monopoly on the JS ecosystem"
"Ok, so we'll let Tooling implement these things through polyfills"
"Wait, no not like that"
This has to be a joke. Almost all those CVE's they listed are for low-level features that can't even be transpiled.
At this point, why not directly rely on WASM?
I think it's harder to compile to wasm for new features
You can if you want. JavaScript is needed to run workers that invoke the WASM binary code, though. Some tools let you translate other languages to JavaScript or WASM. A compiled WASM is likely harder to debug using the browser's debug features so that's one reason to continue either using JavaScript or transpiling to it.
WASM doesn't even have strings, let alone objects (AKA dictionaries, hashes, or maps depending on who you are asking). JSON can't be natively supported without that. They came up with interface types but it really hasn't gone anywhere after years.
@@brennan123 I think that problem is already solved. WASI and the WASM component model exists already. It has every single type you mention here standardized down to the WASM ABI level. It isn't hard.
First they should be working on a ECMA standardized JS runtime for WASM, like an official version of Porffor also supporting JIT, governed similarly to the way the Rust codebase is developed. THEN split the ECMAScript language at the point of memory allocation, so there's a JS version that can be compiled and run without a runtime, and there's another JS version that needs a runtime.
Dude this is an incredible idea
This is what I thought would be the proposal from the thumbnail
WASM looks more and more appealing with each passing moment
I agree. If WASM/WASI could get past proposal stages, then I'm 100% switching to WASM.
except its basically dead, you cant let people use it in advertisment, and ads powers internet...
I've actually created web apps using WASM and the experience was... okay? I mean, until it wasn't. Then it was a hot, miserable mess. Using tooling to make WASM easier, such as Leptos, makes things easier, but not entirely pain-free.
I will say, however, that major WASM frameworks should support SSR, so ads and SEO shouldn't be negatively impacted.
I think WASM is in the same place that Rust and Wayland are in... It's weighed down by bureaucracy and issues can be stuck in proposal stages for years.
If WASM or browsers could agree to expose async/events API, DOM APIs, and networks APIs, then WASM would be a viable replacement to JavaScript. As it stands, you can call into JavaScript for these things, but it's painful.
@@reybontje2375 IAB, ACA and others are trash, we don't have trusted Agency that regulates internet advertisment in a first place, plus Apple just block some features on IOS for good. I don't see any future with WASM in advertisment.
@@reybontje2375 leptos and dioxus support ssr. they have from the beginning. it's yew that didn't for a long time.
Vanilla JS is far more powerful than most devs think it is. You can use TS to check types and nothing else, or just use check directives. I can see the appeal of this for folks in React (or similar) where the browser is a couple of levels of abstraction down, but for the rest of us I think it's incredibly toxic. Build tools are already a nightmare where you end up fixing bugs in your tooling to try and get the output you could have fixed directly. JS Sugar just looks like Google trying to spend less on their platform team.
Web Dev is a peculiar manifestation of Hell.
How many levels of hell are there again? Whatever the answer, add one for JavaScript and another one for web development as a whole
"JS WebDev"
the problem i have with this is that it greatly increases the difficulty of getting started with web development.
now there will be layers that have functionality that doesn't work unless you have a specialized development environment, which is much harder for a beginner to understand than putting a script tag in a HTML file and having every part of JS work with no build step.
i remember messing around with basic HTML when i was like 12, at that time i definitely did not understand using the terminal so using build tools would not have been easy for me
I'd argue that following a tutorial to download an IDE and run your first hello world in any compiled language is not notably harder than following a tutorial explaining how to write your first JS code in the browser console or an HTML file. But I live in compiled-language-land (and like to stay there), my views might be biased...
True, JS has so many users in the first place because of how easy it is to get started with it. No compilers, no build tools and no environment to set up....just a plain text editor and a web browser is all you need to get started. No installation is required. They are killing this simplicity by over engineering it.
@@BillieJoe512Sure, in the case of native land compilers are a necessity but in js we don't need those, browser runtimes are already optimized with all sorts of magic and jit. Why complicate things further if not necessary?
You only need those transpilers/build tools for writing js at scale with a good DX. And I feel existing tools are doing great, tc39 staging process is also okay! Th problem is the insane speed we are moving at. The problem is the tribal nature of this community and tech giants wanting to takeover the whole ecosystem.
Every language has its own quirks. Like C has existed since forever, and is being used widely but it also has its own problems. C standards are also kinda mess imho but no one in their community yells like the people in the js community. Comparing floating point arithmetic is a problem in most languages take python for example, but no one in their community says python sucks for this and here we have the js community with dedicated pages explaining why js is a weird language with the same examples
@@BillieJoe512 well in that case i still think this is a bad change, if we want compiled code in the browser we should expand the capabilities of WASM or offer another language, not just transpile something to a dynamic language.
@@krelsen7 I agree, I would love to see WASM capabilities expanded
It's not the first time JavaScript has seen a split
The first one was during ECMAScript 4's development, due to which we got both ECMAScript 5 and ActionScript
ECMAScript 4 got so much feature creeped that the whole proposal was eventually scrapped
This is a concept that has made the JVM and .NET's CLR such powerful language targets. It makes sense for the browser to have a runtime equivalent and use an intermediate language (JS0) just like how we have Java bytecode and MSIL. This also allows full interoperability with libraries written in other high-level languages targeting JS0. Is this a smart move considering we're not starting from square one? I'm not sure.
- Other's have touched on this already, but to me an elephant in the room for this proposal is that the authors give zero reason for JS Sugar to even exist. All the supposed benefits they cite would come from freezing/near freezing JS0, there's no advantage to having an additional "standard" for JS that has to be transpiled by tools in order to work (and arguably some big disadvantages). I think the authors realized that openly proposing a feature freeze on JS would be hugely unpopular, so they added the JS Sugar proposal to make it sound better, even though JS Sugar is basically the worst of both worlds between transpiled languages like typescript and vanilla JS
- The chrome team seems to be still drawing the wrong conclusions from their surveys of JS developers and ignoring the evidence from usage statistics (which is funny, because they should have some of the best access to the latter). Most of the web isn't using a framework/"library" like react on the frontend, the most common solution there is jQuery. I don't have direct statistics on this, but based on the aforementioned prevalence of jquery I _strongly_ suspect that most custom javascript is already no-build, or at least close to it.
The argument for JSSugar was quite clearly to still have a common standard for new language features going forward. Otherwise it's the wild west of language features and we're right back to babel configs with 4000 plugins of extension syntax.
@@jakobkristensen7024 The issue is that if I want to use a JSSugar only feature (say for example pattern matching), I *have* to use a build tool with this proposal instead of using hand-written JavaScript. This makes it harder to integrate as e.g. a Java/C#/whatever project now has to also depend on something like node/npm to get the transpiler, add that to the build pipeline, hook up build dependencies, etc. Them there's making sure that editing the file rebuilds the JS0, hot reloading works, etc. You will still have issues where a transpiler written in Java/C#/Rust/whatever may not support every feature, so you would be tied to a specific transpiler.
It's shifting the work from the browser to the transpiler. But now you have transpier bugs to care about, what JS/JS0 engines it supports, etc.
@@jakobkristensen7024 You would already have to set up such a config if you wanted no features, so there's no advantage there. Further, not having a ton of config flags to turn on individual language features isn't an advantage of having one _global_ standard for everything that transpiles to JS0, but rather an advantage of having one standard _per language_ that does so. You'd get the same benefit without JS Sugar from people opting for e.g. TypeScript, PureScript, CoffeeScript, Elm, ReasonML, Gleam, Etc. in their projects, with the added advantage of each of those languages being able to evolve faster than JS Sugar could because JS Sugar would need to get buy in from a comparatively broader base of users.
@@jakobkristensen7024 Except you'd already need a build tool and all the complexity that brings, and not having a lot of config options depends on having one compile target and standard PER LANGUAGE, not one language. Other languages like TypeScript, PureScript, CoffeeeScript, Etc. would be able to evolve faster than JS Sugar because JS Sugar would need to get buy in from a much broader group of developers before implementing a feature.
@@jakobkristensen7024 You'd still need a transpiler, so no advantage there. And avoiding the wild west comes from adopting a standard, not from there only being one. There would still be only one e.g. TypeScript.
I like how they say "No evidence suggesting tooling use would decrease." did they see the hype over HTMX, a lot of people want to not use tooling. This is insane! Completely agree with you Theo! 😁
Why not then compile JS to WASM and ditch JS in the browser completely??
Because you STILL have no access to the browser API from WASM.
@@voidwalker7774ok, lets give WASM access to the dom, browser api.. 😜🫡
because the people who run web browsers are obsessed with backwards compatibility, and are only willing to compromise on it in the name of security.
I said that 10 years ago, let the browser be just a wasm runtime and send the the engines you need over the wire, for example (HTML/CSS renderer, JS Runtime, or any other runtime for example JAVA, C# ).
@@ivan.jeremic that will slowdown everything, js alone can be heavy on the network let alone a runtime
i am very very opposed to this
part of the beauty of the web is that every website is """"kinda"""" open source because you can see the code run on your computer
by killing vanilla js this just pretty much enforces that web dev no longer has this avallibility
i see this a threat to the open web and am thusly opposed
agreed!
So it's a way to hide code for the users?
How will that effect security and malware development on the internet?
@@AntiAtheismIsUnstoppable you can already nobody and obfuscate your code and even just using TS makes it harder too. And you can do with what i've heard that google does with recaptcha which is make an entire virtual machine running its own compiled programming language all inside of obfuscated javascript and they update it now and again so that it is literally impenetrable.
But what i'm talking about is the "default" which is vanilla JS is somewhat readable. This change makes it so that default is now unreadable (if someone wanna make it readable they just put their code on github as open source - but that's basically opt in now instead of opt out)
Now the end user really has no idea what is running on their browser while right now on a good portion of smaller websites that don't specify make it difficult - they might be able to at least poke around with it.
@AntiAtheismIsUnstoppable
It'll basically be security via obscurity. The worst kind of security....
Can you imagine if C said "writing compilers is hard so could you just pre-compile your C so we don't have to do as much work?"
This is fucking madness. The reason buildtools are so common isn't because they're a great idea, it's because YOU ALREADY AREN'T DOING YOUR JOB.
"I really love my build step" - no dev ever.
It’s their only other opportunity to delve into the cmd line to show off their emoji riddled pass fail scripts
That is literally exactly how compilers work. All compilers, including the js runtimes, internally perform several lowering steps progressively moving code from the high level source language to the target language.
This proposal is basically advocating making a specified intermediate layer that everyone can target and consume instead of everyone reinventing the wheel.
It’s not that we love the build step, but we love writing typescript, jsx etc. So implictly we gotta give build step some love
C not only has a build step but also a macro system that reprocesses code before that. Still easier to deal with than older browsers.
Oh and the bit "Enterprises should just change"... who do think is paying for most of the webapps being written? The enterprises. They've got 100s each.
this is part way funny because this is exactly how C++ started, as a pre-compiler to C…
The browser makers: "We fucked up web components by not knowing how people build things; we should fuck up JavaScript by not knowing how people build things."
If JavaScript HAS to be compiled, there is no reason to not compile directly to something like WASM
The only reason not to is because WASM sucks and it'll always suck.
will it always suck? :( that would suck
@@amagicmuffin1191 Will Always Suck Maybe
@@DarthVader11912 I'm new to this, can you explain why does wasm suck?
WASM doesn't have good DOM access except through JS which is a pretty huge part of what JS does. WASM almost always fails on what makes the web: accessibility, CSS, etc. Most implementations just render to a Canvas and the canvas isn't HTML...
JS+HTML+CSS is the BASIC of modern computing; it's a free, common starting place for new programmers. Please at least make it possible for some useful code to be developed straight in the browser so we don't lose this feature.
JS0 would still be human readable and programmable though. How would you *lose* anything from this?
@@jakobkristensen7024 Because you will need to understand JS0 and JSSugar to debug things in browser. JS0 won't run in browsers without a compiler.
If you ask me, it's a smart move for Google, they will be the only ones with an updated compiler. Everyone else: Node, Deno, Bun, will need to implement their own updates.
In the long run, Google wants to open a new market and sell this "new" compiler.
I don't use a build tool at all. Have never needed one. Each HTML page gets whatever JS it requires for just that page and only that page. So this stupid proposal would be useless to me. I love how a lot of these modern developers forget that Javascript is first and foremost a browser based language.
Just happens when people try to cram JS into everything. Looking at Electron
worse than useless. the fact that js is human readable and writeable is literally it's only redeeming quality. this change threatens to throw that down the drain.
Basically, make all language development, all new features, all new ideas, based on the honor system. I don't see how that can fail. /S
Just move all JS features that can be poly-filled to „stdlb.js“ and „stdlib_legacy.js“, then reduce the JS engine by those features. Implement a logic that imports both of those libraries, if no import statement in the user’s code references one of those libraries, else import as given in the user‘s code. Finally, ship those two library files with every browser download. This should shrink the JS engine‘s code quite a bit.
i like this. this shit keeps me in my job. no way they gonna train new models as fast as they invent new js runtimes, frameworks and standarts. gj
I have been doing web development professionally for 20 years. This proposal forgets that the web is for everyone. I can't tell you how often I write vanilla JavaScript for things outside the gigantic web application I work on. I have bookmarklets to format a link and title of my MR request to post in Slack. I have a Chrome snippet to create the branch name for my current Jira Ticket. Most of my personal projects are Vanilla Javascript. I don't want us to lose the ability to contribute to the web with just a text editor and browser.
wtf he is just reading it aloud
SIMD is not "large amount of data control" lol, simd is instructions on vectors of data. CPU have vector unit so it can batch-process vector if supported operation with just a few or even one clock cycle. for example, there is permutation operation available
SIMD can be understood by example -- look on UA-cam for the excellent "simdjson" explanatory talk
@@DarrenJohn10X I would recommend reading some books on computer architecture instead. You can’t learn a thing by watching entertaining videos
Sssshhhh, he's a JS/TS dev, he not know what native code's full power really is! ;)
Proposal makes sense to me. A few points:
1. We should have a bytecode that can be a compilation target. WASM is too low level but we don't need every feature in JS to be possible in the bytecode. WASM also moves too slow. How many years do you need to implement strings FFS? Shipping actual source code and calling it the "compiled" is weird. It's still source code and not optimal for bandwidth and requires additional parsing by the browser.
2. Garbage collection should be present in the platform.
3. Every type that is valid in JSON should have a native representation in the bytecode.
4. New APIs will constantly need to be added. This regularly happens. Bluetooth, WebMIDI, WebGL, Canvas, heck even the first version of CSS was an add-on. This does not mean we need to update the language or bytecode though, just what APIs are exposed. That should be a bit easier.
5. Look for inspiration from JVM and CLR.
6. No need for classes, spread operators, or any of these high level features.
7. const, var, let (just pick one, we don't need all 3). let is probably fine (immutability can be enforced at a higher level). Willing to have const as well if implementers say they need it for performance reasons but there are ways to infer something is const.
Compilers are good for implementing higher level features. You don't need the target platform to support them for them to be usable. JVM has Java, Scala, Lisp, Ruby, Kotlin, Clojure, Groovy, Python, and even Haskell variants. This should be proof enough you don't need every single feature of the language to be implemented in the target.
Having a more baseline web compilation target would enable richer and more varied tooling.
If one is going to give up direct authoring then why bother transpiling into anything at all? Just compile to some form of bytecode and be done with it.
There is no form of bytecode that runs on all machines. WASM is not bytecode. You also don't want this future you are advocating for, because it creates a developmental hell where you will always run into weird bugs where you need to understand the target code you are compiling to (like WASM).
@@gamechannel1271 Sorry but your points are ridiculous.
1. You don't need to run on every machine. You target a single VM and can now target every platform where that VM has an implementation.
2. WASM is bytecode.
3. Your argument would also imply that every compiled langauge is bad if you don't understand x86 assembly, ARM assembly, JVM bytecode, Python Bytecode, or even modern JS engines. Compiled languages are cool and good and interpreted languages should die.
This feels really similar to how C# is compiled: it is first 'lowered' into a simpler version of C# before being compiled into MSIL. This makes it much easier to implement certain language features without worrying about implementing it in the runtime (which means it can built faster, but also compatible with existing runtime versions)
Build tools shouldn't be required to use JavaScript. I have at least 2 projects where I hand-write JavaScript without using build tools. Polyfills can be useful for supporting older browsers (and can be usable without build tools), but shouldn't be required to use a new feature as they can introduce performance issues compared to native implementations. I shouldn't need to use a build tool to use syntax features -- if I did I may as well use TypeScript or some other language.
Time for me to go back to making terminal apps in C/C++. I'm done with this level of chaos tbh
ESM and BigInt are super important on the server. BigInt in particular is also in the category of "making things that were previously impossible, possible".
But yeah, in the browser, you are usually dealing with smaller numbers, and important computations where precision is important are often delegated to the server anyway, thus making the regular Number type sufficient. And any kind of module system (be it ESM, CommonJS or one of the older ones like AMD) are taxing on the browser without build tooling that bundles it into the minimum number of files possible.
It seems like engine implementers and V8 in particular, are forgetting that their work is important not only in browsers (where they have plenty of data to show how much each feature is used, and how it affects users), but also in servers. The fact Node, Deno and Bun weren't even mentioned in passing on the slides tells me this is very much a blind spot for them... They're lumping them into the "tools" bag, when they're in fact more akin to a browser... Technically, they're a runtime... Not a browser runtime, but a server runtime. Still.
Improvements to ECMAScript affect server runtimes even more than they do browsers nowadays... And that in turn allows tooling to improve too, since tools often run on the server runtimes... And tooling improvements in turn allow developers to be more productive, which in turn makes better products for end users.
If engine implementers are under staffed, that seems like the real issue here... They very much shouldn't be.
The thing is, browsers _have_ to rely on JS. Devs on the server don't, it's a choice they make. There's plenty of other languages with their own runtimes and frameworks you could use on the server. On the browser, in the end you have to use JS at some point. So I can see how these things are not seen as a priority and are part of tools devs opt for.
It wasn't long ago that you posted a video about Oracle (still) keeping JavaScript under their control. The person at the time was asking Oracle to "leave" JavaScript to the open source community.
Well, in my opinion, everything that has been happening with JavaScript (TypeScript, linters, etc., practically every day something new is invented) is due to Oracle's negligence. JavaScript doesn't have a strong group or organization behind it to say "no" to certain things, which is why it has become the mess it is today.
If only we could distribute compiled binaries into stand alone units that performed certain applications then we’d be onto something….hear me out, any language you can think of but compiled into machine code….
I've only heared mysteries of such things. Legends, even. That one would feed a file into this "kompailar" and out comes a whole program, that just runs, and is small...
Ah, dreams...
Twice as many frameworks!!! Wonderful!
Java has a virtual machine, and a language, with a compiled intermediate, and features are split between them. C# is a syntactic layer over a similar implementation core called .net. This seems to be where JS wants to be going, if I'm hearing this right. The main difference is that the standards body ships both a standard platform, and a standard compiler. You can use alternatives, including compilers for alternate languages, but there is a standard implementation that is shipped. If JS standardized this split and draw the boundary right, interesting things could start happening like "Ruby for Node" and "Python for Browser" that downcompile to JS0, but a lot depends on exactly where they draw the boundary, and them stepping into the tooling space with standard implementations
This is shitting on the whole current JS single language ecosystems. My immediate response is that this fundamentally sounds very much "solving minor inconveniences by arbitrarily partitioning the core ecosystem of the entire goddamn internet."
Sounds so great, let's introduce even more bloated installed packages instead of bloated JS runtimes, and also enjoy more supply chain attack and more left padding, neat.
At least part of this issue can be alleviated with a better standard library(see deno's JSR standard library) however the rest of this is insane to me.
If you want to make JIT compilers in browsers less complex then introduce proper typing in JS already.
You can measure the seriousness of this entire drama by the design of the slides and font choices. Comic Sans.
situation: there are 14 competing standards...
I'm not following the logic for why in this implementation there would be additional security risk from the tooling perspective. If the runtime is enforced to run JS0, and they spend their resources to ensure that JS0 is 'safe', whatever that's supposed to mean, and if all of the fun stuff is implemented in JSSugar which is compiled down to JS0, why would the JS0 implementation pose any risk if the runtime, hypothetically, is ensuring its safety?
I think this is a prank - If they would’ve wanted to be taken seriously, they would not have used some comic sans derivative for their slides
you clearly haven't been to the industry long enough...
It almost looks like a direct troll to DHH and his No Build approach...
@@ElvenSpellmaker- Well, if they are trolling DHH, maybe I *do* like the proposal after all. 😂
@@CEOofGameDev touché 😂
youd be surprised...
I think the points made in the presentation are roundly quite good, in my opinion. My understanding is that they're positioning JS0 as a more reliably static compilation target rather than a language to be written in. This is all fair, but my question is this: isn't this what WASM aims to do? Isn't this a 'WASMscript' proposal?
A proposal to stay the same
This happens when one thinks that "new features" to a programming language leads to a the better future.
Why don't we just have some kind of versioning thing? And make it backwards compatible, such that if a site doesn't specify a version, it uses classic JS, and then new versions are YOLOed.
That’s what is slowly killing Java
while it sounds good in a vaccum, I can't help but feel like this solution (and ecosystem as a whole) is fatally overengineered.
It's at the point where adding another "over" is actually accurate and a great display of the levels of abstractions, tools, and hoops.
It is overoverengineered.
i'm using htmx and raw js files for a web frontend at my workplace :)
why would you force anyone to use fancy tooling mantained by third parties who can just... stop doing it???
200 IQ solution: Add macros to vanilla javascript. That way users can use vanilla js to extend the language.
C did macros in the 1970s, it's time javascript catches up!
Personally, I feel like the worst thing of this proposal is that these tools will become even more susceptible to feature-creep and we'll end up in a situation where things have constantly be started from scratch because tools saw high adoption and are now afraid to break things for their existing users. Or in other words; this will end up making projects like Webpack go through Webpack's life cycle (aka becoming to bloated and slow with no real intention to clean it up and instead rely on other tools to replace it) in very quick succession.
If some browsers don't want to implement some "syntax sugar" kind of JS-feature just because it's possible to implement with a polyfill; then just make the polyfill built in to the browser.. Kind of how chipset manufacturers sometimes implement complex instructions using microcode instead of silicone. That way it will work in browser without a bundle of polyfills that everyone has to download multiple times from every website everywhere, because the polyfill is already in the browser.
But in general polyfills are much slower than a native implementation even if the polyfill is already cached because after all you then have to re parse and interpret a lot of extra code rather than having it run natively, so it's not ideal.
Look at those mentioned bugs. 7:41
BigInts, Class scopes, optimising super, resizable buffers, WeakRef... Probably some of the others too; all of those would _have_ to go in the JS0 part. How would a compiler handle BigInts in any kind of performant way?
Also, obviously classes were a mistake; we all knew it, they did them anyway and now were stuck with them.
Truth - evolving the language for its own sake does introduce systemic risk for billions of end users. Freezing browser-native JS language and focusing development on WASM features does feel like a collective win.
There has to be a complexity ceiling on our browser scripting language, and chances are we've already reached it.
When runtimes like deno and bun are wanting to make devs lives easier, browsers are gonna be going backwards??
If think about it in a Java way.
Browser would need just implement the JIT interpretater and dev tools will "compile" the sugar and human-readable code into JIT.
So, essentially, "just give a compiler for the JS".
Browser will implement "JVM" and all sugar will exist only in the compilation time.
P.S.: I still miss types in runtime, so it would be nice for JIT to support it as well :D
Imagine how much time is wasted globally on the insesant JS tail chase.
Super excited to code in JS++
This won't sail because it will break the whole 25 years (or when the JS came) worth of websites in modern browsers.
Also, the Ruby on Rails is going no build. That's a monumental evidence that tooling is decreasing.
And seriously, such suggestions need to be done by thinking beyond javascript because other languages use javascript without tooling for enhancement if required on the web without going full crazy on tooling.
Let's just make web assemblier, like web assembly, but it can manipulate the dom, but has no interop with JS.
And we can all forget about this.
But you know, at the end of the day, this could've happened with BASIC instead.
So I'm not sure if we are living on the better or worse timeline right now.
and suddenly PHP gets more devs (:
I don't understand. Aren't they proposing exactly what we already have. We have ECMA script which is what is actually run in the browsers. And then we have build tools which can handle transpilation of experimental features or even languages (like typescript and coffee script).
And on what planet would it be easier for a build tool to implement features BigInt or async generators?
lets take the thing that everybody hates, and make it unavoidable
After watching the video.... Aren't we already doing this with TypeScript?
TypeScript is a superset, and ends up transpiling down until it reaches a compatible version.
TypeScript enums are an example.
Now, there's another thing, syntactic sugar, ends up using less code than the JS0 that it will transpile down to. That{s because transpiled code will likely be mangled and minified.
Its a good idea, but what we REALLY need is a compiler! Remove Javascript from the browser and replace with WASM. Let Javascript itself manage its own features, not when browser decides to implement them in engine. Browser should only build support with calls over WASM and thats it. That way, when a feature wants to get into javascript it gets into the compiler and its there for everyone! No need to wait for browser updates, adoption or certain companies (you know which ones) not wanting to implement something. And there would be no need for any polyfills!
If we could go a step further and abstract the render engine to wasm somehow, we could also get compilers for HTML and CSS!
do you understand why WASM SUCKS at the first place? thats why no browser vendor won't adopt it
I can't even figure out babel, screw this
JavaScript is already AT LEAST two languages
We should come up with a nice, consistent binary DOM API, and port JavaScript to a generic, standardized VM, so we can run JavaScript or any other language in our browsers, without re-engineering the runtime every time somebody wants to add a new tiny change.
They are SOO close to discovering compiled languages. Guys, just make WASM something that can be a replacement for js, and compile your js/ts INTO WASM. (Written by a rust enthusiast)
I would be down for that because then I could avoid rust completely and use Zig LOL (I think zig has the best wasm compiler, I looked into rusts's version and it was a lot harder to implement. I have only written Zig once, and I was able to get some working wasm in a very short time.)
i’m so excited to see so many replies here suggesting wasm, i really want to write rust instead of typescript at work 🤞
@@demoncorejunior look into leptos and dioxus. you can already do it.
@@demoncorejunior look into leptos and dioxus. you can already do it.
If everyone would have adopted dart back in 2012. There would not have been this problem.
This slideshow is an insult to frontend devs for obvious reasons, and an insult to designers for using horrible fonts
SIMD is pretty much built into the chip and allows you to operate on multiple values at the same time like doing 4 adds at once. It’s a low level operation so it makes sense. AVX is an example of SIMD that’s part of the x86 ISA
So in a situation where a compiler is not needed you're screwed with the more efficient and readable syntax features? I was pretty sure that at least one purpose of compiling something was to make the code run faster and take less resources. It doesn't sound like that would be a thing that much with this. A polyfill might run slower.
Compiler is a generic term for different things, it doesn't have to be an optimizing. Compilers can be used to provide compatibility, optimizations, translations, packaging, not mutually exclusive.
@@dealloc Sure but we all started doing this originally to "minify" for the most part.
New JavaScript Frameworks, Runtimes, Browsers, Compilers, Bundlers and now the language itself, what a predictable surprise
You mean more clowns wanna join the circus stage? Sure, bring them on!
"Growth for the sake of growth is the ideology of the cancer cell." -Edward Abbey
Ok, that's enough. Let's move to other languages at this point. Just improve wasm and move on from JS.
It's an interesting proposal. What's probably worth to point out is it seems to be only saying that new features going forward will only have to be implemented at the tool level and none of this precludes devs who want to continue working on the browser directly in JS0 as they have been, which is important to me since I generally disacree with this modern trend of everyone running all these tools just to take an interpreted language and spit out... the same interpreted language, but worse.
Yeah I notice a lot of comments on this video treating JS0 as an intermediate bytecode or other format that is not human-readable
So if I add cream to my sugar does it bloat my backend? Is Diet JS more or less likely to make my mouse twitch. How about memory leakage?
OK hear me out, what if instead of splitting JS into JS and JS we just leave the system as it is and use ClojureScript.
It's called desugaring in normal programming languages.
I tell you what , my proposal, build the web again without javascript
Finally a video where you don’t present yourself as some unhinged self centered teenager. I hope you try your maintain this vibe
I push back on the notion that javascript "already" requires build tools to be productive. In my view, that was definitely true 5 or 10 years ago.
Much less so now. Although build tools do have benefits, a lot of those benefits don't come entirely for free. And you can certainly get by without them.
I often get the sense that web devs are so used to piling shit on top of their stack by default that they don't understand what the foundation is capable of.
More Wordpress drama pls! ⚡
My idea of an ideal future is that the desugaring step and optimization step are made as independent as possible. Design JS0 as a dead-simple intermediate representation. Think Java bytecode. In fact, it could straight up be WASM.
All of the syntactic sugar in JSSugar (and TC39 proposals) would be required to have a polyfill/desugaring implementation. Then, almost completely independently, build tools can coordinate with engine implementations to define annotations/comments for JS0. Any metadata that engine devs need to optimize the runtime would be communicated through those annotations. But you should be able to strip any and all annotations out of a JS0 module without impacting the correctness of the program.
That means that a minimal build tool could just use the desugaring implementation of any language feature, without providing any annotations, and still produce something that's correct. Likewise, a minimal engine can opt out of implementing an optimization using those annotations, and still behave correctly so long as it supports core JS0. People seem to love LLVM and Java for taking a similar approach. Why not do it here?