WebAssembly: 4 Predictions for 2024

Last year, I called 2023 the Year of WebAssembly. Looking back, I would say that prediction came true. Several WebAssembly (Wasm) standards made major progress, and multiple languages advanced rapidly: Python and Ruby now include Wasm support in their releases, and the official Go project is adding support for Wasm and the WebAssembly System Interface (WASI).
The folks behind Spring I/O hosted the first Wasm I/O conference in Barcelona, and several months later the Linux Foundation hosted WasmCon in Seattle. GlueCon and DockerCon both had WebAssembly tracks. Suborbital was acquired by F5 (makers of NGINX), and Adobe tried to acquire Figma (but was thwarted by regulators), underscoring the market value of two of the industry's most Wasm-forward companies. Throughout the year, analyst reports and news articles grew increasingly bullish on WebAssembly's maturity and potential.
Looking forward to 2024, I see four key events on WebAssembly’s horizon.
1. Wasm Is AI’s Perfect Match
A fascinating phenomenon in developer tooling is how often programming language paradigms are coupled with infrastructure advances. Java accompanied the web in the 1990s. Python was the big data language as NoSQL-style databases took hold. The Go language boomed alongside the container ecosystem.
This trend will hold as artificial intelligence (AI) advances, but the technology winner here will be WebAssembly. There are three reasons why Wasm is the ideal match for AI workloads.
- Wasm’s platform neutrality, which extends to GPUs, makes it portable across hardware. A developer can build AI apps locally using slow (but cheap) CPU or GPU inferencing, and then deploy onto a cloud system that has massive (and expensive) AI-grade GPUs. And not once will the developer have to ask, “What is the GPU architecture of the deployment environment?”
- Wasm’s fast startup times mean AI inferencing can be done on demand without waiting for a virtual machine or container to come online. That, in turn, means saving money (via efficiency) on costly GPU resources.
- Wasm’s portability and small binary sizes mean that the application can be moved as close to the data and GPU as possible.
One of AI’s big themes in 2024 will be efficiency. How do we shave off time? How do we cut cost? How do we run more apps on the same hardware? Wasm is an efficiency boon.
2. Three Big Standards Are Finished
Wasm is standardized under the auspices of the World Wide Web Consortium (W3C). The core Wasm standard was finalized years ago. But there are three add-on standards that are vital to Wasm’s success:
- WASI
- Memory management
- The component model
All three of these standards have been in flight for a few years. And in 2023, all three of them saw major enhancements and advancements.
WASI is entering Preview 2 in early 2024. This stage introduces networking support, the last major missing feature. Given the quickening pace of both the standard and its reference implementation, I believe WASI will reach 1.0 by the end of 2024.
The memory management (garbage collection) extension to Wasm allows guest languages to delegate memory management to the host runtime. While this is a low-level detail of how Wasm works, once this specification is implemented, garbage-collected languages like Kotlin, Java and .NET will not need to ship their own memory managers. This speeds up adding Wasm support to the most important and broadly used languages.
Finally, the Wasm component model unlocks Wasm’s true potential. With the component model, one Wasm binary can treat another as a library. And the source language of the library doesn’t matter. That means that for the first time in the history of computer science, libraries from arbitrary languages can work together. Your Rust app can import a Python library that in turn uses something written in Go. This will change the way developers work with dependencies and libraries. There are already a few implementations of the component model, but the specification will be finalized and production-ready in 2024.
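To make this concrete, component interfaces are described in the WebAssembly Interface Types (WIT) IDL. The sketch below is purely illustrative (the package, interface and function names are hypothetical, not a real library); it shows an interface one component might export and a world another component might target, regardless of what language either was written in:

```wit
// Hypothetical package name, for illustration only.
package example:text@0.1.0;

interface transform {
  // A function some library component exports; it could be
  // implemented in Python, Go, Rust or any Wasm-targeting language.
  shout: func(input: string) -> string;
}

world app {
  // The importing component doesn't know or care what language
  // implements `transform`; a composition tool links the binaries.
  import transform;
  export run: func();
}
```

Tools such as wasm-tools and wit-bindgen can generate language-specific bindings from definitions like this, which is what lets the Rust-imports-Python-imports-Go scenario work.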
3. Wasm’s Home Is on the Server Side
Wasm was originally written to execute in the browser. But these days, the momentum for Wasm seems to be building more on the server side of things. Radu Matei, chief technology officer at Fermyon, recently said that when he goes to WebAssembly conferences, “everyone is talking about the cloud and servers. I’m hard pressed to have even one discussion on client-side Wasm.”
I don’t think Wasm in the browser will go away. Companies like Figma and Adobe have demonstrated its value for high-performance browser computing. But I do think that the primary use case for WebAssembly will be in the cloud. In 2023, I predicted that serverless functions would emerge as a sweet spot, and based on our evidence at Fermyon (100,000 Spin downloads and thousands of apps deployed to Fermyon Cloud), it is certainly taking off. But there is more to cloud than just a programming model.
I expect Wasm to make major inroads in the Kubernetes ecosystem and start showing up in places where efficiency, scalability and cost are big concerns. From edge to data center, Wasm’s home will be server-side.
4. Progressive Enhancement Means Wasm on Both Sides
There’s a nuance to the server-side story that deserves special mention. In the last few years, we have seen the rise of web development frameworks that can (optionally) execute on the client, on the server or split across the two.
In such apps, the developer writes one codebase that contains all of the logic. But at build time, the app may be built to run entirely on the client, or have some part of the logic that is executed on the server side. The terms client-side rendering (CSR) and server-side rendering (SSR) describe these cases.
Among this emerging class of frameworks, some are already harnessing Wasm. The Leptos framework lets developers write web applications in Rust, and then compile them to either CSR or SSR versions. The CSR version already uses WebAssembly. Fermyon recently paired up with Leptos to try something fun: Could we run Wasm on both sides, client and server? The answer is a resounding yes.
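As a simplified illustration of the underlying idea (this is not Leptos' actual API, just a minimal sketch), Rust's conditional compilation lets a single codebase produce both a client-side Wasm build and a native server build from the same rendering logic:

```rust
// Shared logic: the same function is compiled into both builds.
fn render_greeting(name: &str) -> String {
    format!("<p>Hello, {name}!</p>")
}

// Client-side build: compiled to wasm32, runs in the browser.
#[cfg(target_arch = "wasm32")]
fn main() {
    // In a real framework, this HTML would be used to update the DOM.
    println!("{}", render_greeting("client"));
}

// Server-side build: the same logic renders HTML for the response body.
#[cfg(not(target_arch = "wasm32"))]
fn main() {
    println!("{}", render_greeting("server"));
}
```

Frameworks like Leptos layer routing, reactivity and hydration on top of this basic compile-the-same-code-twice approach.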
In the future, will we be able to write our web apps in Python, Go, Rust or whatever language and have them execute in this dual CSR/SSR mode? The outlook is good. In 2024, I think we’ll see more projects that do exactly this. And this is the first step toward Wasm’s true strong point: As a binary format, it can already run just about anywhere, but tools like this will make it possible to intelligently locate the binaries where it makes the most sense to run them. That may be cloud, browser or edge, but ideally the decision will be made automatically. And developers won’t have to give it a second thought.
2024 Is the Year of Wasm in Production
If 2023 was the year Wasm exploded onto the scene, then 2024 is the year Wasm will make its way into production. AI applications will make use of Wasm’s portability and GPU agnosticism. The component model will allow us to share libraries regardless of origin language. Completed specifications mean rock-solid implementations. And we’ll see an emerging class of web applications that can run either on the client side or the server side.
In my career, I’ve had the advantage of being in the right place at the right time on several occasions. I experienced the emergence of web applications, the development of content management systems, the beginning of public cloud and the rise of the container ecosystem. None of them made me as excited as I am about Wasm. The combination of technologies that are emerging alongside it means we will build better, more secure and more portable applications, and (thanks to the component model) with the benefit of truly shared functionality. Wasm has the potential to transform both application development and platform operations.
And we’ve got front-row seats to watch this happen in 2024.