Volume 23, Issue 3

Special Issue on WebAssembly




WebAssembly: Yes, but for What?

  Andy Wingo

The keys to a successful Wasm deployment

WebAssembly (Wasm) has found a niche but not yet filled its habitable space. What is it that makes for a successful deployment? WebAssembly turns 10 this year, but in the words of William Gibson, we are now as ever in the unevenly distributed future. Here, we look at early Wasm wins and losses, identify winning patterns, and extract commonalities between these patterns. From those, we predict the future, suggesting new areas where Wasm will find purchase in the next two to three years.

Web Development




WebAssembly: How Low Can a Bytecode Go?

  Ben Titzer

New performance and capabilities

Wasm is still growing with new features to address performance gaps as well as recurring pain points for both languages and embedders. Wasm has a wide set of use cases outside of the web, with applications from cloud/edge computing to embedded and cyber-physical systems, databases, application plug-in systems, and more. With a completely open and rigorous specification, it has unlocked a plethora of exciting new systems that use Wasm to bring programmability large and small. With many languages and many targets, Wasm could one day become the universal execution format for compiled applications.

Web Development




When Is WebAssembly Going to Get DOM Support?

  Daniel Ehrenberg

Or, how I learned to stop worrying and love glue code

What should be relevant for working software developers is not, "Can I write pure Wasm and have direct access to the DOM while avoiding touching any JavaScript ever?" Instead, the question should be, "Can I build my C#/Go/Python library/app into my website so it runs with good performance?" Nobody is going to want to write that bytecode directly, even if some utilities are added to make it easier to access the DOM. WebAssembly should ideally be an implementation detail that developers don't have to think about. While this isn't quite the case today, the thesis of Wasm is, and must be, that it's okay to have a build step.
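
What that build step emits is typically a thin layer of JavaScript glue. Here is a minimal sketch of the shape such glue can take, assuming a hypothetical app.wasm module that exports a render function and imports a set_text callback for touching the DOM (all names are illustrative, not from the article):

  // Hypothetical glue: the Wasm module never touches the DOM itself;
  // it calls back into JavaScript, which does.
  let memory: WebAssembly.Memory;
  const imports = {
    env: {
      set_text: (ptr: number, len: number) => {
        const bytes = new Uint8Array(memory.buffer, ptr, len);
        const el = document.getElementById("output");
        if (el) el.textContent = new TextDecoder().decode(bytes);
      },
    },
  };
  const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), imports);
  memory = instance.exports.memory as WebAssembly.Memory;
  (instance.exports.render as () => void)();   // the compiled library runs; the glue handles the DOM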

Web Development




Concurrency in WebAssembly

  Conrad Watt

Experiments in the web and beyond

Mismatches between the interfaces promised to programmers by source languages and the capabilities of the underlying web platform are a constant trap in compiling to Wasm. Even simple examples, such as a C program using the language's native file-system API, present difficulties. Often such gaps can be papered over by the compilation toolchain somewhat automatically, without the developer needing to know all of the details so long as their code runs correctly end to end. This state of affairs is strained to its limits when compiling programs for the web that use multicore concurrency features. This article describes how concurrent programs are compiled to Wasm today, given the unique limitations the web operates under with respect to multicore concurrency, and highlights current standards discussions around further expanding Wasm's concurrency capabilities.
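
As a rough illustration of the shape this takes on the web today, here is a minimal sketch of shared memory plus one Web Worker per thread; the worker script name wasm-thread.js is an assumption:

  // Shared WebAssembly.Memory (backed by a SharedArrayBuffer) plus one Worker
  // per "thread"; the worker would instantiate the same module against this memory.
  // Note: shared memory requires the page to be cross-origin isolated (COOP/COEP).
  const memory = new WebAssembly.Memory({ initial: 64, maximum: 1024, shared: true });
  const worker = new Worker("wasm-thread.js");
  worker.postMessage({ memory });   // the worker uses the same backing buffer

  // Coordination happens through atomics on the shared buffer; blocking
  // Atomics.wait is forbidden on the main thread, one of the mismatches
  // toolchains must paper over.
  const flag = new Int32Array(memory.buffer, 0, 1);
  Atomics.store(flag, 0, 1);
  Atomics.notify(flag, 0);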

Web Development





Unleashing the Power of End-User Programmable AI

  Erik Meijer

Creating an AI-first program-synthesis framework

As a demonstration of what can be accomplished with contemporary LLMs, this paper outlines the high-level design of an AI-first, program-synthesis framework built around a new programming language, Universalis, designed for knowledge workers to read, optimized for our neural computer to execute, and ready to be analyzed and manipulated by an accompanying set of tools. We call the language Universalis in honor of Gottfried Wilhelm Leibniz. Leibniz's centuries-old program of a universal science for coordinating all human knowledge into a systematic whole comprises two parts: (1) a universal notation by use of which any item of information whatsoever can be recorded naturally and systematically, and (2) a means of manipulating the knowledge thus recorded in a computational fashion, to reveal its logical interrelations and consequences. Exactly what current-day LLMs provide!

AI




Bridging the Moat:
Security Is Part of Every Critical User Journey


  Phil Vachon

How else would you make sure that product security decisions serve your customers?

The next time you're working on a new product or feature, or yawning your way through a product-development meeting, raise your hand and propose that security outcomes and risks be defined at each step along critical user journeys. Whether you're building an integration between enterprise systems, a user-facing application, or a platform meant to save your customers complexity and money, putting security at the forefront of the product team's challenge will be transformative.

Bridging the Moat, Security




Kode Vicious
In Search of Quietude


Learning to say no to interruption

KV is old enough to remember a time before ubiquitous cell phones, a world in which email was the predominant form of intra- and interoffice communication, and it was perfectly normal not to read your email for hours in order to concentrate on a task. Of course, back then we also worked in offices where co-workers would readily walk in unannounced to interrupt us. That, too, was annoying but could easily be deterred through the clever use of headphones.

Business/Management, Development, Kode Vicious


 


Volume 23, Issue 2




AI: It's All About Inference Now

  Michael Gschwind

Model inference has become the critical driver for model performance.

As the scaling of pretraining reaches a plateau of diminishing returns, model inference is quickly becoming an important driver of model performance. Today, test-time compute scaling offers a new, exciting avenue to increase model performance beyond what can be achieved with training, and test-time compute techniques remain a fertile area for many more breakthroughs in AI. Innovations using ensemble methods, iterative refinement, repeated sampling, retrieval augmentation, chain-of-thought reasoning, search, and agentic ensembles are already yielding improvements in model quality and offer additional opportunities for future growth.
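
As one concrete flavor of test-time compute scaling, here is a minimal sketch of repeated sampling with majority voting (often called self-consistency); generate() stands in for any model API and is an assumption for illustration:

  // Repeated sampling: spend more inference-time compute by drawing several
  // answers, then keep the most frequent one.
  async function generate(prompt: string, temperature: number): Promise<string> {
    return "42";   // placeholder for a real model call
  }

  async function selfConsistent(prompt: string, samples = 8): Promise<string> {
    const answers = await Promise.all(
      Array.from({ length: samples }, () => generate(prompt, 0.8)),
    );
    const counts = new Map<string, number>();
    for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);
    // Majority vote over the sampled answers.
    return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
  }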

AI




Develop, Deploy, Operate

  Titus Winters, Leah Rivers, and Salim Virji

A holistic model for understanding the costs and value of software development

By taking a holistic view of the commercial software-development process, we have identified tensions between various factors and where changes in one phase, or to infrastructure, affect other phases. We have distinguished four distinct forms of impact, warned against measuring against unknown counterfactuals, and suggested a consensus mechanism for estimating DDR (defect detection and resolution) costs. Our approach balances product outcomes and the strategic need for change with both the human and machine costs of producing valuable software. With this model, the process of commercial software development could become more comprehensible across roles and levels and therefore more easily improved within an organization.

Business/Management, Development




Generative AI at the Edge: Challenges and Opportunities

  Vijay Janapa Reddi

The next phase in AI deployment

Generative AI at the edge is the next phase in AI's deployment. By tackling the technical hurdles and establishing new frameworks, we can ensure this transition is successful and beneficial. The coming years will likely see embodied, federated, and cooperative small models become commonplace, quietly working to enhance our lives in the background, much as embedded microcontrollers did in the previous tech generation. The difference is, these models won't just compute; they will communicate, create, and adapt.

AI




Research for Practice
The Point is Addressing


  Daniel Bittman, with an introduction by Peter Alvaro

A brief tour of efforts to reimagine programming in a world of changing memories

Even something as innocent as addressing comes from a rich design space filled with tradeoffs between important considerations such as scaling, transparency, overhead, and programmer control. These tradeoffs are just a few examples of the many challenges facing programmers today, especially as we drive our applications to larger scales. The way we refer to and address data matters, with reasons ranging from speed to complexity to consistency, and can have unexpected effects down the line if we do not carefully consider how we talk about and refer to data at large.

Memory, Research for Practice




Drill Bits
Sandboxing: Foolproof Boundaries vs. Unbounded Foolishness


  Terence Kelly with Special Guest Borer Edison Fuh

Sandboxing mitigates the risks of software so large and complex that it's likely to harbor security vulnerabilities. To safely harness useful yet ominously opaque libraries, a simple mechanism provides ironclad confinement—or does it?

Code, Development, Drill Bits, Security




Kode Vicious
Can't We Have Nice Things?


Careful crafting and the longevity of code

We build apparatus in order to show some effect we're trying to discover or measure. A good example is Faraday's motor experiment, which showed the interaction between electricity and magnetism. The apparatus has several components, but the main feature is that it makes visible an invisible force: electromagnetism. Faraday clearly had a hypothesis about the interaction between electricity and magnetism, and all science starts from a hypothesis. The next step was to show, through experiment, an effect that proved or disproved the hypothesis. This is how empiricists operate. They have a hunch, build an apparatus, run an experiment, refine the hunch, and then wash, rinse, and repeat.

Code, Development, Kode Vicious




The Soft Side of Software
Peer Mentoring


  Kate Matsudaira

My favorite growth hack for engineers and leaders

Stop waiting for a senior mentor to appear. Your peers are some of the most valuable mentors you'll ever find. Start leveraging those relationships, sharing insights, and bringing value to every conversation. Your career will thank you for it.

Business/Management, The Soft Side of Software


 


Volume 23, Issue 1




From Function Frustrations to Framework Flexibility

  Erik Meijer

Fixing tool calls with indirection

The principle of indirection can be applied to introduce a paradigm shift: replacing direct value manipulation with symbolic reasoning using named variables. This simple yet powerful trick directly resolves inconsistencies in tool usage and enables parameterization and abstraction of interactions. The transformation of function calls into reusable and interpretable frameworks elevates tool calling into a neuro-symbolic reasoning framework. This approach unlocks new possibilities for structured interaction and dynamic AI systems.
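
Here is a minimal sketch of that indirection, under an assumed tool-call shape (the schema, the getWeather tool, and the variable names are all hypothetical, not from the article): arguments name variables in a shared environment rather than carrying raw values, so calls can be parameterized, reused, and chained.

  // Indirection in tool calls: arguments refer to named variables instead of
  // inlining values; results are bound back into the environment for later calls.
  type ToolCall = { tool: string; args: Record<string, string> };   // values are variable *names*
  type Tool = (args: Record<string, unknown>) => unknown;

  const env = new Map<string, unknown>([["city", "Lisbon"]]);
  const tools: Record<string, Tool> = {
    getWeather: (args) => `sunny in ${args.location}`,
  };

  function invoke(call: ToolCall, bindResultTo: string): unknown {
    const resolved = Object.fromEntries(
      Object.entries(call.args).map(([param, name]) => [param, env.get(name)]),  // dereference names
    );
    const result = tools[call.tool](resolved);
    env.set(bindResultTo, result);   // the result gets a symbolic name, reusable by later calls
    return result;
  }

  invoke({ tool: "getWeather", args: { location: "city" } }, "weather");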

AI




Operations and Life
A Clean Approach to Process Optimization


  Thomas A. Limoncelli

What I learned from my dishwasher about automating processes

My soap-loading technique isn't revolutionary, but it does demonstrate a point about process design: You can eliminate delays in starting a process by front-loading tasks whenever possible. Front-loading changes when you do tasks but not their order. The process still involves a loop: load dishes, add soap, press the start button, empty dishes, repeat. You've only changed your mental model of where the loop starts.

Development, Management, Operations and Life




The Surprise of Multiple Dependency Graphs

  Josie Anugerah, Eve Martin-Jones

Dependency resolution is not deterministic.

It seems like it should be easy to avoid installing vulnerable open source software, but dependency graphs are surprisingly complex. At the time of writing, the latest version of the popular npm tool webpack has millions of potential dependency graphs depending on circumstances during its resolution. The exact graph chosen for a given package can depend on what other software is being built, what kind of system is building it, and even the state of the ecosystem on a given day. As a result, the developer and user of a package may end up with very different dependency graphs, which can lead to unexpected vulnerabilities.
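
Here is a toy illustration of why the graph can shift, with an invented package and versions: a semver range is resolved against whatever the registry holds at resolve time, so the same manifest can yield different results on different days.

  // Toy resolver for a caret range ("^1.1.0" = newest version with the same major).
  // The versions below are invented for illustration.
  function resolveCaret(available: string[], base: string): string {
    const major = base.split(".")[0];
    const candidates = available
      .filter((v) => v.split(".")[0] === major)
      .sort((a, b) => a.localeCompare(b, undefined, { numeric: true }));
    return candidates[candidates.length - 1];
  }

  const registryLastWeek = ["1.1.0", "1.2.0"];
  const registryToday    = ["1.1.0", "1.2.0", "1.3.0"];

  console.log(resolveCaret(registryLastWeek, "1.1.0"));  // "1.2.0"
  console.log(resolveCaret(registryToday, "1.1.0"));     // "1.3.0" — same manifest, different graph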

Open source




Fifty Years of Open Source Software Supply Chain Security

  Russ Cox

For decades, software reuse was only a lofty goal. Now it's very real.

The xz attack seems to be the first major attack on the open source software supply chain. The event-stream attack was similar but not major, and Heartbleed and Log4j were vulnerabilities, not attacks. But the xz attack was discovered essentially by accident because it made sshd just a bit too slow at startup. Attacks, by their nature, try to remain hidden. What are the chances we would accidentally discover the very first major attack on the open source software supply chain in just a few weeks? Perhaps we were extremely lucky, or perhaps we have missed others.

Open source




String Matching at Scale

  Dennis Roellke

A call for interdisciplinary collaboration and better-directed resources

String matching can't be that difficult. But what are we matching on? What is the intrinsic identity of a software component? Does it change when developers copy and paste the source code instead of fetching it from a package manager? Is every package-manager request fetching the same artifact from the same upstream repository mirror? Can we trust that the source code published along with the artifact is indeed what's built into the release executable? Is the tool chain kosher?

Development




How to Evaluate AI that's Smarter than Us

  Chip Huyen

Exploring three strategies: functional correctness, AI-as-a-judge, and comparative evaluation

Evaluating AI models that surpass human expertise in the task at hand presents unique challenges, and these challenges only grow as AI becomes more intelligent. The three strategies presented in this article help address these hurdles: functional correctness, evaluating AI by how well it accomplishes its intended tasks; AI-as-a-judge, using AI instead of human experts to evaluate AI outputs; and comparative evaluation, evaluating AI systems in relation to each other rather than independently.
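
To make the second strategy concrete, here is a minimal sketch of AI-as-a-judge; judgeModel() is a placeholder for any model API, and the rubric wording is an assumption for illustration:

  // AI-as-a-judge: one model grades another model's answer against a rubric.
  async function judgeModel(prompt: string): Promise<string> {
    return "4";   // placeholder for a real model call
  }

  async function judgeAnswer(question: string, answer: string): Promise<number> {
    const rubric =
      "Rate the answer from 1 (wrong) to 5 (excellent). Reply with the number only.\n" +
      `Question: ${question}\nAnswer: ${answer}`;
    const score = Number(await judgeModel(rubric));
    return Number.isFinite(score) ? score : 0;   // guard against non-numeric replies
  }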

AI




Kode Vicious
Analyzing Krazy Kode


Accounting for the emotional state of the person who wrote that code

There actually are about six or seven emotions, or so I'm told. But the one state you should really try to avoid is confusion, which isn't actually an emotion but instead a state of mind. Code created by a confused mind shows itself in the randomness of naming, which is not handled by modern, fascist programming languages like Go. Sure, you may have your names in the proper case and your spaces in the proper place, but you can still name a function PublicThingTwo() if you want to, and this is a sure sign of trouble.

Business/Management, Kode Vicious


 


Volume 22, Issue 6




The Soft Side of Software
My Career-limiting Communication


  Kate Matsudaira

Be thoughtful about your content. You've got a lot riding on it.

Whether in email, documents, or slides, use punchy visuals to make content easier to digest with your most important points clearly highlighted. Make sure that data, charts, and photos are unambiguously labeled, with any caveats noted. In general, steer away from pie charts, averages, and percentages. That's because, as popular as these devices might be, they often manage to tell only part of the story and miss opportunities to highlight the relative size of datasets, outliers, or trends over time.

Business/Management, The Soft Side of Software




Systems Correctness Practices at AWS

  Marc Brooker, Ankush Desai

Leveraging Formal and Semi-formal Methods

Building reliable and secure software requires a range of approaches to reason about systems correctness. Alongside industry-standard testing methods (such as unit and integration testing), AWS has adopted model checking, fuzzing, property-based testing, fault-injection testing, deterministic simulation, event-based simulation, and runtime validation of execution traces. Formal methods have been an important part of the development process—perhaps most importantly, formal specifications as test oracles that provide the correct answers for many of AWS's testing practices.
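
As one concrete flavor of the techniques listed above, here is a hand-rolled sketch of property-based testing; production suites typically use a dedicated library, and the reverse-twice invariant is purely illustrative:

  // Property-based testing: generate many random inputs and check an invariant
  // that must hold for all of them, rather than a handful of hand-picked cases.
  function randomArray(): number[] {
    const n = Math.floor(Math.random() * 20);
    return Array.from({ length: n }, () => Math.floor(Math.random() * 1000));
  }

  for (let i = 0; i < 1000; i++) {
    const xs = randomArray();
    const roundTrip = [...xs].reverse().reverse();
    // Property: reversing twice is the identity.
    if (JSON.stringify(roundTrip) !== JSON.stringify(xs)) {
      throw new Error(`property violated for ${JSON.stringify(xs)}`);
    }
  }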

Concurrency




Intermediate Representations for the Datacenter Computer

  Achilles Benetopoulos

Lowering the Burden of Robust and Performant Distributed Systems

In-memory application data size is outstripping the capacity of individual machines, necessitating its partitioning over clusters of them; online services have high availability requirements, which can be met only by deploying systems as collections of multiple redundant components; high durability requirements can be satisfied only through data replication, sometimes across vast geographical distances.

Data, Distributed Computing




Simulation: An Underutilized Tool in Distributed Systems

  David R. Morrison

Not easy but not impossible, and worth it for the insights it can provide

Simulation has a huge role to play in the advent of AI systems: We need an efficient, fast, and cost-effective way to train AI agents to operate in our infrastructure, and simulation absolutely provides that capability.

AI, Distributed Computing




Operations and Life
Give Engineers Problems, Not Solutions


  Thomas A. Limoncelli

A simple strategy to improve solutions and boost morale

This technique is about providing the "why" instead of the "how." Instead of dictating specific solutions, present the problem and desired outcome, and let your team figure out how to solve it. This fosters creativity, shared ownership, and collaborative problem-solving. It also empowers the team to strive for the best solution.

Management, Operations and Life




Kode Vicious
The Drunken Plagiarists


Working with Co-pilots

The trick of an LLM is to use a little randomness and a lot of text to guess the next word in a sentence. Seems kind of trivial, really, and certainly not a measure of intelligence that anyone who understands the term might use. But it's a clever trick and does have some applications.
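
Here is a drastically simplified sketch of that trick, using an invented bigram table; a real LLM is vastly more elaborate, but the weighted-sampling step has this shape:

  // Pick the next word at random, weighted by how often it followed the current
  // word in some body of text. The counts below are invented for illustration.
  const nextWordCounts: Record<string, Record<string, number>> = {
    the: { cat: 3, dog: 5, code: 2 },
    cat: { sat: 4, ran: 1 },
  };

  function sampleNext(word: string): string {
    const counts = nextWordCounts[word] ?? {};
    const total = Object.values(counts).reduce((a, b) => a + b, 0);
    let r = Math.random() * total;                   // a little randomness...
    for (const [w, c] of Object.entries(counts)) {   // ...over a lot of text
      if ((r -= c) <= 0) return w;
    }
    return ".";   // fall through when the word is unknown
  }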

AI, Kode Vicious




Drill Bits
Retrofitting: Principles and Practice


  Terence Kelly with Special Guest Borer Ziheng (Aaron) Su

Retrofitting radically new functionality onto production software tests every skill of the programmer's craft. A practical case study illuminates principles for bolting new tricks onto old dogs.

Code, Development, Drill Bits




The Price of Intelligence

  Mark Russinovich, Ahmed Salem, Santiago Zanella-Béguelin, Yonatan Zunger

Three risks inherent in LLMs

The vulnerability of LLMs to hallucination, prompt injection, and jailbreaks poses a significant but surmountable challenge to their widespread adoption and responsible use. We have argued that these problems are inherent, certainly in the present generation of models and likely in LLMs per se, and so our approach can never be based on eliminating them; rather, we should apply strategies of "defense in depth" to mitigate them, and when building and using these systems, do so on the assumption that they will sometimes fail in these directions.

AI


 



 




Older Issues