On Message Passing Architectures

Most software is traditionally written at the scale of a cottage industry: small teams of artisan programmers building custom logic for a specific use case. When projects are attempted on an industrial scale, encyclopedias of software like Windows, Linux, and Oracle's offerings, all the weight of those programmer-years produces monstrous behemoths of modular design. When we think of best practice for this sort of programming, local state (even thread-local state) becomes increasingly important, since no programmer knows enough about the behavior of the system to account for its side effects. Shared state is dangerous when the tools in use make unintentional sharing easy.

Most of the lessons supposedly learned engineering software at this scale are actually anachronistic. It is much like the proverbial onion in the varnish, an ingredient which no longer serves its intended purpose, because the state of the art has already advanced far beyond shared-memory architectures running on machines no more powerful than my cellphone (a rough approximation of how powerful the most powerful servers were when these ground rules were laid down).

If you look at most web applications today, you will see developers following down the big-software path. Programs are written on top of monolithic frameworks, extended via plugins that operate within the shared memory space of the webserver itself. A developer's program consists of a layer of glue binding the static and dynamic elements together via APIs, DBs, and DSLs. Some have begun to embrace shifting the burden to the client by transferring control to JavaScript modules in the browser, but here too they build upon frameworks like jQuery and operate within a single volatile namespace.

There is a model for programming we can spot in Erlang, the glimmer of an idea which is gaining traction among the advance guard: tiny peers, connected by networks, passing messages to each other. It is basically the idea of taking the Internet's architecture and building software in its image. If every component of your GUI, every resource in your data store, every bundle of business logic is a well-named actor to whom you can send messages, you not only gain the ability to distribute your applications, but also break the bounds of serialization enforced for years by the programming model of the CPU. In effect, application-oriented superscalar design does more to unlock instruction parallelism than any VLSI compiler ever could.
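A minimal sketch of this peer-and-message style, using Python threads and queues in place of Erlang processes; the peer names, the handler convention, and the routing table here are illustrative assumptions, not anything prescribed by the model itself:

```python
import queue
import threading

class Peer:
    """A tiny actor: a named mailbox drained by its own thread."""
    def __init__(self, name, handler, router):
        self.name = name
        self.mailbox = queue.Queue()
        self.handler = handler   # returns (destination, message) pairs
        self.router = router     # hypothetical name -> Peer table
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:      # shutdown sentinel
                break
            # Deliver the handler's output messages to other peers.
            for dest, out in self.handler(msg):
                self.router[dest].mailbox.put(out)

router = {}
results = queue.Queue()

# "business logic" peer: doubles whatever number it receives
router["doubler"] = Peer("doubler", lambda m: [("sink", m * 2)], router)
# "GUI" peer: hands results to the outside world
router["sink"] = Peer("sink", lambda m: results.put(m) or [], router)

router["doubler"].mailbox.put(21)
print(results.get(timeout=1))  # prints 42
```

Nothing in the peers knows who sent a message or where its own output ultimately lands; the router holds the topology, which is what makes distributing the peers across machines a change of router rather than a change of program.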

In my current work, message flow diagrams are playing a crucial role. You can understand the entire behavior of the application by tracing the messages passed. System debugging requires tools that for now are home-brewed, and that I hope in the near future to offer as a platform. Much like sitting on an interface with tcpdump, most debugging is done by watching when a message reaches a peer. The output of a peer is itself a spate of messages, which must in turn be routed to the applicable peers. While this might seem complicated, it is a slight spin on RPC, wherein the endpoints are not known to the party requesting that the remote procedure be invoked.
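The tcpdump-style tap described above can be approximated by putting the trace in the router, so every hop is recorded at the moment of delivery; this is a sketch under assumed names (the peers, message shapes, and trace format are all invented for illustration):

```python
import queue

class TracingRouter:
    """Routes messages between named mailboxes, logging each hop,
    much like watching an interface with tcpdump."""
    def __init__(self):
        self.mailboxes = {}
        self.trace = []          # (source, destination, message) tuples

    def register(self, name):
        self.mailboxes[name] = queue.Queue()

    def send(self, src, dest, msg):
        self.trace.append((src, dest, msg))   # the tap
        self.mailboxes[dest].put(msg)         # the delivery

r = TracingRouter()
for name in ("validator", "store"):
    r.register(name)

r.send("gui", "validator", {"user": "ada"})
r.send("validator", "store", {"user": "ada", "ok": True})

for src, dest, msg in r.trace:
    print(f"{src} -> {dest}: {msg}")
```

Because every message crosses the router, the trace is a complete record of the application's behavior, and the message flow diagram falls out of it directly.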