DAVE'S LIFE ON HOLD

Message, message everywhere, and all the code did shrink

One of the limiting factors of code reuse is the implicit coupling that comes from running in a context that shares state. Take, for example, closures, which capture state from an enclosing context: transferring a closure as a value requires transferring the entire continuation of the program at that point if the enclosed value references any shared state.
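A minimal JavaScript sketch of the problem (the names here are mine, for illustration only): the closure below captures a live binding to shared state in its enclosing scope, so serializing just its source code and shipping it elsewhere would lose that binding.

```javascript
// A closure that captures shared, mutable state from its enclosing context.
let counter = 0; // shared state in the enclosing scope

function makeIncrementer() {
  // The returned closure references `counter` itself, not a copy of it.
  return function () {
    counter += 1;
    return counter;
  };
}

const inc = makeIncrementer();
inc(); // counter is now 1
inc(); // counter is now 2

// Transferring `inc` as a value would require transferring the binding to
// `counter` too: a receiver that re-evaluates the source text gets a fresh,
// unrelated `counter`, not this one.
console.log(inc());    // 3 — uses the live binding
console.log(counter);  // 3 — the shared state moved with it
```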

Consider the simple case where a pointer to a shared library function is used to perform some operation. This shared library is probably part of a reusable module, which is linked at run time based on configuration. If you've ever used Perl, Ruby, Python, Lua, Erlang, Java, Smalltalk, etc., and called out to a database, used a crypto function, compressed data, or just accessed a high resolution timer, you've used one of these. Because these modules require use of the C library's symbol table and the dynamic linker, there is no good way to decouple these behaviors from the shared memory space.

Some programming languages, like Self, handle this problem by wrapping this shared state in proxy objects which serve as interfaces to the native code. Live proxies cannot be transported off world, and die if their associated system link is severed. Any object transported with a proxy reference is transferred with a dead proxy attached, which will resurrect only if the same system link is reestablished. This decoupling of interface and system only works because the proxy is smart enough to manage the low level connection.
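A hedged sketch of how such a proxy might behave — this is my caricature, not Self's actual implementation, and the `SystemLink` and `Proxy_` names are hypothetical:

```javascript
// A live proxy forwards messages only while its system link is up; a severed
// link leaves it "dead", and re-establishing the same link resurrects it.
class SystemLink {
  constructor(name) { this.name = name; this.up = true; }
  sever() { this.up = false; }
  reestablish() { this.up = true; }
}

class Proxy_ { // underscore avoids clashing with JavaScript's built-in Proxy
  constructor(link, target) { this.link = link; this.target = target; }
  get alive() { return this.link.up; }
  send(message, ...args) {
    if (!this.alive) throw new Error(`dead proxy: link ${this.link.name} is down`);
    return this.target[message](...args);
  }
}

const link = new SystemLink("crypto");
const proxy = new Proxy_(link, { digest: (s) => s.length }); // stand-in for native code

proxy.send("digest", "hello"); // 5 — live proxy forwards the message
link.sever();
// proxy.send("digest", "x")   // would now throw: the proxy is dead
link.reestablish();
proxy.send("digest", "hello"); // 5 — same link restored, proxy resurrected
```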

This breakdown in reuse and portability is a product of tight coupling at the systems layer between the OS, the user space libraries, and the program itself. VMs like the JVM, LLVM, and the Smalltalk VM do not fundamentally protect us from this coupling. Anyone who has run a sufficiently complex system will be familiar with this phenomenon of DLL hell (jar hell, whatever). Rather than fix this fundamental coupling, we see solutions like .NET assemblies, which add additional metadata to help avoid conflicts but do nothing to sever the bonds.

For all that it is hated, the X11 protocol actually gets a lot of this right. When I move my mouse on the X server, all of the X clients can receive a mouse motion message. When a client wants to redraw part of the screen, it sends messages to update the drawing context, transfer data to the server, and update the screen. No operation is dependent upon shared state on a single physical device.
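The pattern can be caricatured in a few lines of JavaScript. The message names below mimic real X11 requests, but the "wire" here is just an array standing in for the socket:

```javascript
// Caricature of the X model: the client never touches server state directly;
// it only queues protocol messages, which the server interprets in order.
const wire = []; // stands in for the client-server socket

function send(msg) { wire.push(msg); }

// Client-side drawing is pure message sending — no shared drawing state.
send({ op: "CreateGC", gc: 1, foreground: "black" });          // drawing context
send({ op: "PutImage", drawable: 2, data: [0, 1, 1, 0] });     // transfer data
send({ op: "PolyLine", drawable: 2, points: [[0, 0], [9, 9]] }); // update screen

// Server side: drain the wire and dispatch each message in arrival order.
const handled = wire.map((m) => m.op);
console.log(handled); // ["CreateGC", "PutImage", "PolyLine"]
```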

Recently, I've spent a lot of time thinking about how wrong object oriented programming has gone in practice. We have abstract conceptual models with no clean mapping to the underlying reality, and if there is one thing I've learned through trial and error, it is that "reality always wins". If you think of the VM as an abdication of responsibility on the part of the language designers, you'll get my current train of thought.

So what would I do to replace the VM? How about a collection of system objects that map directly to hardware, with well defined interfaces, that encapsulate the hardware itself. With hardware we cannot inspect the internal state, so modeling in this fashion holds. Raising a pin to access a control register is analogous to sending a message and getting a response.

So we can have a Memory object (mentioned in a previous post) which sends and receives messages that model memory access. We can have a Processor object which maps to the CPU, or more likely to a single core on the die. We would also need Storage, Network, Screen, Graphics (GPU), Mouse, Keyboard, Touchpad, USB, etc. Most of these already exist in my JavaScript "VM" that forms the basis of the Phos environment. Each of these "VM" objects can either delegate to or actually be an object tailored to the specific hardware. This means the interface to our CPU object will have to map to the actual CPU's instruction set. The goal of the Compiler object is then to send messages to the Memory object which the CPU object can then interpret in accordance with its instruction set. The CPU object itself basically needs no more interface than a goto: anAddress method. We could add more advanced interfaces that allow us to bind: an Object to a CPU, reset it, and even upgrade: its Microcode; but these really aren't necessary for a functional system.
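A speculative JavaScript sketch of the Memory and CPU objects described above. The object names come from the post; the tiny instruction set and method names are my invention, not Phos's:

```javascript
// Memory answers read/write messages; CPU's whole interface is goto(addr):
// jump to an address and interpret what it finds there.
class Memory {
  constructor(size) { this.cells = new Array(size).fill(0); }
  at(addr) { return this.cells[addr]; }
  atPut(addr, value) { this.cells[addr] = value; }
}

class CPU {
  constructor(memory) { this.memory = memory; this.acc = 0; }
  // Toy instruction set, [opcode, operand] pairs: 0 HALT, 1 LOAD immediate,
  // 2 ADD from address. Real hardware would dictate the real encoding.
  goto(addr) {
    for (let pc = addr; ; pc += 2) {
      const op = this.memory.at(pc), arg = this.memory.at(pc + 1);
      if (op === 0) return this.acc;                  // HALT
      else if (op === 1) this.acc = arg;              // LOAD
      else if (op === 2) this.acc += this.memory.at(arg); // ADD
    }
  }
}

// A "Compiler" in this scheme just writes messages into Memory for the CPU:
const mem = new Memory(32);
[1, 40, 2, 10, 0, 0].forEach((v, i) => mem.atPut(i, v)); // LOAD 40; ADD mem[10]; HALT
mem.atPut(10, 2);
const cpu = new CPU(mem);
console.log(cpu.goto(0)); // 42
```

The point of the sketch is that nothing ever reaches into the CPU's internals; the program exists entirely as messages sent to Memory, which the CPU then interprets.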

When you cut out the crap and map your problems directly onto hardware objects, you gain a clear view of how to arrange your programs. If you have an object that models the Screen, you can change resolutions, update the image displayed, and receive messages for synchronizing refresh events. That's about it. Want to draw a line? Well, you could ask a Line object to tell the screen how to draw a line, or you could send a sequence of messages to tell the screen directly (aka implement Line draw yourself). The Screen may be separated from a Display object to provide a virtualized rendering context that spans multiple Screens, or to transparently proxy for remote Screens and Displays. You could either implement a protocol like X11 or have a Display object that knows how to relay messages to remote displays via the Network object. Once again you have basically implemented X, but at a lower level of abstraction. If messaging in general becomes network transparent, as everything is a message send, then even remote Memory and CPUs would be accessible to any object. And it is then that we see that, in the tangled mass of wires and radio waves, there is only one machine.
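To make the Screen/Display/Network arrangement concrete, here is a hedged sketch in JavaScript — all interfaces are guesses, not Phos's actual objects. A Display forwards the same pixel messages to a local Screen and, via a stand-in Network, to remote Screens, which is the "network transparent messaging" move in miniature:

```javascript
// A Screen understands only a handful of messages; it just receives them.
class Screen {
  constructor() { this.log = []; }
  receive(msg) { this.log.push(msg); } // e.g. { op: "setPixel", x, y }
}

// A stand-in Network that relays a message to every attached remote Screen.
class Network {
  constructor() { this.peers = []; }
  attach(screen) { this.peers.push(screen); }
  broadcast(msg) { this.peers.forEach((s) => s.receive(msg)); }
}

// The Display virtualizes: one send draws locally and on every remote Screen.
class Display {
  constructor(local, network) { this.local = local; this.network = network; }
  send(msg) {
    this.local.receive(msg);
    this.network.broadcast(msg);
  }
}

// "Implement Line draw yourself": a line is just a sequence of pixel messages.
function drawLine(display, x0, y0, x1, y1) {
  const steps = Math.max(Math.abs(x1 - x0), Math.abs(y1 - y0), 1);
  for (let i = 0; i <= steps; i++) {
    const x = Math.round(x0 + ((x1 - x0) * i) / steps);
    const y = Math.round(y0 + ((y1 - y0) * i) / steps);
    display.send({ op: "setPixel", x, y });
  }
}

const local = new Screen(), remote = new Screen(), net = new Network();
net.attach(remote);
const display = new Display(local, net);
drawLine(display, 0, 0, 3, 3);
console.log(local.log.length, remote.log.length); // 4 4
```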