DAVE'S LIFE ON HOLD

The Layer Problem

Over the past few months, I've had the good fortune to be funded to work on a set of pet projects that, taken together, provide a body of work that will dramatically alter the way I build things in the future. One of the problems I've been working to overcome is "The Layer Problem", in which layers of abstraction designed with a specific consumer in mind cease to be relevant once you begin operating across all layers. When you are designing the hardware, the operating system interfaces, the application code, and the user interface, you will encounter a moment of horror when you realize the problem is the concept of layering itself.

For example, let's say I'm building a device with a memory mapped bitmap display. Since I'm laying out the HDMI interface on the board, generating the 720p signal, and writing the software that is going to feed that display from a memory buffer at a fixed frequency, I don't really want to have to write a DRM compliant video driver or add a Linux kernel module. I just want to write RGB data to a fixed address, and I need Linux or BSD to stay out of my way (via mprotect or mmap). And since I'm storing data on an SDcard with an SPI interface module, I start to question the need for a file system when I can design my memory controller to buffer the blocks and only flush when the buffer needs flushing. I can just memory map the SD card and call it a day. The only reason I need an address, after all, is to identify where on the bus the data should be routed.
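
To make that concrete, here's a minimal sketch in C of what "just write RGB data to a fixed address" can look like. The physical address is hypothetical, a stand-in for wherever your own board scans out from, and mapping it through /dev/mem needs root:

    /* Minimal sketch: treat a memory mapped 720p display as a plain
     * pixel buffer. FB_PHYS is hypothetical; on a real board it
     * comes from your own hardware design.                          */
    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define FB_PHYS 0x3f000000UL   /* hypothetical scanout address */
    #define FB_W    1280
    #define FB_H    720

    int main(void) {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) return 1;

        /* Map the raw framebuffer; the kernel stays out of the way. */
        uint32_t *fb = mmap(NULL, FB_W * FB_H * 4,
                            PROT_READ | PROT_WRITE, MAP_SHARED, fd, FB_PHYS);
        if (fb == MAP_FAILED) return 1;

        /* Write RGB data to a fixed address: fill the screen red. */
        for (int i = 0; i < FB_W * FB_H; i++)
            fb[i] = 0x00ff0000;    /* xRGB */

        munmap(fb, FB_W * FB_H * 4);
        close(fd);
        return 0;
    }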

These are the thoughts you start thinking when you don't need those layers of abstraction. I know that no one else's software is going to run on my device, and I don't need to run legacy applications either. This device will sit in a room, performing specific functions, and will get a firmware upgrade periodically by updating the SDcard. Don't like the upgrade? Swap the SDcard out for the old one. You want to port an application to this platform? Well, that's nice, do it my way. Sure you can reprogram the FPGA, knock yourself out! It is all open, it is all documented, but I don't need to support all those layers of legacy. Layers of compatibility with the past have a cost: a cost to develop, to test, to maintain, to support. Why incur that cost if we can get from A to E and just skip B, C, and D altogether?

This thinking has had an unexpected side effect: on systems that have already piled the layers deep, too deep to easily remove, I can leave the layers alone and just emulate the simpler system on top of them. Unsurprisingly, many of those layered systems use similarly simple interfaces at their lowest level. If I have a memory mapped hardware device, I can also create a memory mapped display app on Mac or Windows, and my code still works the same. Even if I have a custom CPU design, I can still emulate it using other instructions on another processor, at the cost of some performance, but the code runs the same way. Finally, I can approach storage the same way, using the OS mmap syscall and a solid understanding of the device layer. A 2GB SDcard image and a 2GB file are largely indistinguishable once you've written the 20 lines of assembler. In fact, that is true for networks, graphics, files, audio, mice, and keyboards; do the work once and you're done forever.
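
Here's a minimal sketch of the storage half of that claim, with "disk.img" as a stand-in name; point the open() at a raw device node instead and nothing else changes:

    /* Minimal sketch: a 2GB SDcard image and a 2GB file look the
     * same once mapped. "disk.img" is a stand-in for either one.  */
    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define STORE_SIZE (2UL << 30)    /* 2GB */

    int main(void) {
        int fd = open("disk.img", O_RDWR);  /* or the raw device node */
        if (fd < 0) return 1;

        uint8_t *store = mmap(NULL, STORE_SIZE,
                              PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (store == MAP_FAILED) return 1;

        /* No file system: an address is just where on the bus
         * (or in the image) the data lives.                    */
        store[4096] = 0x42;                  /* a byte at block 8 of 512 */
        msync(store, STORE_SIZE, MS_SYNC);   /* flush when the buffer needs flushing */

        munmap(store, STORE_SIZE);
        close(fd);
        return 0;
    }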

Back in the 80s, we wrote our own everything. I wrote assembler routines to write text to the text buffer region, planar memory layouts for different graphics cards, and PWM on the speaker, over and over again. Every app we wrote required first building all of the wheels, assembling the axles and chassis, and then writing your app. After you've written a CGA graphics routine a few times, you just start including your best code in each new project. By the 1990s, you had to learn VGA and SB16 registers, but hacks like mode X made doing graphics so much simpler, with less bit swizzling. By 1994, every app I could want to write I could write from scratch, and that's when I started running Linux as my desktop. All the device drivers, X, and the Internet, combined with fewer well documented components as dependence on OS vendors like Microsoft grew, made it nigh impossible to do everything low level. The last machines I had full access to the video card on were a set of Matrox cards in 2000. For 20 years the pressure for layers has been growing, and for the last 13, those layers have largely won. But those layers aren't healthy or even helpful.

As I've peeled back the layers, I've found myself rediscovering the joy of programming, and building more and more componentry, faster and faster. I now have all of the code to program like it is 1983, but with 2014 hardware. My binaries are 2-3k in size for a full application, start nearly instantly, and are portable across operating systems. I'm once again questioning basic assumptions, like: are files necessary? Do I really need a database? Why bother with HTML? Seriously, why bother? Browsers are now fast enough to fake being a memory mapped bitmap device. Just bit blit data over a WebSocket, display it on a canvas, and treat the page as a frame buffer. If you draw your graphics using old school frame buffer techniques, it will work on a Linux /dev/fbX device, on an application using my framebuffer daemon, or on an HTML5 canvas with my frame buffer daemon port. Audio is the same problem. Text rendering is easy enough, antialiased line drawing is trivial, and Bézier curves are a day's work. Once you've done it, you're done.
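
As a sketch of the /dev/fbX case, the C below pokes pixels the old school way against a real Linux framebuffer device; it assumes a 32 bits-per-pixel mode and ignores line stride for brevity. The same pixel loop, pointed at my framebuffer daemon or a canvas-backed port, wouldn't change:

    /* Minimal sketch: the same old school pixel loop against a Linux
     * /dev/fbX device. Assumes a 32bpp mode and that the line stride
     * equals the visible width, for brevity.                         */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) return 1;

        struct fb_var_screeninfo vi;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &vi) < 0) return 1;

        size_t len = (size_t)vi.xres * vi.yres * (vi.bits_per_pixel / 8);
        uint32_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) return 1;

        /* Draw a diagonal line the 1983 way: poke pixels by offset. */
        for (uint32_t i = 0; i < vi.yres && i < vi.xres; i++)
            fb[i * vi.xres + i] = 0x00ffffff;

        munmap(fb, len);
        close(fd);
        return 0;
    }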

I am now finding myself building layers on my own foundation. I am writing less and less core code, and more and more application code. As networking was one of the first things I addressed, I now reuse applications via network RPC. Since I long ago solved the "button problem", I reuse my button code. With my storage code, I never need to write another store, and I connect to my stores over the network so they are always available. Text consoles? I've got text consoles galore. So I've built my own layer problem. The problem with the old layers was that I had to write to them. Each program required solving the same goddamn problems over again, and each OS and each application required solving the problem again in some stupidly trivial but different way. By designing the interfaces the way I would design my idealized hardware, I've fixed the hard problem by fixing the hardware interface problem. And now that my hardware interface doesn't suck, I'm designing my hardware to match. The argument for operating systems is that all these hardware vendors produce shitty interfaces and won't expose their documentation, so we need this layer of indirection to solve the shitty interface problem. We need ever more complex hardware because all this shitty software wants to own the hardware, and the software needs to be protected from other shitty software, but that requires more shitty libraries to share across more and more shitty apps. So by piling shit on top of shit, we end up with a huge heaping pile of shit so deep the lower levels are fossilizing.

But we can solve this problem by designing interfaces that don't suck, buying open hardware with clean interfaces, and building applications that work with the hardware. And then we can solve the layer problem.