DAVE'S LIFE ON HOLD

A Framebuffer Server

There's been this project I've thought about doing for years, but never seemed to find the three hours necessary to implement it. Until last night. The basic idea is that you memory map a file to a texture, and use the MAP_SHARED flag to expose that texture to multiple programs. Rather than each program owning its own window, they all merely bitblit to the shared memory object. The second feature I wanted was the ability to send a raw frame to a TCP port and have it displayed. The TCP server code merely listens on a port and then reads one full screen from the client before disconnecting. This way you can effectively share screens over the network between servers.
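The mapping itself is only a few lines of C. Here's a minimal sketch, assuming the 720p ABGR layout described below; the helper name map_framebuffer and the error handling are mine, not lifted from the actual source:

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define FB_WIDTH  1280
#define FB_HEIGHT 720
#define FB_SIZE   (FB_WIDTH * FB_HEIGHT * 4)   /* 4 bytes per ABGR pixel */

/* Map a file as a shared pixel buffer. Every process that maps the
   same file with MAP_SHARED sees every other process's writes. */
uint32_t *map_framebuffer(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0666);
    if (fd < 0) return NULL;
    ftruncate(fd, FB_SIZE);             /* make sure the file is big enough */
    void *p = mmap(NULL, FB_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                          /* the mapping outlives the fd */
    return p == MAP_FAILED ? NULL : (uint32_t *)p;
}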

To implement it the other night, I decided to use SDL2. The advantage of SDL2 is that it runs just fine on both Mac OS X (which doesn't have a /dev/fbX device list) and Linux (which usually does, though it's often owned by X anyway). I also get cool beans features like hardware acceleration, scaling, and texture streaming on devices that support them. It also happens to have an ok set of networking APIs, which means that if I ever feel like porting to Windows, I can. The basic design is also simple.
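It looks something like the following sketch (a reconstruction, not the actual source; the tick callback, the 16ms interval, and the screen.rgba path are all illustrative guesses, and map_framebuffer is the helper sketched above):

#include <SDL2/SDL.h>

/* Timer callback: push a user event so the main loop redraws. */
static Uint32 tick(Uint32 interval, void *param)
{
    (void)param;
    SDL_Event e = { .type = SDL_USEREVENT };
    SDL_PushEvent(&e);
    return interval;                    /* re-arm the timer */
}

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER);
    SDL_Window *win = SDL_CreateWindow("fb", SDL_WINDOWPOS_CENTERED,
        SDL_WINDOWPOS_CENTERED, FB_WIDTH, FB_HEIGHT, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
    SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ABGR8888,
        SDL_TEXTUREACCESS_STREAMING, FB_WIDTH, FB_HEIGHT);
    uint32_t *fb = map_framebuffer("screen.rgba");

    SDL_AddTimer(16, tick, NULL);       /* ~60fps */
    SDL_Event e;
    while (SDL_WaitEvent(&e)) {
        switch (e.type) {
        case SDL_USEREVENT:             /* timer fired: blit the mapped file */
            SDL_UpdateTexture(tex, NULL, fb, FB_WIDTH * 4);
            SDL_RenderCopy(ren, tex, NULL, NULL);
            SDL_RenderPresent(ren);
            break;
        case SDL_QUIT:
            SDL_Quit();
            return 0;
        }
    }
    return 0;
}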


It basically boils down to a timer callback and a giant switch statement. The renderer code handles mapping the file's contents to hardware at 60fps on my machine, and some fairly rudimentary socket code that incrementally receives a full buffer into the texture makes it trivial to nc some rgba data to a port and see it. I tested the file mapping interface with:

tiff2rgba -c none $1 rgba;
dd if=rgba conv=notrunc of=$2

And the network interface with:

cat rgba | nc -4 localhost 6601
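On the receiving end, the socket code can be about as dumb as this sketch (plain BSD sockets here; I won't swear the real code doesn't use SDL's networking add-on instead, and serve_one_frame is just an illustrative name):

#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

/* Accept one client, read exactly one full frame into the shared
   buffer (nc delivers it in smallish chunks), then disconnect.
   FB_SIZE is the constant from the mmap sketch above. */
void serve_one_frame(uint32_t *fb)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(6601);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    int cli = accept(srv, NULL, NULL);
    char *p = (char *)fb;
    size_t left = FB_SIZE;
    while (left > 0) {
        ssize_t n = read(cli, p, left);
        if (n <= 0) break;              /* client finished or went away */
        p += n;
        left -= n;
    }
    close(cli);
    close(srv);
}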

The image data needs to be in the correct format and resolution: I picked ABGR encoding and a 720p frame because that's high enough resolution for most monitors. In the future, I will separate the network code from the core server and add a second port for sending compressed video. But as compression adds complexity and it already works as is, I may just avoid it entirely in the end.
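For scale: one raw 720p ABGR frame is 1280 x 720 x 4 = 3,686,400 bytes, so a full 60fps stream over the network would run somewhere around 220MB/s, which is presumably where a compressed port would earn its keep.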

What I like about this solution is that it is the antithesis of the typical windowing solution. I just want a simple old-school VESA-like interface, without the complexity of sharing a window. Building scene graphs, specifying user interactions, managing palettes, and all the other detritus of GUIs are things to avoid until you can't get away without them.

In the future, I would like to send the same screen data over WebSockets to a browser to be displayed on a canvas. I can also go the other way, and have a canvas display on the remote Framebuffer server. I may implement a PostScript / PDF / HTML5 Canvas style rendering library which uses the Framebuffer server as its display target. There's a surprisingly low barrier to doing just that. That too can be controlled via a very simple Forth implementation that takes commands over a socket and then renders to a memory mapped region.

Or I can just keep dd'ing files onto my screen.