DAVE'S LIFE ON HOLD

Deconstructing User Interface

The past week I've been involved in a discussion about the fundamentals of user interface design from a technical perspective.  Having been a game designer for a number of years, and having worked on several cross-platform gaming and GUI frameworks, I have had a considerable amount of in-the-trenches experience fighting with everything from 2D bit blitting and 3D scene graphs with shaders to custom-generated tiled MPEG2 streams.   Just think of the fun of tile-based graphics programming with 16x16 non-square pixels and controller responses in the 500ms range!  I've had to hack on some pretty exotic hardware as well. 

So coming at HTML5 programming, I have a natural tendency to fall back to what works. In the DOS days, before Mode-X, we had 2-color graphics (your choice of monitor: green & black, yellow & black, and if you were rich, white & black). My first personal computer had CGA 4-color graphics, and wow, talk about a shift in mentality: planar graphics!  Now you had to swizzle more bits at funky offsets in memory to display the right color!  EGA added a wonderful 16-color palette, which required constant swapping out whenever you changed your sprite set, but look at the color!  Finally we get to VGA 8-bit, then 16-bit extensions, then 24-bit, then on some weird high-end cards 30-bit, then back down to 24 for most applications. 

Welcome to HTML5 canvas.  

You can easily view an HTML5 canvas as an arbitrarily sized bitmapped display with 24-bit color graphics.  The Canvas has built in a subset of the PostScript drawing routines (the ones that PDF also implements) and a bitblit operation known affectionately as drawImage.  If you ever did much PostScript programming, you'll find this model intuitive, although the names have changed to protect against lawyers. 
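
For the sake of illustration, here's what both halves of that model look like in practice; the "screen" canvas id and "sprite.png" image are just assumptions for the sketch.

    // A minimal sketch, assuming a <canvas id="screen"> element exists in the
    // page and that "sprite.png" is an image you happen to have lying around.
    var canvas = document.getElementById('screen');
    var ctx = canvas.getContext('2d');

    // The PostScript-flavored half: build a path, then fill and stroke it
    ctx.beginPath();
    ctx.moveTo(20, 20);
    ctx.lineTo(120, 20);
    ctx.lineTo(70, 100);
    ctx.closePath();
    ctx.fillStyle = '#36c';
    ctx.strokeStyle = '#000';
    ctx.fill();
    ctx.stroke();

    // The bitblit half: copy a bitmap onto the canvas once it has loaded
    var sprite = new Image();
    sprite.onload = function () {
        ctx.drawImage(sprite, 150, 20);           // straight blit
        ctx.drawImage(sprite, 150, 120, 32, 32);  // scaled blit
    };
    sprite.src = 'sprite.png';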

But how do you actually make use of this wonderful retro interface implemented in most modern web browsers?  Well, first of all, not like the DOM.  The DOM is one of the shittiest GUI APIs ever invented. If you think of it as a scene graph API that prevents you from easily moving subtrees or determining their bounds, you've only scratched the surface of what is wrong with the DOM. 

Don't Do It That Way!  

So now you have your Canvas, and the first thing to do is get the DOM as far away from your code as possible. The first step is implementing proper controllers. I don't mean the poor excuses web frameworks use when they discuss MVC (inaccurately), but the original meaning of the term:

Mouse, Keyboard, Touchpad, Camera, Mic

By creating objects which model the physical controllers and send messages to (invoke methods of) other objects (Views and Models), we can build a proper MVC framework as in the olden days (say, the 1980s).  Best of all, we don't have to code for dozens of keyboards and mice and all sorts of drivers as in the DOS days, because the browser abstracts all that away. 

You can see my take on this here.
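
To make the idea concrete, here's a rough sketch of the kind of Mouse controller I mean; "attach" and "send" are illustrative names for this sketch, not a real library API.

    // A rough sketch of a Mouse controller that hides the DOM behind messages.
    function Mouse(element) {
        this.listeners = [];
        var self = this;
        function local(e) {
            var rect = element.getBoundingClientRect();
            return [e.clientX - rect.left, e.clientY - rect.top];
        }
        element.addEventListener('mousedown', function (e) {
            self.send('press', local(e));
        });
        element.addEventListener('mouseup', function (e) {
            self.send('release', local(e));
        });
    }
    Mouse.prototype.attach = function (object) {
        this.listeners.push(object);   // anything with press/release methods
        return this;
    };
    Mouse.prototype.send = function (message, point) {
        this.listeners.forEach(function (object) {
            if (typeof object[message] === 'function') object[message](point);
        });
    };

    // Usage: the DOM never leaks past the controller
    var mouse = new Mouse(document.getElementById('screen'));
    mouse.attach({ press:   function (p) { console.log('pressed at', p);  },
                   release: function (p) { console.log('released at', p); } });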

We can also make the Canvas itself a View, or we can model it as a Screen controller that sends draw messages to Views. The latter is something I am increasingly leaning towards, as the animation loop support in browsers is improving.   Similarly, an abstract Timer controller mimicking an old-fashioned PIC is also pretty cool for sending out a "tick" message that updates the views' and models' internal clocks.  I'm increasingly interested in a Speaker controller which mimics a volume knob and, like the Screen, fires off a "play" event to audio views to initiate the next chunk for each channel.  When you have 3D surround sound with 5 channels + bass, having control over each individual speaker becomes a godsend. 
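
A bare-bones sketch of what that Screen and Timer pairing might look like (illustrative names, not my actual framework):

    // The Screen owns the canvas and fires "draw" at its views every animation
    // frame; the Timer fires "tick" at a fixed interval, like an old PIC.
    function Screen(canvas) {
        this.ctx = canvas.getContext('2d');
        this.views = [];
        var self = this;
        (function frame() {
            self.views.forEach(function (view) { view.draw(self.ctx); });
            window.requestAnimationFrame(frame);
        })();
    }
    Screen.prototype.attach = function (view) {   // view needs a draw(ctx) method
        this.views.push(view);
        return this;
    };

    function Timer(interval) {
        this.listeners = [];
        var self = this;
        setInterval(function () {
            self.listeners.forEach(function (object) { object.tick(Date.now()); });
        }, interval);
    }
    Timer.prototype.attach = function (object) {  // object needs a tick(now) method
        this.listeners.push(object);
        return this;
    };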

Emulating low-level devices has some spectacular benefits that aren't readily apparent to those accustomed to IDEs and mammoth class libraries.  Probably the most important is that you don't waste time on scaffolding.  Let's say you have a HUD; odds are you will write a HUD view that renders it.  You'll use a bitmap for the main HUD, and then spot-draw the bits that change.  You can tune your HUD to update 5x/s and settle for 200ms refresh times, saving a ton of cycles by not drawing information faster than human cognition.   To do that on a typical GUI framework like Android means getting down and dirty with custom widgets and overriding the surface drawing routines, while being careful to avoid tearing or flashing due to HUD overlap. 
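
Something along these lines, with a hypothetical Hud view and a preloaded backdrop image standing in for the real thing:

    // Blit the static backdrop once, then every 200ms repaint only the small
    // region where the score changed.
    function Hud(ctx, backdrop) {
        this.ctx = ctx;
        this.backdrop = backdrop;                    // a preloaded Image with the HUD chrome
        this.score = 0;
        this.dirty = true;
        ctx.drawImage(backdrop, 0, 0);               // full draw, exactly once
        setInterval(this.refresh.bind(this), 200);   // 5 updates per second
    }
    Hud.prototype.setScore = function (n) {
        if (n !== this.score) { this.score = n; this.dirty = true; }
    };
    Hud.prototype.refresh = function () {
        if (!this.dirty) return;
        // Repaint just the score box, not the whole HUD
        this.ctx.drawImage(this.backdrop, 10, 10, 120, 30, 10, 10, 120, 30);
        this.ctx.fillStyle = '#0f0';
        this.ctx.fillText('SCORE ' + this.score, 16, 30);
        this.dirty = false;
    };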

The same rule applies to all manner of graphics: you don't need to refresh every bit of the Screen, just the bits that change. If you look at old 8-bit arcade games, you'll notice a tendency toward small sprites in wide fields with fast movement. This works because smoothness is a relative perception. Your brain can be tricked into filling in the missing bits, especially if your focus is on a small dot.  Parallax scrolling takes advantage of this by using two different backgrounds and having one move more often and further than the other.  The scrolling feels less jarring because our brains treat it as if we're moving fast.  We just can't fix our focal point at two distances at once.  Add a third focal point where all the action is and a bunch of flashing lights, and you've got an epic space battle on your hands. 
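
A minimal parallax sketch, assuming two preloaded background images (far and near) at least as wide as the canvas:

    // The far layer scrolls at a quarter of the near layer's speed, which is
    // really all the trick is.
    function drawParallax(ctx, far, near, cameraX) {
        var farX = -(cameraX * 0.25) % far.width;
        var nearX = -cameraX % near.width;
        // Draw each layer twice so the seam is never visible as it wraps around
        ctx.drawImage(far, farX, 0);
        ctx.drawImage(far, farX + far.width, 0);
        ctx.drawImage(near, nearX, 0);
        ctx.drawImage(near, nearX + near.width, 0);
    }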

If you are willing to forego the immensely powerful devices in your pocket and focus on the supercomputer sitting on your lap, you can take advantage of a level of hardware acceleration that makes the 8-bit gamer in me cry.  WebGL offers the potential to displace most native gaming.  The reason for this is pretty simple: 3D hardware does the heavy lifting in most modern games, and most of the actual gameplay is already implemented in a "scripting" language.   If you look at a game like Eve or World of Warcraft, you're looking at a game engine controlled by Python or Lua.  Many game engine providers have already begun using JavaScript as the glue between all of the low-level bits, and this trend has not slowed. 

So why does writing code that directly interacts with keyboard and mouse matter in a WebGL world?  Because once again there is only a single canvas DOM element representing the full gamut of your world view.  Your polys won't be addressable via the DOM, and you will need to do your own 3D collision detection to figure out what the pointer is referencing.  All of the same skills necessary for writing traditional 3D games in low-level languages are necessary in your browser as well. 
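
The simplest version of that is keeping your own list of hit boxes and scanning it on every click; in a full 3D scene the same idea becomes unprojecting the pointer into a ray and testing it against your geometry, but the shape of the code is the same. The "screen" id and box names below are placeholders.

    // Do your own picking: the browser can't tell you what was hit.
    var canvas = document.getElementById('screen');
    var hitboxes = [
        { name: 'fire',   x: 10, y: 10, w: 64, h: 32 },
        { name: 'shield', x: 90, y: 10, w: 64, h: 32 }
    ];
    function pick(x, y) {
        for (var i = 0; i < hitboxes.length; i++) {
            var b = hitboxes[i];
            if (x >= b.x && x < b.x + b.w && y >= b.y && y < b.y + b.h) return b;
        }
        return null;
    }
    canvas.addEventListener('mousedown', function (e) {
        var rect = canvas.getBoundingClientRect();
        var hit = pick(e.clientX - rect.left, e.clientY - rect.top);
        if (hit) console.log('clicked', hit.name);
    });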

The ability to break down your UI into hit boxes and bitmaps (or polys and textures) opens up a wide range of new possibilities for using the web to deliver almost any application. As browsers enable access to more and more hardware, they themselves define a unified operating system across multiple platforms.  With the growth of mobile computing, we will see a substantial increase in the volume of applications targeting mobile browsers.  But for these apps to be performant, programmers will need to deconstruct their user interfaces and relearn the techniques of the platforms that have come before.