Using Canvas More Effectively

Over the past couple of years, I have been using HTML5 canvas tags more and more. Most applications I build now start with a canvas tag and a few core object overrides, and the result is a cross-platform 2D application framework that can actually perform better than DOM manipulation and CSS3 transforms. It is also a great playground for showing off the hard-won tricks learned doing Mode X programming back in days of yore. Do you remember how to lay out a pixel in planar format? Remember when you had to initialize your graphics card by bit-banging ports to tweak it into a 480x360 display? Luckily for you if you do, HTML5 canvas is both far easier and faster.

One of the key principles of doing anything efficiently with canvas is getting your scales right. If you only set your canvas width and height in CSS, you'll find your resulting images have the wrong size and aspect ratio. The reason is that the canvas has its own coordinate system and will be scaled like any other image by CSS. To keep from pulling my hair out, when I grab the 2d or webgl context, I also explicitly set the canvas element's height and width attributes to the observed size. This keeps my pixels square and lined up.
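That sync step can be sketched in a few lines; the helper name `fitCanvasToDisplay` is mine, not a standard API:

```javascript
// Sync a canvas element's backing-store size (its width/height attributes)
// with its CSS layout size, so one canvas pixel maps to one CSS pixel.
// The helper name fitCanvasToDisplay is illustrative, not a built-in.
function fitCanvasToDisplay(canvas) {
  // clientWidth/clientHeight reflect the CSS-determined size in CSS pixels.
  if (canvas.width !== canvas.clientWidth) canvas.width = canvas.clientWidth;
  if (canvas.height !== canvas.clientHeight) canvas.height = canvas.clientHeight;
}
```

Call it once before grabbing the context, and again whenever the element's layout size changes.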

The next bit is how to handle events. Most web apps attach events to individual items in the scene graph, I mean DOM, and then pray that capture and bubble just happen to work once all of the scripts are in place. In canvas-heavy apps, especially on touch devices, it doesn't pay to use the built-in event model. Sure, you can attach events to the canvas, but you still have to build a translation layer to integrate them into your scene graph. In most of my apps, I just take ownership from the document on down, using capture, stopPropagation, and preventDefault. The end result is that I can use my scene graph and widget logic to filter events by hand. It also means I can easily dispatch events to all active elements.
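Taking ownership at the top can look something like this; `dispatchToScene` is a stand-in for your own scene-graph routing, not a real API:

```javascript
// Intercept pointer events during the capture phase, before any child
// element sees them, and route everything through our own logic.
// dispatchToScene is a placeholder for the app's scene-graph router.
function takeOwnership(root, dispatchToScene) {
  const handler = (event) => {
    event.stopPropagation();  // don't let capture continue downward
    event.preventDefault();   // suppress the default browser behavior
    dispatchToScene(event);   // hand off to our own hit-testing/routing
  };
  for (const type of ['mousedown', 'mousemove', 'mouseup',
                      'touchstart', 'touchmove', 'touchend']) {
    // The third argument `true` registers for the capture phase.
    root.addEventListener(type, handler, true);
  }
  return handler;
}
```

In a real app `root` would typically be `document`, so every event is seen exactly once, at the top, before any bubbling can happen.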

By creating a multi-receiver send in the base controller class, I can dispatch events to objects that aren't even in the scene graph. Since I am not relying on DOM events beyond the initial controller, I can define application-specific events, as well as translate and collate events. Mouse events, touch events, and keyboard arrow keys can all be mapped to the same set of application-specific events. This allows not just for correct behavior across different input devices, but also makes it possible to alter that behavior through configuration. If you have ever changed your key settings in a game, you know what I mean.
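One way to sketch that translation layer: a normalizer that maps raw input onto app-specific events, and a multi-receiver send that broadcasts to every registered object. The event names (`point`, `nudge-left`) and function names here are illustrative choices, not from any library:

```javascript
// Map raw DOM input into application-specific events. The app event
// names and the key bindings table are illustrative, configurable choices.
function normalize(domEvent, bindings) {
  switch (domEvent.type) {
    case 'mousedown':
      return { type: 'point', x: domEvent.clientX, y: domEvent.clientY };
    case 'touchstart': {
      // Touch and mouse collapse into the same app-level event.
      const t = domEvent.touches[0];
      return { type: 'point', x: t.clientX, y: t.clientY };
    }
    case 'keydown':
      // bindings is a configurable map, e.g. { ArrowLeft: 'nudge-left' }.
      return bindings[domEvent.key] ? { type: bindings[domEvent.key] } : null;
    default:
      return null;
  }
}

// Multi-receiver send: every registered object gets the event, whether
// or not it lives in the scene graph.
function makeSender() {
  const receivers = new Set();
  return {
    add: (r) => receivers.add(r),
    send: (appEvent) => { for (const r of receivers) r.receive(appEvent); },
  };
}
```

Remapping keys then becomes a matter of swapping the `bindings` object, with no change to the receivers.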

This additional layer of indirection can actually be a huge performance win. In the next generation of browsers, the setInterval timeout will be capped at a minimum of 4ms (this change won't hit Firefox 4, which is limited to 10ms). Most devices don't fire DOM events at nearly this rate, but consider it the maximum rate at which you can fire off 'draw' events. 250fps is not really achievable in any actual application on current hardware, but we can dream of a day when it is. If you are capturing events, though, the time from when an event is first handled to the start of the next draw is the minimum latency of a response. By capturing an event at the first stage of the capture phase, we can avoid walking the entire scene graph through capture and bubble. If we have a scene where thousands of objects update their state based on a frequent event, like a move event, we can flatten the entire dispatch process and contact each node only once. It also means we can still model each object in the scene as if it were just receiving a message, because that is all that is happening.
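A sketch of that flattening, assuming each scene object exposes a `receive` method; the names are mine and the coalescing policy (keep only the latest move) is one reasonable choice among several:

```javascript
// Coalesce high-frequency events (like mousemove) and deliver them in a
// single flat pass over the nodes on the next draw tick, instead of
// re-walking capture and bubble paths for every raw event.
function makeFrameDispatcher(nodes) {
  let pendingMove = null; // only the latest move matters before a draw
  return {
    onMove: (appEvent) => { pendingMove = appEvent; },
    // Call once per draw tick (e.g. from the draw timer or rAF callback).
    flush: () => {
      if (!pendingMove) return 0;
      for (const node of nodes) node.receive(pendingMove); // one contact per node
      const delivered = nodes.length;
      pendingMove = null;
      return delivered;
    },
  };
}
```

Each node still just receives a message; it never knows whether ten raw events or one arrived since the last frame.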