Since I started programming large networks of objects, a very specific deficiency in nearly all extant programming languages has become painfully obvious: objects do not regularly change their shape to adapt to their surroundings or new state. Instead, we model objects in a classical vein, as if they were Platonic ideals. Even in behavioral systems, we build inheritance chains which prevent us from hiding unwanted behavior. Instead of exposing only those methods which make sense at a particular time, we stupidly expose all methods all the time, or continuously expose a subset of methods to friends or children all the time. Never do we allow an object to undergo the sort of transition of forms we see in the natural world.
We do a poor job at modeling simple things we need in everyday programming:
- senior citizen
In the real world we differentiate the list of acceptable behaviors based on the state of an object all the time. We don't give the same rights and privileges to all ages, nor are all ages capable of exhibiting the same behaviors. How often do you see this modeled as:
As if O(t) were a constant. Objects in any living system have a history. We expend vast amounts of resources trying to pretend that they don't. When was the last time you used a generational garbage collector? How many operations simply instantiate the same logical object over and over and over again? We don't view this cost as unnecessary, because we'd like to pretend that each instance is perfect; that is, until a few microseconds pass and it is time to discard it to the dustbin of history.
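Here is a sketch of the alternative: an object whose exposed methods track its age. All of the names here (Minor, Adult, Person) are hypothetical, invented for illustration, and the delegation trick is just one of several ways to do it:

```python
class Minor:
    """Behaviors that make sense before the age of majority."""
    def attend_school(self):
        return "in class"

class Adult(Minor):
    """Everything a Minor can do, plus adult-only behaviors."""
    def vote(self):
        return "voted"

class Person:
    """Delegates to the role that matches its current age, so the
    set of available methods changes as the object's state does."""
    def __init__(self, age):
        self.age = age

    def __getattr__(self, name):
        # Pick the role for the current state, expose only its methods.
        role = Adult() if self.age >= 18 else Minor()
        if hasattr(role, name):
            return getattr(role, name)
        raise AttributeError(f"{name!r} makes no sense at age {self.age}")

kid = Person(12)
kid.attend_school()   # fine at any age
# kid.vote()          # AttributeError: 'vote' makes no sense at age 12
kid.age = 30          # the object's state changes...
kid.vote()            # ...and 'vote' is now part of its interface
```

The point is not the particular mechanism but that `vote` simply does not exist on a twelve-year-old.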
This gave rise to a programming style in which message sends allowed objects to share code. One object could trade its DNA with another, gifting it with mutations that gave it superpowers! It also made it possible to think about objects as entities that exhibit different behaviors over time:
O(m) x M -> O'(m)
This simple view also meant that messages could also take away behavior as well:
O'(m) x M' -> O(m)
This is a very simple concept, but it throws classical object-oriented programming for a loop. You can now model changes in the states of an object as a sequence or graph of classes or types:
O(m) x M -> O'(m)
O'(m) x M' -> O''(m)
O'(m) x M'' -> O'(m)
O''(m) x M'' -> O(m)
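In a language with open objects, the DNA trade above is almost literal. A minimal Python sketch, where `grant` and `revoke` are message names of my own invention standing in for M and M':

```python
import types

class Obj:
    """An object whose exhibited behavior set grows and shrinks as
    messages arrive: O(m) x M -> O'(m), and O'(m) x M' -> O(m)."""
    def grant(self, name, impl):
        # M: a message that gifts a new behavior onto this instance
        setattr(self, name, types.MethodType(impl, self))

    def revoke(self, name):
        # M': a message that takes a behavior away again
        if name in self.__dict__:
            delattr(self, name)

o = Obj()
o.grant("fly", lambda self: "airborne")   # O x M -> O'
print(o.fly())                            # prints "airborne"
o.revoke("fly")                           # O' x M' -> O
print(hasattr(o, "fly"))                  # prints False
```

Because the behaviors live on the instance rather than the class, two objects of the same nominal type can expose entirely different interfaces at the same moment.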
By changing the exposed interfaces as the state of the object changes, we can hide inappropriate behavior from others. We can likewise ignore inappropriate behavior of others, and respond only to those messages that make sense in our current context. We can reason about our program based solely on the state diagram of the exposed behavior. And we can build in resiliency, by allowing a single object in its time to play many parts.
If we define a Class as:
Class = the set of exhibited behaviors
We can say that for any Object O at some point t there exists a Class C(t) that reflects that object's state. As such, C(t) is a mirror function which describes the MetaObject over time. But since time is an independent variable, unless O explicitly responds to the passage of time, C(t) is likewise constant.
An Object O does not exist outside of context, however; it is the recipient of some stream of Messages M(t) over a period of time (possibly infinite). This means that if M(t) is applied to O(t), and M(t) is a member of C(t), then O(t) will vary if and only if M(t) alters the set C(t). In short, an object only changes when it wants to, or when it is acted upon by an outside force that coerces it to adapt.
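Concretely, C(t) is just the set of messages the object currently answers. A toy illustration, using CPython's ability to reassign an instance's class (Caterpillar and Butterfly are my own names, and the `C` helper is a deliberately crude stand-in for the mirror function):

```python
class Butterfly:
    def fly(self):
        return "flying"

class Caterpillar:
    def crawl(self):
        return "crawling"
    def metamorphose(self):
        # The one message in C(t) that alters C(t) itself
        self.__class__ = Butterfly

def C(obj):
    """C(t): the set of behaviors the object exhibits right now."""
    return {m for m in dir(obj) if not m.startswith("_")}

o = Caterpillar()
print(C(o))        # prints {'crawl', 'metamorphose'} (in some order)
o.metamorphose()   # M(t) is in C(t) and alters it, so O(t) varies
print(C(o))        # prints {'fly'}
```

Every other message leaves C(t), and therefore O(t), exactly where it was; only `metamorphose` moves the object through time.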
You can view this as a fundamental law of programming.
Now a few of you are sputtering into your coffee: this will never fucking work, what the fuck is he smoking?!?
Let's consider the humble socket:
- Closed = open
- Connecting = connected | close
- Connected = read | write | close
- Reading = ondata | close
- Sending = onsent | close
- Closing = closed
On the left is the state of the socket, née Class; on the right is the set of behaviors that make sense in that state. If the Socket object's public and private interfaces change to allow only those messages that make sense for the current state, we can reason about the state transitions with confidence:
- Closed open -> Connecting
- Connecting connected -> Connected
- Connecting close -> Closing
- Connected read -> Reading
- Connected write -> Sending
- Connected close -> Closing
- Reading ondata -> Connected
- Reading close -> Closing
- Sending onsent -> Connected
- Sending close -> Closing
- Closing closed -> Closed
And voilà, a well-defined state machine which will ignore any attempt to read from a closed socket or write before it connects, and which can navigate all the marvelous complexity of synchronizing with the hardware. Oh, and the math is easy too!
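The transition table collapses into a minimal sketch in which any message outside the current state's set is simply ignored. This is a toy model of the idea, not a real socket; the state and message names are lifted straight from the table:

```python
class Socket:
    """Each state answers only the messages that make sense there;
    anything else is silently ignored and the shape is unchanged."""
    TRANSITIONS = {
        ("Closed", "open"): "Connecting",
        ("Connecting", "connected"): "Connected",
        ("Connecting", "close"): "Closing",
        ("Connected", "read"): "Reading",
        ("Connected", "write"): "Sending",
        ("Connected", "close"): "Closing",
        ("Reading", "ondata"): "Connected",
        ("Reading", "close"): "Closing",
        ("Sending", "onsent"): "Connected",
        ("Sending", "close"): "Closing",
        ("Closing", "closed"): "Closed",
    }

    def __init__(self):
        self.state = "Closed"

    def send_message(self, message):
        nxt = self.TRANSITIONS.get((self.state, message))
        if nxt is not None:   # message is in C(t): transition
            self.state = nxt
        return self.state     # otherwise: ignored, nothing to guard

s = Socket()
s.send_message("read")       # ignored: a Closed socket can't read
s.send_message("open")       # -> Connecting
s.send_message("connected")  # -> Connected
s.send_message("read")       # -> Reading
```

A fuller version would hide the invalid methods outright rather than table-check them, but even this form needs no guard clauses at the call sites.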
But, but, but, you sputter, what about this fringe case?
Look, you do this already, but you write tons of fucking guard code, or event-based FSMs, or you pretend you do this and hack up a bunch of code that constantly throws exceptions and crashes in production. When you have millions of processes all talking to each other over a distributed network, you can't be trying to account for every possible message that might come your way. Someday, somewhere, you're going to get a message that just doesn't make sense, or is late, or is a duplicate, and you won't have a proper guard against it. This methodology sidesteps the problem and adopts a new viewpoint in which the problem of synchronization is hidden behind encapsulation.
This is a much better way to do it.