In programming, we use the concept of encapsulation. Think of a hi-fi: if I add a radio to my setup, I expect to get a box with a few buttons on the front and a few sockets on the back. This is the interface of the radio, and I expect it to be simple and predictable. What I don’t want is to have to get out a soldering iron to connect it to my amp, or, even worse, to have to fiddle around on the circuit board whenever I want to change the station.
We routinely apply this principle to software, but I’m beginning to wonder whether we should apply it to other systems too.
Let’s take a purely hypothetical example.
The Systems team at Brenda’s Badgers are making some infrastructural changes. One of these is to retire a server called Sauron, which is well past its prime and keeps failing unexpectedly. They send out an email to the dev team announcing the imminent decommissioning of Sauron, and asking for any potential problems to be flagged up.
Now, a couple of years ago it was decided that, rather than referring to servers by name, it would be a good idea to give them service names, which resolve to their IP addresses. Sauron is used for various purposes, so it goes by the service names sett-cache-2.brendas.prod, brock-store-01.sql.brendas.prod and web-server-001alpha.www.brendas.prod. The developers always use these service names, and few of them were even around in the days of Sauron.
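Concretely, the indirection might look something like this hypothetical BIND-style fragment of the brendas.prod zone, with each service name set up as an alias for the box that currently provides it (the IP address is invented for illustration):

```
; Hypothetical zone fragment for brendas.prod.
; One physical box, three service names pointing at it.
sauron                   IN A      10.0.3.17
sett-cache-2             IN CNAME  sauron
brock-store-01.sql       IN CNAME  sauron
web-server-001alpha.www  IN CNAME  sauron
```

The CNAME records are the interface; the A record is the implementation detail.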
So, when the email comes round about Sauron’s imminent demise, no one is too concerned; this sounds like a piece of legacy infrastructure, so it must be someone else’s problem.
Crisis is narrowly averted when a developer in the Sow Care API Team does a search through their code, and finds a single reference to Sauron. They find this is being used in the same context as references to sett-cache-2.brendas.prod, and decide to do a bit more investigation, which uncovers the various services behind which this machine hides. A last-minute panic ensues, and, a couple of days behind schedule, Sauron is finally laid to rest.
[Now it seems to me that this has all come about because of a failure of encapsulation. The dev team shouldn’t need to know that there is a server called Sauron. Their concern is that there are machines that fulfil the services of sett-cache-2.brendas.prod, brock-store-01.sql.brendas.prod and web-server-001alpha.www.brendas.prod. By expecting the dev team to know about Sauron, the Systems team are leaking details of the internal implementation to the outside world. Instead, they should be using the common vocabulary of the service names.
Furthermore, if all the references to these services use the proper service names, rather than referring directly to Sauron, then the responsibilities of these services can be moved across to other boxes by changing the DNS configuration, rather than doing a full release, which removes a significant amount of fragility.]
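The benefit can be sketched with a toy model: treat DNS as a dictionary of aliases, and notice that repointing a service name is a one-line change that never touches the calling code. (The service names come from the story; the IP addresses, the replacement server "shelob" and the `resolve` helper are all made up for illustration.)

```python
# A toy model of DNS indirection. Service names are aliases; the
# physical boxes are an implementation detail behind them.
dns = {
    "sett-cache-2.brendas.prod": "sauron",
    "brock-store-01.sql.brendas.prod": "sauron",
    "web-server-001alpha.www.brendas.prod": "sauron",
    "sauron": "10.0.3.17",   # the box being retired (invented IP)
    "shelob": "10.0.3.42",   # a hypothetical replacement box
}

def resolve(name: str) -> str:
    """Follow aliases until we reach something outside the table (a crude CNAME chase)."""
    while name in dns:
        name = dns[name]
    return name

# Application code only ever mentions the service name...
print(resolve("sett-cache-2.brendas.prod"))   # currently lands on Sauron

# ...so retiring Sauron is a one-line DNS change, not a code release:
dns["sett-cache-2.brendas.prod"] = "shelob"
print(resolve("sett-cache-2.brendas.prod"))   # now lands on the new box
```

The calling code is identical before and after the migration, which is exactly the fragility the bracketed aside describes removing.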
So, in a post-panic retrospective, the Systems and Dev teams at Brenda’s Badgers get together to discuss why the process was so fraught. They agree on the following changes:
- In future, all code should exclusively use service names, to remove the fragility of pointing directly to individual boxes.
- In future communications, the Systems team would refer to these boxes by the service names that point to them, rather than by their externally irrelevant names.
And everyone lived happily ever after.
[Of course, this isn’t really the end of the story: the service names themselves are smelly, as they seem to include version numbers. Further investigation might reveal multiple servers with load balancing distributing calls amongst them; this in turn might prompt questions about whether the dev team should even know about the individual machines, or should just be able to treat the services as black boxes.]
So, the moral of this little fable is that the notion of encapsulation is not only useful when writing code, but can also have applications in wider contexts: communicate data that need to be shared with a consistent vocabulary, and hide everything that doesn’t need sharing.