Roles vs Activities

At work, Gonçalo sent round a link to an article by Michael Lopp on management and engineers. Lots of discussion ensued, much of it fairly tangential.

This discussion got me thinking about the difference between roles and activities, and I’m going to sketch out these ideas here.


It is easy to talk about roles: a Project Manager does X, a Product Manager does Y; a Developer does φ, a Tester does χ, and an Architect does ψ. This thinking encourages us to assign roles to people: to turn them into jobs.

Lopp talks about ownership, and certainly, if a role is assigned to a person, you know where to go to get answers in that domain. If I have a Project Manager, I know who will give me updates on the progress of the project; if I have a Tester, I know who to ask to test a piece of functionality.

But it’s also about blame. If I have a Project Manager, then I know who to shout at if the project falls behind schedule; if I have a Tester, I know who to sack if a vulnerability is released that leaks personal data.

And I wonder whether the focus on individuals filling roles not only encourages this focus on blame, but remains attractive when a culture of blame persists.

Getting away from blame

At my work, when something goes horribly wrong, we carry out a blame-free post mortem. We establish the facts of the incident, acknowledge the aggravating factors, take note of the mitigating factors, and come up with a plan for the future. During this process we recognise that people did things, but don’t get hung up on criticising them for their actions; rather we try to understand why they acted in the way they did, and how we can make this better next time.

This approach differs radically from the traditional approach of declaring ‘heads will roll’, initiating a witch hunt, and ensuring that the persons responsible are at the very least made to feel thoroughly shitty, and quite possibly relieved of their duties.

In conducting a blame-free post mortem, we are not interested in roles and responsibilities, we are interested in actions and activities. It matters less who acted than what action was taken; not who failed to act, but what action would have helped.


Recovery from incidents is smoother if we focus on activities rather than roles, actions rather than people; can this shift in emphasis help elsewhere? I think it can.

Again, where I work we don’t have Architects. This doesn’t mean we don’t do Architecture: we do it all the time. Our team whiteboards are decorated with particoloured diagrams of the systems we’re building. We treat Architecture as an activity, not as a role, and this means that many people are involved, understanding of the decisions is pervasive, assumptions are more likely to be challenged, and single points of failure are less likely to exist.

And what happens if we make a catastrophically bad architectural decision? Well, there is no one to point the finger at, no convenient repository for blame, as the decision was collective and consensual. Instead, we can recognise that the decision was poor, learn from that, and adapt and move on.


I have seen how a limited shift in focus from roles to activities can work well. I wonder whether a more comprehensive shift would have further advantages. This isn’t to suggest that everyone should be engaged in all activities all of the time, but rather that by introducing flexibility, collaboration and sharing, we might be able to move further away from a culture of blame.

Encapsulation isn’t just about code

In programming, we use the concept of encapsulation. Think of a hi-fi: if I add a radio to my setup, I expect to have a box with a few buttons on the front, and a few sockets on the back. This is the interface of the radio, and I expect it to be simple and predictable. What I don’t want is to have to get out a soldering iron to connect it to my amp, or, even worse, to have to fiddle around on the circuit board whenever I want to change the station.

We routinely apply this principle to software, but I’m beginning to wonder whether we should apply it to other systems too.
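In code, the hi-fi analogy might be sketched like this (a toy example in TypeScript; all names are invented for illustration):

```typescript
// the interface is the sockets and buttons: small, simple, predictable
interface Radio {
  tuneTo(frequencyMHz: number): void;
  play(): string;
}

// the circuit board stays behind the panel: callers never touch it
class FmRadio implements Radio {
  private frequencyMHz = 88.0; // internal state, hidden from consumers

  tuneTo(frequencyMHz: number): void {
    this.frequencyMHz = frequencyMHz;
  }

  play(): string {
    return "playing " + this.frequencyMHz.toFixed(1) + " MHz";
  }
}
```

A consumer only ever holds a `Radio`; the tuner internals can be reworked freely without breaking anyone.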

Let’s take a purely hypothetical example.

The Systems team at Brenda’s Badgers are making some infrastructural changes. One of these is to retire a server called Sauron, which is well past its prime and keeps failing unexpectedly. They send out an email to the dev team announcing the imminent decommissioning of Sauron, and asking for any potential problems to be flagged up.

Now, a couple of years ago it was decided that, rather than referring to servers by name, it would be a good idea to give them service names, which could resolve to their IP addresses. Sauron is used for various purposes, so it goes by several service names. The developers always use these service names, and few of them were even around in the days of Sauron.

So, when the email comes round about Sauron’s imminent demise, no one is too concerned; this sounds like a piece of legacy infrastructure, so it must be someone else’s problem.

Crisis is narrowly averted when a developer in the Sow Care API Team does a search through their code, and finds a single reference to Sauron. They find this is being used in the same context as references to the service names, and decide to do a bit more investigation, which uncovers the various services behind which this machine hides. A last-minute panic ensues, and, a couple of days behind schedule, Sauron is finally laid to rest.

[Now it seems to me that this has all come about because of a failure of encapsulation. The dev team shouldn’t need to know that there is a server called Sauron. Their concern is that there are machines that fulfil the services they depend on. By expecting the dev team to know about Sauron, the Systems team are leaking details of the internal implementation to the outside world. Instead, they should be using the common vocabulary of the service names.

Furthermore, if all the references to these services use the proper service names, rather than referring directly to Sauron, then the responsibilities of these services can be moved across to other boxes by changing the DNS configuration, rather than doing a full release, which removes a significant amount of fragility.]

So, in a post-panic retrospective, the Systems and Dev teams at Brenda’s Badgers got together to discuss why the process was so fraught. They agreed on the following changes:

  • In future, all code should exclusively use service names, to remove the fragility of pointing directly to individual boxes.
  • In future communications, the Systems team would refer to these boxes by the service names that point to them, rather than by their externally irrelevant names.
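Concretely, the agreement might look something like this hypothetical DNS zone fragment (all service names here are invented for illustration; the post deliberately elides the real ones):

```
; version-numbered service names resolving to the box that currently fulfils them
sow-care-api-v2    IN CNAME  sauron.brendasbadgers.example.
badger-feed-v1     IN CNAME  sauron.brendasbadgers.example.

; retiring Sauron then only means repointing the records, with no code release:
; sow-care-api-v2  IN CNAME  shelob.brendasbadgers.example.
```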

And everyone lived happily ever after.

[Of course, this isn’t really the end of the story: the service names themselves are smelly, as they seem to include version numbers. Further investigation might reveal multiple servers with load balancing distributing calls amongst them; this in turn might prompt questions about whether the dev team should even know about the individual machines, or should just be able to treat the services as black boxes.]

So, the moral of this little fable is that the notion of encapsulation is not only useful when writing code, but can also have applications in wider contexts: communicate data that need to be shared with a consistent vocabulary, and hide everything that doesn’t need sharing.

What features of natural languages should programmers investigate?

We watched a presentation by Ola Bini on Expressing abstraction—Abstracting expression, and the language geek in me got very excited when he introduced some natural language examples of different modes of expression.

But I was soon disappointed by his choices:

  • simile
  • metaphor
  • redundancy

Now, these are perfectly worthy topics for consideration, but if we really want to explore alien paradigms, how about we venture further afield? Here are a few:


The State Pattern, explored through the medium of cake!

At 7digital, we are running a weekly discussion group on design patterns, and I have been creating some small projects to explore the issues raised in these discussions. The other week my colleague Emily presented the State Pattern, and it soon became clear that this is one of the more controversial patterns, as it seems to ride roughshod over the SOLID principles.

In this post I will give a bit of background, and then discuss the various approaches illustrated in my project.

The State Pattern

The classic case for the State Pattern is when an object’s behaviour changes according to the state it is in. A vending machine can only sell when it contains products, a shopping basket can only be checked out when it has products in it, a person can only get divorced if they are already married.

This pattern is implemented as an extension of the Strategy Pattern: the context object (vending machine, shopping basket, person) has a dependency on a state object, which in turn implements the behaviour (sell, check out, divorce) in an appropriate manner. Unlike cardinal examples of the Strategy Pattern, in which the strategy is set once and left alone, the State Pattern lets the strategy change during execution — often as the result of a method call on the state itself (if the vending machine sells the last product, it then becomes empty; if the only item is removed from the basket, it cannot be checked out; if the person gets divorced, they are no longer married).


An immediate criticism of the State Pattern is that it leads to a system that is fairly tightly coupled. Not only is the context coupled to the state in order to make method calls, but a mechanism is then needed for updating the state, and a classical implementation of this is for the state to be coupled back to the context — a two-way coupling that is already a code smell.

Other implementations are possible: the context can update its own state after each state method call, or the state method can give the successor state as a return value; however, putting this question aside, it is the State Pattern’s lack of SOLIDity that makes it most problematic.


Consider a context object that exposes several methods and has several states. Now consider that only some of these methods will be appropriate for any given state of the object (just as in the examples given above). Under the classic State Pattern, each concrete state must implement a method for every delegated method of the context object, even if it is not appropriate for it to do so (often implemented by throwing an exception).

First, this is a violation of the Single Responsibility Principle, as each concrete state now has responsibilities that it is designed to shirk. If we consider the ‘reason to change’ criterion, each concrete state has two: a) a change to the implementation of the stuff it does do, and b) a change to the implementation of failing to do the stuff it does not do.

More graphically, this is a violation of the Liskov Substitution Principle. This principle requires interchangeable objects (eg, those with the same interface) to behave in the same way, ie, given the same input conditions, they produce the same output conditions. The problem with the State Pattern is that it requires each concrete state to have different behaviour from its peers: violation of the LSP is seemingly baked into this pattern.

Finally, this pattern can lead to a violation of the Interface Segregation Principle, insofar as a multiplicity of methods on the context class (which may be justified as being appropriate to that object) can then delegate to a multiplicity of methods on the state interface, which, given their state-dependence, will no longer form a coherent collection.


Let’s take a break from theory here, and look at my implementation.

I considered the stereotypical State Pattern examples, but let’s be honest: there are quite enough vending machines and shopping baskets in the world, and discussion of design patterns is not the context to be locking horns with Cardinal O’Brien.

So I thought I would use a culinary example instead.

Sharlotka (шарлотка) is a Russian apple cake. It is made by filling a cake tin with chopped apples, then pouring an eggy batter over the apples and baking in a warm oven till the batter has set. It is served dusted with sugar and cinnamon, and is really rather lovely.

It also makes a good, simple example of the State Pattern: each stage of cooking must be carried out in sequence, but there’s a nice loop in the middle, where you have to keep checking the cake until it’s cooked through before turning it out.

Classic Implementation

My first implementation follows the classic pattern.

You can see that there is one ISharlotkaState interface, which contains a method for each of the calls that the Sharlotka context object delegates to its state. Each of these methods takes a reference to the Sharlotka as a parameter, so it can alter its state if need be. (The Sharlotka is passed with the interface IHasState&lt;ISharlotkaState&gt; to avoid making the State part of the public interface of the Sharlotka class itself.)

If you look at any of the concrete states (eg, ReadyToAddApplesState), you will see that most of the methods are left throwing a WrongStateException, as well as the mechanism for updating the context’s state. The _successor state is injected in the constructor, as this makes it possible to leave the mechanics of wiring everything together to the IoC container, rather than having to new up states within other states. In a small way this is reminiscent of the Chain of Responsibility Pattern, and does something to alleviate the tight coupling smell.
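As a rough sketch of this shape (in TypeScript rather than the project’s C#, with only a few methods, and all details reconstructed from the description above rather than taken from the actual code):

```typescript
interface IHasState<T> {
  state: T;
}

// one monolithic interface: every state must implement every method
interface ISharlotkaState {
  addApples(context: IHasState<ISharlotkaState>): void;
  bake(context: IHasState<ISharlotkaState>): void;
  serve(context: IHasState<ISharlotkaState>): void;
}

class WrongStateException extends Error {}

class ReadyToAddApplesState implements ISharlotkaState {
  // the successor is injected, so an IoC container can wire the chain together
  constructor(private successor: ISharlotkaState) {}

  addApples(context: IHasState<ISharlotkaState>): void {
    context.state = this.successor; // advance the context to its next state
  }

  // every other method must still be implemented, only to shirk its duty:
  bake(_context: IHasState<ISharlotkaState>): void {
    throw new WrongStateException("not ready to bake");
  }
  serve(_context: IHasState<ISharlotkaState>): void {
    throw new WrongStateException("not ready to serve");
  }
}
```

The boilerplate of throwing WrongStateException from every inappropriate method is exactly the SRP and LSP trouble discussed above.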

If you want to unit test one of the concrete states (eg, ReadyToAddApplesStateTests) then you are again left with a lot of boilerplate code.

This implementation highlights some of the deficiencies of the classic State Pattern implementation; however, it does still work, and may be appropriate for cases where it is not so much the behaviour of the context that is dependent on state, but rather its internal implementation.

Segregating the Interfaces

A more parsimonious approach has been suggested by Jan van Ryswyck, and I have implemented a version of this as the Small Interface project.

The key difference here is that rather than having a single, monolithic interface for all the states, the individual behaviours are broken into their own interfaces. When the context comes to call each state method, it first checks whether it can cast the state to the appropriate interface, and only then makes the call.
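A sketch of this approach (again TypeScript standing in for the project’s C#, with names reconstructed from the description and only one behaviour shown):

```typescript
// each behaviour gets its own atomic interface
interface ICanAddApples {
  addApples(): void;
}

class WrongStateException extends Error {}

class Sharlotka {
  constructor(private state: object) {}

  addApples(): void {
    // the context checks whether the current state supports this behaviour;
    // throwing is now the context's job, so states no longer shirk anything
    const s = this.state as Partial<ICanAddApples>;
    if (typeof s.addApples !== "function") {
      throw new WrongStateException("not ready to add apples");
    }
    s.addApples();
  }
}
```

Each concrete state now implements only the interfaces appropriate to it, and the cast-and-check lives in one place on the context.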

This implementation makes no further efforts to remedy the tight coupling, but does make some improvements to the SOLID violations:

The Single Responsibility Principle is better supported, as each state is now only responsible for implementing the interfaces that are actually appropriate; we no longer have states also taking responsibility for throwing WrongStateExceptions, as this is done by the context object.

The Liskov Substitution Principle is better supported, as we no longer have sets of defective implementations of interface methods; instead we have used the Interface Segregation Principle to split out a set of atomic interfaces, each of which can more easily support the LSP. There are still opportunities for violation of the LSP, as it is entirely possible to implement the same interface in several states, and for each implementation to be different, but this problem is no longer inherent in the implementation.

This implementation makes good steps towards SOLIDity, but still has a major flaw, which is intrinsic to the state pattern: you can call methods on the stateful object that are simply not appropriate to its state; to continue the sharlotka example, you can call Serve before you have called Bake. Whilst doing this will throw an exception, it should arguably not be possible to do so in the first place.

A Fluent Implementation

A third possibility came up in our discussion (I think it was Greg who raised it): you can implement your stateful class with a fluent interface. You can see my version of this in the Fluent project.

Like the Small Interface implementation, this uses several atomic interfaces instead of one monolithic one. Unlike that implementation, however, the interfaces are implemented directly by the Sharlotka class, rather than being delegated to a state object, and at any point in its lifecycle the Sharlotka is only considered as an implementation of the interface that is appropriate to its current state, ie, you never have access to methods that are not currently supported.

The trick in implementing this is in the fluent idiom: rather than calling each method as a separate statement:

sharlotka.TurnOut();
sharlotka.DustWithSugar();
sharlotka.Serve();

we want to chain the methods like this:

sharlotka.TurnOut().DustWithSugar().Serve();

We can do this by making each method return this, but with a return type of the appropriate next interface.

For example, TurnOut is implemented like this:

ICanDustWithSugar ICanTurnOut.TurnOut() {
	return this;
}
which means that the next call must be a method of ICanDustWithSugar.
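The whole trick can be sketched in TypeScript (a stand-in for the project’s C#; the C# original uses explicit interface implementation, whereas here the return types alone do the work):

```typescript
interface ICanServe { serve(): void; }
interface ICanDustWithSugar { dustWithSugar(): ICanServe; }
interface ICanTurnOut { turnOut(): ICanDustWithSugar; }

class Sharlotka implements ICanTurnOut, ICanDustWithSugar, ICanServe {
  turnOut(): ICanDustWithSugar { return this; } // next step: dusting
  dustWithSugar(): ICanServe { return this; }   // next step: serving
  serve(): void {}                              // end of the chain
}

// consumers only ever hold the interface for the current state:
const cake: ICanTurnOut = new Sharlotka();
cake.turnOut().dustWithSugar().serve(); // compiles in this order only
// cake.serve();  // would not compile: serve is not on ICanTurnOut
```

State lives in the static type of the reference you hold, not in a field, so an out-of-sequence call fails at compile time rather than at runtime.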

This implementation does away with the smelly tight coupling of the State-Pattern examples, as state is represented by the return type interface of each method, rather than as a property of the Sharlotka object.

The application of the Single Responsibility Principle is rather less clear in this example, as the methods are stubbed out, rather than being implemented in any meaningful way. It is quite likely that in a real system the implementation of each method would be delegated to another object in order to honour this principle; this would look rather similar to the Small Interface implementation, with the crucial distinction that the implementation would not be responsible for updating the Sharlotka’s state.

The Liskov Substitution Principle ceases to be a question here, as we have moved to small interfaces with single implementations, and this fact also supports the Interface Segregation Principle.

Where this implementation is not so suitable is in cases where the state of the object after a method call is not clearly determined. For instance, withdrawing money from a bank account can leave the account in credit or overdrawn; in such a case the trick of returning a particular interface is not sufficient. A small example of this in my implementation is the Bake loop, and I have overcome the problem in this particular case by checking the return type until it is not null. However, this technique is already a significant departure from the fluent idiom, as it relies on runtime checking to make sure that the code actually works.
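A minimal sketch of that loop, with null standing for ‘not yet cooked’ (the timings and names here are invented, not taken from the project):

```typescript
interface ICanTurnOut { turnOut(): void; }

class BakingSharlotka {
  private minutesBaked = 0;

  // returns null until the cake has set, then the next interface
  bake(): ICanTurnOut | null {
    this.minutesBaked += 10;
    return this.minutesBaked >= 30 ? { turnOut() {} } : null;
  }
}

const oven = new BakingSharlotka();
let next: ICanTurnOut | null = null;
while (next === null) {
  next = oven.bake(); // runtime check: keep baking until the cake has set
}
next.turnOut();
```

The runtime null-check is what breaks the purely compile-time guarantees of the fluent idiom.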

There is another danger with this implementation, in that it relies on the consumer knowing to use it fluently. There is no protection against storing the return value of any method in a variable, and calling the same method more than once (indeed, this is what is done during the Bake loop), and any person coding with this interface needs to know that methods must be chained. However, the fluent idiom is fairly commonplace now, and neither of these considerations is one of code quality.

& Messe it Forth

These three implementations illustrate different approaches to the same problem. The Classic implementation is unlikely to be the best in most circumstances because of its tendency to produce Liskov-violating defective methods; the Small Interface implementation overcomes these problems, and is probably most suitable for situations where the state of the object does not change in easily predictable ways; the Fluent implementation is handy when a particular sequence of methods should be called, but less idiomatic when the sequence can branch at runtime.

There are also tantalising prospects for implementing this type of system with monads, but I’m going to leave that for another day.

Dodecaphony and JavaScript

Serial technique is a highly algorithmic way of generating musical ideas, and lends itself well to programming. In this post I’m going to sketch out a few bits of JavaScript I’ve been using to explore some of the possibilities of this technique.

(For those unfamiliar with serialism, and the 12-tone technique in particular, it is a method for producing musical — typically pitch — material by taking a list of values as a starting point. Various transformations can then be applied to generate new sequences. The multiplicity of different sequences provides variation, whilst their relationships back to the original series can give a sense of repetition and unity in the resulting music.)

Let’s consider 12-tone serialism.

A prime row can be considered as a mapping function P from a position to a pitch.

If we have a row [0, 3, 5, 8, 11, 1, 6, 10, 4, 9, 2, 7], then we can define P as follows:

P(0) => 0 
P(1) => 3 
P(2) => 5 
P(3) => 8 
P(4) => 11 
P(5) => 1 
P(6) => 6 
P(7) => 10 
P(8) => 4 
P(9) => 9 
P(10) => 2 
P(11) => 7

On this basis, we can write a JavaScript function prime which will take an argument n and return the appropriate pitch class name. (You can play along by pasting the code samples into the JS console in your browser).

var prime = (function () {
  var p = [0, 3, 5, 8, 11, 1, 6, 10, 4, 9, 2, 7];
  return function (n) {
    return p[n % 12]; // if a value of 12 or above is given, start from 0 again
  };
})();

prime(1); // => 3

prime(4); // => 11

(I’ve used a module pattern here to instantiate the variable p just once, and keep it hidden from the external scope.)

I can also write another function getPitches which will take a mapping function and apply it to an array [0..11]:

var getPitches = (function () {
  var d = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11];
  return function (callback) {
    return d.map(function (n) {
      return callback(n) % 12; // if the result is 12 or above, transpose down an octave
    });
  };
})();

getPitches(prime);
// => [0, 3, 5, 8, 11, 1, 6, 10, 4, 9, 2, 7]
// or
getPitches(function(n) { return prime(n); });
// => [0, 3, 5, 8, 11, 1, 6, 10, 4, 9, 2, 7]

We can now retrieve transformations of the row.

To get transpositions, we feed in prime(n) + x:

getPitches(function(n) { return prime(n) + 1; }); 
// => [1, 4, 6, 9, 0, 2, 7, 11, 5, 10, 3, 8]

getPitches(function(n) { return prime(n) + 4; }); 
// => [4, 7, 9, 0, 3, 5, 10, 2, 8, 1, 6, 11]

To get the Inversion row, we use 12 - prime(n):

getPitches(function(n) { return 12 - prime(n); }); 
// => [0, 9, 7, 4, 1, 11, 6, 2, 8, 3, 10, 5]

getPitches(function(n) { return 12 - prime(n) + 3; }); 
// => [3, 0, 10, 7, 4, 2, 9, 5, 11, 6, 1, 8]

To form the Retrograde we use prime(11 - n):

getPitches(function(n) { return prime(11 - n); }); 
// => [7, 2, 9, 4, 10, 6, 1, 11, 8, 5, 3, 0]

getPitches(function(n) { return prime(11 - n) + 3; }); 
// => [10, 5, 0, 7, 1, 9, 4, 2, 11, 8, 6, 3]

(The discrepancy between subtracting from 12 for the Inversion and 11 for the Retrograde is because we expect the Inversion to start on the same pitch, while the Retrograde moves the starting pitch to the end.)

And of course we can get the Retrograde Inversion with 12 - prime(11 - n):

getPitches(function(n) { return 12 - prime(11 - n); }); 
// => [5, 10, 3, 8, 2, 6, 11, 1, 4, 7, 9, 0]

getPitches(function(n) { return 12 - prime(11 - n) + 3; }); 
// => [8, 1, 6, 11, 5, 9, 2, 4, 7, 10, 0, 3]

In addition to these transformations, we can rotate the row (so, for instance, we start with the third value in the series):

getPitches(function(n) { return prime(n + 2); });
// => [5, 8, 11, 1, 6, 10, 4, 9, 2, 7, 0, 3]

And we can also experiment with multiplication:

Multiplying the argument passed into prime, viz prime(n * x), will jump between elements in the original series. If the multiplier is a factor of 12, then we will get a result that repeats a subsequence of the original; if it’s not a factor, then we will get a reordering:

getPitches(function(n) { return prime(n * 2); });
// => [0, 5, 11, 6, 4, 2, 0, 5, 11, 6, 4, 2]

getPitches(function(n) { return prime(n * 3); });
// => [0, 8, 6, 9, 0, 8, 6, 9, 0, 8, 6, 9]

getPitches(function(n) { return prime(n * 4); });
// => [0, 11, 4, 0, 11, 4, 0, 11, 4, 0, 11, 4]

getPitches(function(n) { return prime(n * 5); });
// => [0, 1, 2, 8, 4, 3, 6, 7, 11, 9, 5, 10]

getPitches(function(n) { return prime(n * 6); });
// => [0, 6, 0, 6, 0, 6, 0, 6, 0, 6, 0, 6]

getPitches(function(n) { return prime(n * 7); });
// => [0, 10, 5, 9, 11, 7, 6, 3, 4, 8, 2, 1]

getPitches(function(n) { return prime(n * 8); });
// => [0, 4, 11, 0, 4, 11, 0, 4, 11, 0, 4, 11]

getPitches(function(n) { return prime(n * 9); });
// => [0, 9, 6, 8, 0, 9, 6, 8, 0, 9, 6, 8]

getPitches(function(n) { return prime(n * 10); });
// => [0, 2, 4, 6, 11, 5, 0, 2, 4, 6, 11, 5]

getPitches(function(n) { return prime(n * 11); });
// => [0, 7, 2, 9, 4, 10, 6, 1, 11, 8, 5, 3]

getPitches(function(n) { return prime(n * 12); });
// => [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

Note the following properties of these series, for each value of the multiplier x:

  2. Maps to 6 pitches, repeated twice
  3. Maps to 4 pitches, repeated thrice
  4. Maps to 3 pitches, repeated four times
  5. Remaps all pitches
  6. Maps to 2 pitches (the fact that they are 0 and 6 is a property of the Prime row, rather than this ordering)
  7. Remaps all pitches, 0-based Retrograde of 5
  8. Maps to 3 pitches, 0-based Retrograde of 4
  9. Maps to 4 pitches, 0-based Retrograde of 3
  10. Maps to 6 pitches, 0-based Retrograde of 2
  11. 0-based Retrograde of Prime
  12. Degenerate case: maps to just one pitch

Feeding in prime(n) * x remaps the pitches, rather than their ordering:

getPitches(function(n) { return prime(n) * 2; });
// => [0, 6, 10, 4, 10, 2, 0, 8, 8, 6, 4, 2]

getPitches(function(n) { return prime(n) * 3; });
// => [0, 9, 3, 0, 9, 3, 6, 6, 0, 3, 6, 9]

getPitches(function(n) { return prime(n) * 4; });
// => [0, 0, 8, 8, 8, 4, 0, 4, 4, 0, 8, 4]

getPitches(function(n) { return prime(n) * 5; });
// => [0, 3, 1, 4, 7, 5, 6, 2, 8, 9, 10, 11]

getPitches(function(n) { return prime(n) * 6; });
// => [0, 6, 6, 0, 6, 6, 0, 0, 0, 6, 0, 6]

getPitches(function(n) { return prime(n) * 7; });
// => [0, 9, 11, 8, 5, 7, 6, 10, 4, 3, 2, 1]

getPitches(function(n) { return prime(n) * 8; });
// => [0, 0, 4, 4, 4, 8, 0, 8, 8, 0, 4, 8]

getPitches(function(n) { return prime(n) * 9; });
// => [0, 3, 9, 0, 3, 9, 6, 6, 0, 9, 6, 3]

getPitches(function(n) { return prime(n) * 10; });
// => [0, 6, 2, 8, 2, 10, 0, 4, 4, 6, 8, 10]

getPitches(function(n) { return prime(n) * 11; });
// => [0, 9, 7, 4, 1, 11, 6, 2, 8, 3, 10, 5]

getPitches(function(n) { return prime(n) * 12; });
// => [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

Note the pitch-sets that these values of x map to:

  2. Whole-tone scale
  3. Diminished 7th
  4. Augmented triad
  5. Full chromatic remapping
  6. Tritone
  7. Full chromatic remapping, Inversion of 5
  8. Augmented triad, Inversion of 4
  9. Diminished 7th, Inversion of 3
  10. Whole-tone scale, Inversion of 2
  11. Chromatic scale, Inversion of Prime
  12. Degenerate case: unison

From these two multiplicative transformations, we have new ways to define Inversion and Retrogression, and also a realm of other possible transformations. We can write something like this:

getPitches(function(n) { return prime(n * 5 + 8) * 7 + 4; });
// => [8, 1, 10, 5, 9, 7, 3, 2, 4, 11, 6, 0]

and know that the material is still explicitly derived from the original series.

These are just a few first examples of the generative power of a few simple lines of JavaScript. Next time I want to investigate what happens when you reflect the position/pitch axis, and introduce a way to deal with negative numbers, but this is enough for one evening!

Constructivism Part III

Today’s study is the cover of Generation X’s 1977 single, Your Generation:

Black, red and grey construction with 'Generation X' in bold white text

Here is my version:

Please forgive any inconsistencies in the font rendering: I couldn’t find an open-licensed font with quite the right geometry, so I fell back on Helvetica, which may render unpredictably on different platforms.

This composition uses a handful of blocks with multiple backgrounds:

  • The upper part of the image, which looks a little like the Polydor logo, is a pseudo-element with four backgrounds, from top to bottom:
    • The small red circle
    • A white masking area, which covers the lower halves of:
    • The large black circle
    • The red block on the side.
  • The title strip is actually an h1 (‘Generation’) with a span inside (‘X’). The stripes are hand-coded linear gradients, and the cross-hatching in the small square is two repeating gradient stripes laid on top of each other.
  • The red triangle is another pseudo-element, with a single, angled gradient background.

The key lesson from this exercise was how tricky it can be to get webkit gradients right. The -webkit-gradient syntax is much less intuitive than the -moz-xxx-gradient syntaxes, and the repeated gradient declaration is also something of a fiddle. As to angling the red triangle, I couldn’t be bothered with more trig, so I just used trial and error.
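For comparison, the standard syntax that has since superseded those prefixed forms makes the repeated stripes much more direct. A sketch of the cross-hatching idea, with colours and measurements invented rather than taken from the original stylesheet:

```css
/* two repeating gradient stripes laid on top of each other */
.cross-hatch {
  background-image:
    repeating-linear-gradient(45deg,  black 0, black 2px, transparent 2px, transparent 6px),
    repeating-linear-gradient(-45deg, black 0, black 2px, transparent 2px, transparent 6px);
}
```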



CSS Seesaw

It’s Friday night, so not a big one tonight.

I thought I would have a quick play with CSS transitions, and nothing seemed a better demonstration than a seesaw.

Here it is (warning: this only works in Webkit browsers at the time of writing):

A red seesaw with a black ball on it, all balanced on a black pivot

This little animation uses two transitions: one to tip the seesaw back and forth, and the other to roll the ball from one end to the other. There were only two slightly complex matters here: I needed to brush up on my A-level trig to get all the distances right, and a little sleight of hand was in order to create the triangular pivot (fire up Firebug if you want to see how I did it).
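The two transitions described above might be sketched like this (selectors, angles and distances are all invented for illustration; the original drives them differently):

```css
/* tip the seesaw back and forth about its pivot */
.seesaw {
  transition: transform 1s ease-in-out;
  transform-origin: 50% 100%;
}
.seesaw.tipped { transform: rotate(8deg); }

/* roll the ball from one end... */
.ball {
  transition: left 1s ease-in-out;
  left: 0;
}
/* ...to the other */
.seesaw.tipped .ball { left: 280px; }
```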

There are a few small bugs: poor aliasing, and a ghost top border on the seesaw element, but they can wait for another day.

And now, goodnight!

Constructivism Part II

Following on from yesterday’s work, another experiment today, and this time I chose a real piece of constructivist design to copy:

Image composed of text, semicircular blocks of colour, and diagonally placed squares and strips

The simple lines of this image make it fairly straightforward to lay out in HTML and CSS:

Here’s how I did it:

My h1 contains the text ‘WHY?’. The h1 is a 200px square absolutely positioned on the page.

To create the semi-circle, I’ve given this element two background gradients: the first uses a linear gradient to draw two blocks of colour on the page: the upper block is solid parchment colour, whilst the lower block is transparent.

In Mozilla this is marked up like this:

-moz-linear-gradient(top, #F1ECD9 100px, rgba(241, 236, 217, 0) 100px)

In Webkit it takes this form:

-webkit-gradient(linear, 0 100, 0 200, from(#F1ECD9), color-stop(0, rgba(241, 236, 217, 0)), to(rgba(241, 236, 217, 0)))

You’ll see that I’ve defined the transparent colour as rgba(241, 236, 217, 0) rather than simply transparent; I found that using simple transparent gave me some ghosting, which is clearly not the intention.

Underneath this gradient is a second background, which this time defines a radial gradient:

-moz-radial-gradient(100px 100px, circle cover, black 100px, #F1ECD9 100px, #F1ECD9 200px)

-webkit-gradient(radial, 100 100, 100, 100 100, 200, from(black), color-stop(0, #F1ECD9), to(#F1ECD9))

In each case, this draws a 100px-radius black circle, followed by an area of parchment colour.

The lipsum content with the inverted red semicircle is coded in a similar way, although I could do away with the linear gradient, as my p tags give me the perfect hooks for a parchment-coloured background without worrying about gradients. The text is shifted down with a simple padding-top rule, and the red line down the side is an equally straightforward border-right.

The large red square doesn’t actually exist: it’s an :after pseudo-element on the body tag, which is then sized, positioned and rotated. I had to give it content: ' ' to get it to appear, but otherwise it’s pure smoke and mirrors.

Finally, the three 45° links were interesting to position:

They start off as three li elements arranged normally on the screen:

Three blocks, one above the other, with space between

Next, I rotate the containing ul by 90° widdershins around its top right corner:

Three blocks next to each other with the text running bottom to top

Finally, I rotate each li by 45° clockwise around what was originally its middle-right and is now its top-centre:

Three diagonal blocks next to each other

These are then positioned absolutely on the page.
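In today’s unprefixed syntax (the original used the -moz-/-webkit- prefixed forms), the two rotation steps come down to a pair of transform rules; the selector names here are invented:

```css
/* step one: turn the whole list 90° widdershins about its top right corner */
ul.links {
  transform: rotate(-90deg);
  transform-origin: top right;
}

/* step two: turn each item 45° clockwise about its top centre */
ul.links li {
  transform: rotate(45deg);
  transform-origin: top center;
}
```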

And that’s my piece of constructivist CSS for the day.

I have one outstanding problem: the edges of my semi-circles are unpleasantly aliased. I’ve tried leaving a small gap between the colour stop points in the gradient to see if that helps, but the effect is pretty unsatisfactory. Any suggestions would be welcome!