Analysis and Synthesis in Software Production

A dry stone wall

I’ve been thinking about some unproductive discussions I’ve had recently about software production methodologies, discussions where we’ve seemed to be talking across each other, rather than settling on a clear statement of our differences. When this happens, it’s often because we agree on the structure of our arguments, but there is a fundamental difference in our assumptions or values.

I’ve also been thinking about (apparent) dichotomies, inspired in part by Stephen Jay Gould’s fascinating book, Time’s Arrow, Time’s Cycle: myth and metaphor in the discovery of geological time. In this book he investigates two concepts of time that have shaped the science of geology. I write ‘apparent’ in parentheses, as it becomes clear that these concepts are not in opposition, but rather in creative tension with each other.

This led me to look at my conversations about software production through the lens of another apparent dichotomy: analysis versus synthesis. As in Gould’s example, I don’t consider these concepts to be in opposition, but rather that our decision about which of them to emphasise, which to give primacy, has a profound impact on the way we approach software production.

Analysis

Analysis, ἀνάλυσις, is the breaking (λῦσις, from λύω, to untie, detach) up (ἀνά) of a problem. In software production, this describes taking a set of requirements or a system, and breaking it into smaller pieces, each of which is easier to reason with. This is an essential tool for creating software of any complexity, as reasoning with the entirety of the system is beyond most human abilities.

Synthesis

Synthesis, σύνθεσις, is the putting (θέσις, from τίθημι, to put, place) together (σύν) of a solution. In software production, this describes assembling parts, whether they are method calls, classes, packages or deployable components, into a system. This is an essential tool for creating software of any complexity, as the many moving parts need to interact with each other.

The Analyst Doctrine

There is a school of thought in software production that strongly favours analysis over synthesis. According to this school, software can be successfully produced by analysing the solution into various components, which can then be developed in isolation. Bringing these components back together (synthesis) should be trivial if the analysis has been adequate; if problems arise, then this is a consequence of either inadequate analysis or incorrect implementation.

This doctrine goes hand-in-hand with loosely coordinated development teams. After all, with adequate analysis, the software developers should have all the information they need to write the software, and the interdependencies between the constituent components have been taken care of during the analysis phase. The majority of coordination will be about scheduling, so alert and energetic project management is important.

Testing is focused on the components, ensuring that they conform to their requirements. Where components have dependencies on each other, these can be abstracted away with the use of system mocks, which are straightforward to create, as the contracts have been established.

As combining the components is considered trivial, this can be delayed until development and testing on the individual components has been completed. There can then be a short phase of testing across the entire system to demonstrate that it works as planned, before release to customers.

In my opinion, this is a recipe for disaster. In particular, the system testing phase is rarely trivial, and often takes a dedicated Quality Assurance (QA) team significant amounts of effort, as they find themselves testing all possible routes through the system and uncovering plenty of unexpected behaviour, which is then reported to development teams as bugs.

If the development teams are attempting to work in an ‘Agile’ way and deliver features incrementally, then the work of the project manager becomes even more important, as the delivery of capabilities in the constituent systems needs to be coordinated for each testing phase.

The Synthesist Doctrine

There is also a school of thought in software production that favours synthesis over analysis. According to this school, software can only be successfully produced by bringing together the constituent parts as early and often as possible. This school sees the greatest potential for complexity and uncertainty in the interactions of the components, and seeks to minimise risk by testing the underlying assumptions continuously.

This doctrine goes hand-in-hand with highly networked teams. It is expected that the complexities of the components’ interactions will only become apparent over time, and it is important that any simplifying assumptions are revised as soon as possible. Scheduling becomes a global rather than local question, and it’s much more important to ensure that requirements evolve and priorities are revisited, rather than focusing on meeting delivery dates.

Testing occurs at all levels, but particular value is given to testing that the entire system works as expected. These tests are the ultimate proof that the customers’ needs have been met and that the software is fit for purpose. Mocking remains a fundamental practice, but it is most suitable for lower-level tests; whole-system tests try to exercise all integration points.

As getting the interactions right is prioritised over the detail of the individual components, they may start as broad sketches of the expected behaviour, and complexities and edge cases are added as they become necessary. Indeed, some of the initially desired behaviour may never make it into the final system. As combining the components into a whole system and exercising it with tests happens continuously, there is often no need for a final testing phase.

In my opinion, this methodology gives us the best chance of success.

Continuous Integration

In the preceding sections I have avoided the word ‘integration’, but I’ve skirted around this issue many times.

We refer to the act of combining any parts of software as integration, to the extent that it’s almost a synonym for ‘synthesis’. (Curiously, the word comes from the Latin ‘integer’, meaning un- (in-) touched (from tango, to touch) and implies unity and atomicity, rather than acknowledging the analysis–synthesis cycle). Whether we leave it to a final phase or do it continuously, all non-trivial software needs to be integrated at some point.

A commonly attempted practice is Continuous Integration (CI). I find it interesting that most discussion of CI focuses on integrating code changes into a central codebase, when the practice also enables us to integrate behavioural changes into a complete system. Needless to say, I believe that the pursuit of CI is a key tool in a synthetic approach to software production.

Integration Tests

Ask n developers to define integration tests, and you’ll get n+1 answers. I try to avoid the term altogether, using more specific phrases to capture specific types of test.

I talk about adapter tests, where we test the (ideally very thin) parts of our code that interact with other components and systems. I see these as developer-facing tests, and consider them an important part of a developer’s toolkit.

I’ve seen teams in an Analytic context omit adapter testing altogether, as these tests blur the boundary between an isolated, tightly specified system and the messy outside world. When this happens, a debt of uncertainty is incurred that must be paid off with interest during the integration phase.

I also talk about various types of cross-system and whole-system tests. These can be thought of as integrated tests, as they exercise an integrated system. This is the level at which we prove that the desired behaviour has been implemented, and these are often customer-facing tests, which form an important part of the team’s delivery.

It’s worth observing that integrated tests form part of both approaches, but that the tendency under the analyst doctrine is to do extensive manual testing during an integration phase, whereas under the synthesist doctrine we perform lightweight automated testing after every integration. J.B. Rainsberger argues that Integrated Tests are a Scam, and I have some sympathy for this point, so it’s worth noting that the extensive QA performed under the analyst doctrine fits Rainsberger’s critique much more closely than the lightweight whole-system testing of the synthesist doctrine.

London and Chicago

It’s interesting to notice in passing that these approaches appear to have parallels in the two styles of Test Driven Development (TDD). London-Style TDD characterises the behaviour of a system in its entirety in broad brushstrokes, and then digs down into the details to write just enough code to implement that behaviour. Chicago-Style TDD (certainly when practised at scale) focuses on evolving well-characterised components, and then assembles them into whole systems. We can see that the London Style responds to Synthetic thinking, whilst the Chicago Style responds to Analytic thinking.

Waterfall versus Agile

The Analytic process I described above looks very much like the delivery pipeline of a Waterfall project. Indeed, the Poppendiecks in their book Lean Software Development characterise the traditional approach to software production as implementing an Analytic mindset, whilst they see a Synthetic mindset in the Lean and Agile ethos of Seeing the Whole, and building entire systems in rapid iterations.

It’s interesting to look at the projects that fall between these two approaches. As I mention above, many organisations maintain an Analytic approach, even though they attempt to introduce Agile concepts by delivering functionality in increments. The tensions created by this mixed approach can lead to more work for project managers as they attempt to coordinate teams’ priorities.

It’s Not All or Nothing

Having said all this, we must remind ourselves that Analysis and Synthesis are not in opposition to each other, but are two sides of the same coin. You cannot reassemble something that hasn’t been broken apart. Even the pure Analytic process includes a phase of synthesis, and even the most radically Synthetic project demands analysis of each increment. What is important is that these two concepts bias our decisions and working practices, and whereas the Synthetic approach demands frequent small acts of analysis, the Analytic approach puts off synthesis to the final phases. It’s clear to me which approach I find safer.

Thoughts on Trauma

A computer monitor with a fragmented image

I’ve been thinking today about trauma. Here I’m going to explore the topic a little, discuss the case of survivors of war, and think a bit about how, in public and institutional contexts, trauma can be precipitated and weaponised.

At my therapist’s the other day the topic of trauma came up, and he described a model of trauma that was arresting in its difference from my naive understanding, but profound in its implications.

Trauma, he said, occurs when our feelings are at odds with what we are told we are feeling. This can happen not only when our suffering is denied by those around us (this is gaslighting), but also when we are told we are suffering but are not.

As an example, he discussed the experience of survivors of war arriving in camps for refugees or internally displaced persons. Under previous practice, the camp workers would assume that people arriving at the camp would be traumatised, and would start their treatment on this basis. They found that rather than decreasing the levels of trauma, this actually increased them. Many of the arrivals didn’t consider themselves victims, and to have victimhood forced on them was itself traumatising.

A new approach was to attempt to understand each camp arrival on their own terms. To be ready to appreciate and value their resilience in escaping a war zone and making it to relative safety. To be ready to treat trauma if it became apparent, but all the more to recognise and celebrate the strength, experience and knowledge that each person brought.

As I look around our social and political discourse, I wonder how often we see the traumatisation of the previously untraumatised. How often do we tell people ‘you are suffering’, ‘you are traumatised’, ‘you deserve pity’, ‘you need help’, without first asking them ‘how do you feel?’, ‘what do you need?’ And by doing so, how often do we propagate traumas not rooted in primary experiences, but rather in our imposed, secondary perception of them?

And how often are these patterns of behaviour part of a self-perpetuating system? How often are these ripples of trauma deliberately propagated? How often are the newly traumatised collateral damage in our toxic political discourse?

SoCraTes BE 2019

Floréal La Roche

I’ve spent the last few days at the SoCraTes BE Unconference. Here is a brief report.

SoCraTes takes place at a holiday camp in a socialist realist château among the sheer wooded slopes of the Belgian Ardennes.

It’s an unconference, which means that rather than having a predetermined schedule, the participants apply the Open Space Principles to create each day’s schedule.

Here are some of the sessions I attended:

A group exercise creating a Wardley Map of a fictional shop. I’ve heard lots of people talking about Wardley Maps, but this was the first time I got to try them out.

A workshop practising a couple of Liberating Structures. I didn’t think I was familiar with Liberating Structures, but it turns out I’ve been using a few of them for a while! I particularly liked the Troika Consulting structure, and the problem we encountered enabled me to talk through a few techniques I’ve recently learnt from Goldratt’s Thinking Processes.

A nice group discussion on What makes a good stand-up? It turns out lots of us have encountered similar problems and found similar solutions. Hurray!

Talk like Sandi. We watched Sandi’s talk, Get a Whiff of This, and then had a group discussion about what makes her such a compelling speaker.

Code Smells quiz show. This was based on an activity I recently ran with my team, and as luck would have it, my friend Pedro, who wrote the source code for the exercise, was also at SoCraTes, so we ran this session together. It was really popular, and definitely worth repeating.

The Transport Tycoon exercise: modelling a delivery network. This gave us a good chance to compare Classic TDD and Domain Modelling techniques to understand this problem.

Making illegal state transitions impossible. A fairly involved look at modelling state machines in functional languages. We spent rather too long struggling with the language, but it was good to discuss the basic concepts.

Powerpoint Karaoke with each other’s presentations. This is always an entertaining evening activity. It’s usually played with random slide decks from the internet, but this time we challenged each other to improvise talks to presentations that other members of the group had once given.

I had a casual discussion with my friend Pedro Santos about how often personal and professional development is accompanied by pain (an idea that goes back to Aristotle: μετὰ λύπης γὰρ ἡ μάθησις), and whether we can find ways to teach and coach that break this dynamic. The use of games seems to be one approach, as they can offer a safe context for failure.

An introduction to Aikido, out by the river under the hornbeams as the sun went down.

Quick fixes to stop ignoring your builds

A printout of a team dashboard on a team board

In software development, quick feedback is fundamental, and continuously building, testing and deploying is part of our standard toolkit.

But it isn’t enough to set up and run these processes; we need to pay attention to the results. If we can’t successfully build, test and deploy our software, then we have no idea whether we’re building it right, and we risk releasing broken components. It saddens me how often I see teams that have become completely inured to failures, and the result is a predictable decline in quality and drop in team effectiveness.

Here then are a few quick and easy techniques to start paying attention to these failures.


A taste of graph theory

Postcard of buildings by a river. Caption reads ‘Königsberg Schlossteich’

I’ve recently been working with graph databases, which give us a powerful idiom for modelling and reasoning with highly interconnected data. Before I share some of my experiences, I would like to set the scene with a basic introduction to graph theory.

Simple Graphs

Graph theory is a relatively young branch of mathematics, traced back to 1736 and the work of Leonhard Euler.

A simple graph is defined as a non-empty set of vertices, e.g. V = {1,2,3}, and a set of edges, each of which is a 2-member subset of the vertex set, e.g. E = {{1,2},{1,3},{2,3}}.
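
To make this concrete, here is a minimal sketch of that example in Python (used purely for illustration), representing each edge as a frozenset so that the edge set really is a set of two-member sets:

    # A simple graph: a set of vertices and a set of 2-member edges.
    V = {1, 2, 3}
    E = {frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})}

    # Every edge must join two distinct vertices of V.
    assert all(len(e) == 2 and e <= V for e in E)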

We can visualise a graph by drawing a diagram:

simple three-vertex graph drawn as a triangle

It’s tempting, particularly for the etymologically minded, to think of a graph as a drawing. It’s certainly easier to think about graphs by visualising them, but drawings, as representations of graphs, have the potential to be misleading, so we need to exercise caution.

For example, the same graph can be drawn like this:

simple three-vertex graph drawn with curved edges and four crossings

Notice how this diagram shows four crossings, but is in fact equal to the above graph, which shows none.

Because simple graphs are defined in terms of sets, we can note some key characteristics:

  • Each vertex only appears once. V = {1,1,2,3} is not a valid set because 1 = 1: the duplicate collapses, leaving {1,2,3}.
  • An edge cannot join a vertex to itself. {1,1} is not a valid set, so it cannot be a valid element in the edge set.
  • There can only be one edge between two particular vertices. E = {{1,2},{1,2},{2,3},{1,3}} is not a valid set because {1,2} = {1,2}.
  • The edges have no direction. E = {{1,2},{2,1},{2,3},{1,3}} is not a valid set because {1,2} = {2,1}.
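
All of these follow directly from ordinary set semantics. A quick Python demonstration (again, just an illustrative sketch):

    # Duplicate vertices collapse: {1,1,2,3} is simply {1,2,3}.
    assert {1, 1, 2, 3} == {1, 2, 3}

    # A would-be loop {1,1} collapses to the single-member set {1},
    # so it cannot be a valid two-member edge.
    assert frozenset({1, 1}) == frozenset({1})

    # Duplicate and reversed edges collapse too: {1,2} = {2,1}.
    assert {frozenset({1, 2}), frozenset({2, 1})} == {frozenset({1, 2})}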

Useful variations

These restrictions are great for pure mathematics, but somewhat restrict our ability to model real-world situations. For this reason, a typical graph database relaxes the rules in various ways:

Digraphs

The first relaxation introduces a notion of direction in the edges. This means we are now dealing with directed graphs or digraphs.

We can now model a graph V = {1,2,3}, E = {(1,2),(1,3),(2,3)}:

three-vertex digraph drawn as triangle

Multigraphs

We can allow skeins: multiple edges between vertices. If we replace any edge of a graph with a skein, then we have a multigraph, and our edge set becomes a multiset, as it may contain duplicated elements.

Here we expand the basic graph V = {1,2}, E = {{1,2}} by replacing the edge {1,2} with a skein of three edges, giving V = {1,2}, E = {{1,2},{1,2},{1,2}}:

two-vertex simple graph drawn as line

becomes

two-vertex multigraph with three edges

We can also model loops by allowing single-member sets as elements of E. For example V = {1}, E = {{1}}:

one-vertex loop
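
Since the edge collection of a multigraph is a multiset, a plain set no longer suffices when sketching this in Python; one option (my choice here, not the only possible representation) is a Counter of edges, which also accommodates loops:

    from collections import Counter

    # A multigraph: the edge multiset records how many parallel edges
    # (the skein) join each pair of vertices.
    V = {1, 2}
    E = Counter({frozenset({1, 2}): 3})    # a skein of three edges between 1 and 2

    # A loop on a single vertex, modelled as a single-member edge.
    V_loop = {1}
    E_loop = Counter({frozenset({1}): 1})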

Quivers

We can also allow directed multigraphs, also known as multidigraphs or quivers.

Degrees of vertices

The degree of a vertex, deg v, is the number of edges attached to it. In a simple graph deg v = |{e : e ∈ E, v ∈ e}|. Visually you can find the degree of a vertex by counting the edges that connect to it.

The indegree of a vertex, deg– v, is the number of edges reaching it, and the outdegree of a vertex, deg+ v, is the number of edges leaving it. If we model the directed edges as tuples, then deg– v = |{e : e ∈ E, ∃x e = (x,v)}| and deg+ v = |{e : e ∈ E, ∃y e = (v,y)}|. Visually you can find the indegree of a vertex by counting the arrows that point to it, and the outdegree by counting the arrows that leave it.
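
As a rough sketch of these definitions, using the tuple representation of directed edges:

    # Directed edges as (source, target) tuples.
    E = {(1, 2), (1, 3), (2, 3)}

    def out_degree(v, edges):
        # Arrows leaving v.
        return sum(1 for (x, y) in edges if x == v)

    def in_degree(v, edges):
        # Arrows pointing to v.
        return sum(1 for (x, y) in edges if y == v)

    assert out_degree(1, E) == 2
    assert in_degree(3, E) == 2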

Walks

The power of graphs to model connected data arises when we start walking our graphs.

Mathematically, a walk is a sequence of vertices (v₁, v₂, … vₙ₋₁, vₙ) where each vertex vₓ is a member of the graph’s vertex set, and each pair of consecutive vertices vₓ, vₓ₊₁ forms an edge in the graph’s edge set.

Visually, a walk is found by placing a pencil on one of the dots on a graph diagram, and tracing along a line to another dot, then repeating the process.

When we come to look at graph databases, we will focus on ‘traversing’ them by walking along their edges.
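
Here is a small sketch of that definition for a simple graph: a function that checks whether a sequence of vertices is a walk.

    def is_walk(sequence, V, E):
        # Every vertex must belong to V, and every consecutive pair
        # of vertices must be joined by an edge in E.
        if any(v not in V for v in sequence):
            return False
        return all(frozenset({a, b}) in E
                   for a, b in zip(sequence, sequence[1:]))

    V = {1, 2, 3}
    E = {frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})}
    assert is_walk([1, 2, 3, 1], V, E)
    assert not is_walk([1, 2, 2], V, E)    # there is no loop from 2 to itself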

Adjacency

As well as a diagram, we can represent a graph with an adjacency matrix. Consider the graph V = {1,2,3,4}, E = {{1,2},{1,3},{1,4},{2,4},{3,4}}. We can draw this graph like this:

four-vertex simple graph drawn as a square with one diagonal edge

We can also create an adjacency matrix where Aij is 1 if {i,j} ∈ E and 0 otherwise.

    ⎛0 1 1 1⎞
A = ⎜1 0 0 1⎟
    ⎜1 0 0 1⎟
    ⎝1 1 1 0⎠

The top row shows how many edges there are between 1 and each vertex: none to itself, one to 2, one to 3 and one to 4.

We can easily find deg v by taking the sum of the corresponding row or column. We can see at a glance that deg 1 = 3.
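
As a sketch, we can build the adjacency matrix for this graph in Python and read off deg 1 as the sum of the first row (vertex i corresponds to row and column i - 1):

    V = [1, 2, 3, 4]
    E = {frozenset({1, 2}), frozenset({1, 3}), frozenset({1, 4}),
         frozenset({2, 4}), frozenset({3, 4})}

    # A[i][j] is 1 if the (i+1)th and (j+1)th vertices are joined by an edge.
    A = [[1 if frozenset({i, j}) in E else 0 for j in V] for i in V]

    assert A[0] == [0, 1, 1, 1]    # the top row described above
    assert sum(A[0]) == 3          # deg 1 = 3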

By multiplying an adjacency matrix by itself, we can find how many two-edge walks exist between any two vertices:

     ⎛0 1 1 1⎞   ⎛0 1 1 1⎞   ⎛3 1 1 2⎞
A² = ⎜1 0 0 1⎟ x ⎜1 0 0 1⎟ = ⎜1 2 2 1⎟
     ⎜1 0 0 1⎟   ⎜1 0 0 1⎟   ⎜1 2 2 1⎟
     ⎝1 1 1 0⎠   ⎝1 1 1 0⎠   ⎝2 1 1 3⎠

We can check that there are three two-edge walks between 1 and 1 {(1,2,1),(1,3,1),(1,4,1)}, one between 1 and 2 {(1,4,2)}, one between 1 and 3 {(1,4,3)} and two between 1 and 4 {(1,2,4),(1,3,4)}.

We can continue this trick for three-edge walks:

     ⎛0 1 1 1⎞   ⎛0 1 1 1⎞   ⎛0 1 1 1⎞   ⎛4 5 5 5⎞
A³ = ⎜1 0 0 1⎟ x ⎜1 0 0 1⎟ x ⎜1 0 0 1⎟ = ⎜5 2 2 5⎟
     ⎜1 0 0 1⎟   ⎜1 0 0 1⎟   ⎜1 0 0 1⎟   ⎜5 2 2 5⎟
     ⎝1 1 1 0⎠   ⎝1 1 1 0⎠   ⎝1 1 1 0⎠   ⎝5 5 5 4⎠

Again we can check that there are four three-edge walks between 1 and 1: {(1,2,4,1),(1,3,4,1),(1,4,2,1),(1,4,3,1)}, five between 1 and 2: {(1,2,1,2),(1,2,4,2),(1,3,1,2),(1,3,4,2),(1,4,1,2)}, five between 1 and 3: {(1,2,1,3),(1,2,4,3),(1,3,1,3),(1,3,4,3),(1,4,1,3)} and five between 1 and 4: {(1,2,1,4),(1,3,1,4),(1,4,1,4),(1,4,2,4),(1,4,3,4)}.

In general the matrix Aⁿ shows us how many n-edge walks there are between each pair of vertices.
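
Assuming NumPy is available, a short sketch reproduces these walk counts:

    import numpy as np

    A = np.array([[0, 1, 1, 1],
                  [1, 0, 0, 1],
                  [1, 0, 0, 1],
                  [1, 1, 1, 0]])

    # A^n counts the n-edge walks between each pair of vertices.
    A2 = np.linalg.matrix_power(A, 2)
    A3 = np.linalg.matrix_power(A, 3)

    assert A2[0, 0] == 3    # three two-edge walks from 1 back to 1
    assert A3[0, 1] == 5    # five three-edge walks from 1 to 2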

We can perform the same trick for multigraphs and digraphs:

Here is the quiver V = {1,2,3,4}, E = {(1,2),(1,3),(1,3),(1,4),(2,4),(2,4),(3,4),(4,4)}:

four-vertex quiver drawn roughly as a square

Here is its adjacency matrix:

    ⎛0 1 2 1⎞
A = ⎜0 0 0 2⎟
    ⎜0 0 0 1⎟
    ⎝0 0 0 1⎠

Here are the next two n-edge walk matrices:

     ⎛0 0 0 5⎞
A² = ⎜0 0 0 2⎟
     ⎜0 0 0 1⎟
     ⎝0 0 0 1⎠
     ⎛0 0 0 5⎞
A³ = ⎜0 0 0 2⎟
     ⎜0 0 0 1⎟
     ⎝0 0 0 1⎠

We can also add these matrices:

              ⎛0 1 2 11⎞
A + A² + A³ = ⎜0 0 0  6⎟
              ⎜0 0 0  3⎟
              ⎝0 0 0  3⎠

This matrix tells us that from 1 to 4 there are eleven walks of no more than three edges.
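
The same sketch works for the quiver’s adjacency matrix; summing the first three powers reproduces the eleven walks of at most three edges from 1 to 4:

    import numpy as np

    A = np.array([[0, 1, 2, 1],
                  [0, 0, 0, 2],
                  [0, 0, 0, 1],
                  [0, 0, 0, 1]])

    walks = A + np.linalg.matrix_power(A, 2) + np.linalg.matrix_power(A, 3)
    assert walks[0, 3] == 11    # walks of no more than three edges from 1 to 4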

Adjacency matrices can give us a useful way to reason about graphs without having to traverse every walk.

Conclusion

These concepts give us the basic tools for working with graph databases. In future posts I will look at how we can put them to work to model domains.

On Creation

The Creation of Adam

I wrote recently about creative thinking, but my discussions with friends and colleagues often strayed beyond creative thinking to discuss creativity and creation in general. Here are some thoughts on the nature of creation*.

Creation defined

We can say:

An act is an act of creation if something exists after the act that did not exist before it, and would not exist if the act had not occurred.

Destruction defined

We can also define destruction:

An act is an act of destruction if something does not exist after the act that did exist before it, and would continue to exist if the act had not occurred.

Equivalence of creation and destruction

We can notice how creation sometimes involves taking something away:

  • Digging a hole is an act of creation, just as making a pile of earth.
  • Carving a bowl from a block of wood is an act of creation, just as forming a bowl as a coil pot.
  • Resist printing uses wax to prevent dye taking in certain areas. Discharge printing uses bleach to remove colour from previously dyed fabrics. Applying either of these is an act of creation, just as direct printing, where the pattern comes from the application of the dye.

If we define the notion of absence:

There is an absence of something if that thing does not exist. If that thing exists, there is no absence.

Then we can draw an equivalence between the two, as an act of destruction creates an absence, and an act of creation destroys an absence.

Entropic bias

If acts of creation and destruction are logically equivalent, why do we tend to make a distinction? Why do we focus on the creative aspect of bowl carving, rather than its destructive aspect?

We seem to have a bias to see acts that decrease entropy as creative, and acts that increase entropy as destructive.

This bias goes back far beyond the formulation of the Second Law of Thermodynamics. Here, for example, is the beginning of the Judaeo-Christian creation story:

בְּרֵאשִׁית בָּרָא אֱלֹהִים אֵת הַשָּׁמַיִם וְאֵת הָאָרֶץ. וְהָאָרֶץ הָיְתָה תֹהוּ וָבֹהוּ וְחֹשֶׁךְ עַל פְּנֵי תְהוֹם, וְרוּחַ אֱלֹהִים מְרַחֶפֶת עַל פְּנֵי הַמָּיִם. וַיֹּאמֶר אֱלֹהִים: “יְהִי אוֹר”, וַיְהִי אוֹר.

In the beginning God created the heaven and the earth. And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters. And God said, Let there be light: and there was light.

That phrase ‘תֹהוּ וָבֹהוּ’, ‘ṯōhū wāḇōhū’ describes a high entropy system (‘תֹהוּ’ translates as ‘waste, emptiness, vanity’, ‘בֹהוּ’ is an emphatic reduplication) and the next six days’ action is the process of decreasing its entropy.

(It’s interesting to note how the creation of order entails the reduction of entropy. The very notion of entropy inverts our bias towards order.)

Creation and Creativity

We’ve characterised acts of creation and observed their logical, if not psychological, equivalence. Are all acts of creation creative acts?

It seems to me that we can place acts of creation on a continuum, from those that we would seldom characterise as ‘creative’ to those we would unhesitatingly characterise that way.

If we revisit our earlier examples, then it’s hard to see how digging a hole could be considered to be a creative act, although it is indeed an act of creation. Carving a bowl and textile printing seem to be further along the scale, though I suspect most of us would choose our words depending on whether we saw this act of creation as something purely mechanistic, or as bringing something additional to the act.

It seems to me that this something additional may be what we look for when we distinguish a creative act from a simple act of creation. I wonder whether this is an act of conceptual creation, a reduction in conceptual entropy, and whether this is what we mean when we speak of creative thinking. This is something for me to explore further.


* I was a hopeless philosophy student, so these thoughts come from a position of wide-eyed ignorance. I’m confident my arguments are both invalid and unsound, and that whatever value I have touched on here has been expressed with more precision and insight elsewhere. If you notice my ignorance, please tell me, so I can be wiser.

Nevertheless, there is a certain pleasure in casting a wide net over my synapses and displaying the catch, whatever worth it may have.

Creativity in Software Development

I shared yesterday’s post with some friends, who were keen to explore what we mean when we talk about creativity in software development.

Alastair made an interesting comment:

…it made me reconsider software dev as a creative endeavour, but I think I came to the conclusion that it is. For me, I think there is a gap between a creative art like writing, especially one which has an expressive mirror like acting, and a purely creative activity like, e.g., whittling a stick or constructing a building.

I think there is value in disentangling our concepts of creativity, and I find Alastair’s distinction between the creative arts and simpler forms of creation very useful.

There’s also an ambiguity in the word ‘create’, as it can refer simply to making things, as well as to the creative endeavours we would like to characterise.

So rather than ask ‘Is software development a creative activity?’, I tend to consider a narrower question: ‘Is there a place for creative thinking in software development?’

At the most basic level, I see creative thinking as making new links between concepts. Once you have made the link, you can engage other thought processes, for example deductive thinking, to explore the consequences and implications of that link.

But because the link isn’t already there, you can’t find it by rational thought; you need a leap of imagination to reach it.

There are some sorts of problem that I can tackle best once I’ve slept. On a few lucky occasions I’ve been able to take an afternoon nap, and woken up with a new idea to investigate, but this usually means taking the idea home with me and letting it brew overnight.

Here are a few examples of problems in software development that can be tackled with creative thinking:

  • How should we name this element?
  • What is the appropriate metaphor for this system?
  • Has a similar problem already been solved? Is there a pattern we can apply here?
  • What test should we write first? What test should we write next?
  • What is the best way to split this system into smaller parts?

And of course, because software development in an organisation is a social activity, the need for creative thinking extends far beyond the design of the software.