Deliberate Technical Debt: an example

Bank notes and coins

In a team discussion this morning we decided to incur some technical debt. This particular example is a nice illustration of the concept, so I thought I would share it.

The Context

We’re working on a proof-of-concept project that will extract some data from a file input, transform this dataset by adding data from various other places, and then load the enhanced data into an output file. (The eagle-eyed may have spotted that this is a classic extract–transform–load (ETL) pattern.)

The input data will be coming to us in a single, large file of comma-separated values (CSV). To deal with the volumes of data, we will be reading this file into a database for processing.

The additional data are extremely heterogeneous: some sources give us a single numeric value per record, while one gives us more than a thousand data points. We expect to preserve the rough shape of the data throughout the process, but there may be some translation between the domain languages of the systems we interact with. These datasets are delivered to us in JavaScript Object Notation (JSON) format.

The output data format will also be CSV, but as the data will be significantly larger and more complex, we haven’t yet agreed how many files will be produced, or exactly how they will be structured.

There are currently plans for just one successful run of this process. As soon as we have an output file we are happy with, the project will be complete, and no further use of this code is planned. For this reason, we are not building this as a production system, but rather as a tool that can be run on a development machine. However, there’s a strong possibility that, if this project is successful, we’ll be asked to carry out similar work in future, so we want to create decently designed, adequately tested code so we can pick it up again. We also hope that this proof-of-concept will create demand for a production system with these responsibilities, so we would like to use this project as a learning exercise.

The Question

As I mentioned, one of the external datasets contains more than a thousand data points. We receive the data as a set of code-value pairs, with the same set returned for every record. We needed to decide how to store this dataset in our database.

If this dataset had had five data points, then this wouldn’t have needed discussion: we would have decided to create five new fields in the appropriate table. However, maintaining even a dozen additional fields increases the risk of errors, as you need to be able to guarantee consistency in data models, SQL creation scripts, object-to-SQL mappings, test data, and so on. Adding another data point presents further work, as you may need to make updates in all of these locations. (If our axis of changes is the set of supported data points, then this design breaks the open-closed principle, as extension of the system’s capabilities entails modification of existing implementation.)

If these are problems for a dataset with a dozen data points, then a dataset with over a thousand data points doesn’t bear thinking about. We needed to weigh up our options.

The Options

We identified three options, including our default choice for comparison:

  0. Create a field in the appropriate table for each code.
  1. Create a linked table with foreign key, code and value fields.
  2. Store the data in a blob.
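To make the comparison more concrete, here is a minimal sketch of what each option might look like as a schema. The table and column names (and the use of SQLite) are purely illustrative, not our actual model:

```python
# A sketch of the three storage options. SQLite and the table/column
# names are stand-ins for our real database and data model.
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# Option 0: one column per code -- every new data point means a schema change
# (and matching changes to mappings, test data, and so on).
conn.execute("CREATE TABLE record_wide (record_id INTEGER PRIMARY KEY, "
             "code_0001 REAL, code_0002 REAL /* ... and ~1,000 more columns */)")

# Option 1: a linked code-value table -- new codes need no schema change.
conn.execute("CREATE TABLE record (record_id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE record_datapoint (
                    record_id INTEGER REFERENCES record(record_id),
                    code      TEXT,
                    value     REAL)""")

# Option 2: dump the whole dataset into a single blob column.
conn.execute("CREATE TABLE record_blob (record_id INTEGER PRIMARY KEY, datapoints TEXT)")
conn.execute("INSERT INTO record_blob VALUES (?, ?)",
             (1, json.dumps({"ABC1": 0.3, "XYZ9": 12.5})))
```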

We weighed these options by both their Cost to Implement and their potential Cost of Change. We know we will incur the cost to implement, whereas the cost of change is more speculative. There is a chance that we will revisit this code in future, and that new data points may be added to the dataset, but we don’t know when or whether this will happen.

As I’ve already explained, Option 0 has a high cost to implement and a high cost of change.

Option 1 has a moderate cost to implement, as it means creating an additional table, and code for mapping code-value pairs from the application model into this database schema. On the other hand, the cost of change is fairly small: we might want to add a value or two to our test data sets. (This design observes the open-closed principle: extending the behaviour of the system to support another code can be accommodated without modifying any existing code.)

Option 2 has low cost to implement, as we can create a single field and simply dump the data in there. Its cost of change seems to be low as well, as any additional data points would automatically get passed through. (It’s worth noting that this might be a terrible design for a long-running production system, as any previously stored data would be missing the new data points, and it would be pretty messy to update them; for a one-off run this isn’t a problem, and the design is perfectly acceptable.)

Based on these criteria, Option 2 seems to be a winner: it has low cost to implement, and low cost of change. What’s not to like?

Mixing Concerns

Option 2 manifests a significant design smell: it mixes the concerns of data retrieval, data storage and data presentation — the Transform and Load phases of the ETL pipeline. This smell is not shared by options 0 and 1, as they don’t make any reference to the output structure.

We receive the raw dataset as a JSON file, and will be outputting it as CSV. However, we haven’t yet agreed the exact structure of this CSV, and a huge dataset like this may raise another set of questions about file structure and normalisation. We don’t want to delay this work while these conversations continue.

Furthermore, the raw data we receive use a set of short codes that only make sense if you refer to the documentation. The recipients of our output data may be happy to receive the data in this format, but they may want us to use more explicit codes.

This has two implications. First, we know for certain that our output format differs from the raw format of the dataset, so some mapping will have to take place. Second, we have three choices for where that mapping happens:

  • We could map the raw dataset to the output format before we store it, and then add it verbatim to the output. This keeps the Load phase of the ETL job very lightweight, but introduces output logic to the Transform phase. It also means the Transform phase becomes aware of the output model, which is a mixing of concerns.
  • We could store the raw dataset, and then map it before adding it to the output. This keeps the Transform phase lightweight, and introduces mapping logic into the Load phase. It also means the Load phase becomes aware of the raw format, which is another mixing of concerns.
  • We could map the raw dataset to a third format before storing it, and then map it from that format before outputting it. This avoids the mixing of concerns, but at the expense of creating two sets of mapping.

All of this mapping is an extra layer on top of the object-to-SQL mapping we would already be doing, and so it adds to the cost of change for any alterations to the output structure. As we can only guess at the planned output structure, the likelihood we incur this cost is close to 100%.
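As an illustration of the shape of the third choice, here is a minimal sketch in which the Transform phase maps raw codes to a canonical name and the Load phase maps those names to the output format. All the codes, names and mappings here are invented:

```python
# A sketch of the third mapping choice: raw -> canonical -> output.
# The codes, names and mapping tables are invented for illustration.
RAW_TO_CANONICAL = {"ABC1": "average_height", "XYZ9": "total_weight"}
CANONICAL_TO_OUTPUT = {"average_height": "AVG_HEIGHT", "total_weight": "TOT_WEIGHT"}

def transform(raw_datapoints: dict) -> dict:
    """Transform phase: knows the raw format, knows nothing about the output."""
    return {RAW_TO_CANONICAL[code]: value for code, value in raw_datapoints.items()}

def load(canonical_datapoints: dict) -> dict:
    """Load phase: knows the output format, knows nothing about the raw codes."""
    return {CANONICAL_TO_OUTPUT[name]: value for name, value in canonical_datapoints.items()}

print(load(transform({"ABC1": 1.82, "XYZ9": 74.0})))
# {'AVG_HEIGHT': 1.82, 'TOT_WEIGHT': 74.0}
```

The two mapping tables here are the two sets of mapping mentioned above: the price of keeping the phases ignorant of each other is that both mappings have to be maintained.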

Taking out the Loan

We had quite a conversation about these risks. Was this guaranteed cost of change enough to swing us towards another option?

We decided it wasn’t. Even with the additional cost of adapting this implementation to our preferred output structure, the overall cost of option 2 is less than that of the other options.

This doesn’t change the fact that we’re incurring technical debt: we can guarantee that we will have to change this code, and that there will be rework (waste) when we do so; but we’re confident that what we gain now is worth the small additional cost we’ll pay in a couple of weeks’ time.

Tools we used to take our stand-up online

I’ve posted several times on here about team stand-ups (Reinvigorating a daily stand-up by walking the board, Quick fixes to stop ignoring your builds, Be more transparent about your meetings: put them on the board, How we collect team achievements with kudos cards). If you read these posts, you may notice that I’m a big fan of offline techniques: a physical whiteboard, index cards, sticky notes, and, if necessary, print-outs of online content. A certain global pandemic and the ensuing move to remote working made me confront this preference and forced me to find new, online ways to visualise our work and environment in our stand-ups.

I work for a very Microsoft-centred company, and this influences our choice of tools. I hope that the spirit, if not the detail, of these techniques can be easily transferred to whatever tools you have available.

Start with Coffee

We hold our stand-up every day at 11:45. Yes, this was at my instigation, as I’m not a morning person, so I like to run our stand-up at a time when I’m awake and alert. We chose a spot just before lunch (at 12:00!) so it would close off the morning’s work, rather than interrupting anything.

When we were working in the office together, we would often hold our stand-up and then drift off to lunch in small groups. Now we’re remote that’s not possible, so we’ve instituted a team coffee break at 11:30. We open an online meeting and have fifteen minutes of informal chat before we switch over to the day’s stand-up discussions.

When the time comes, I share my screen and we get started on the first discussion point.

Open an online meeting for an informal chat before stand-up.

Web Pages for Everything

I learnt long ago that it’s very easy to forget to discuss anything that’s not immediately visible on your team board. I apply this same principle now we’ve moved online.

Whilst physical boards can be configured any way you choose, they only have a finite amount of space. Online sprint boards tend to be somewhat less flexible, but we can compensate for this by creating additional web pages with further discussion points on them.

Create additional web pages for further discussion points.

Use the Widgets

We track our work items in Azure DevOps, and the platform comes with a pretty flexible dashboard system, with a good selection of widgets. You can create whizzy graphs and visualise the status of your Build and Release pipelines.

However, my favourite two widgets are perhaps the most mundane: Markdown and Embedded Web Page.

We use the Markdown widget to add custom text and status messages to many of our dashboards, and I particularly enjoy the way you can create checklists and lists of links to other dashboards.

We use the Embedded Web Page widget for something even more important: a clock! I couldn’t find a widget that would allow me to embed a clock on a dashboard, and I find it rather useful to timestamp the page, particularly if we’re going to screenshot it for sharing. To solve this problem, I created an Azure Function that returns a snippet of HTML formatted with the same font used on the dashboard, and I embedded it on the page with an Embedded Web Page widget.
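For reference, a clock function along these lines can be tiny. This is a rough sketch using the Azure Functions Python programming model, not my actual function; the font and formatting are assumptions you would adjust to match your own dashboard:

```python
# A sketch of an HTTP-triggered Azure Function that returns a timestamp
# as an HTML snippet, for embedding via the Embedded Web Page widget.
# The font and layout are assumptions; match them to your dashboard.
import datetime

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    now = datetime.datetime.now().strftime("%A %d %B %Y, %H:%M")
    html = ('<div style="font-family: \'Segoe UI\', sans-serif; font-size: 2em;">'
            f"{now}</div>")
    return func.HttpResponse(html, mimetype="text/html")
```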

If there’s no widget support for what you want, create a web page elsewhere and use the Embedded Web Page widget.

Bookmark Everything

We currently use six separate pages for our stand-up. In Chrome I’ve created a folder in my Bookmark bar for all these tabs (to bookmark several pages at once, type Ctrl + Shift + D), and I can open them all at once by wheel-clicking on the folder.

Our six pages are:

  • Stand-up Overview
  • Team Stories Board
  • DevOps Stories Board
  • Pairing Staircase
  • End-to-End Test Status Dashboard
  • Calendar

Create a folder for all your stand-up pages, so you can open them all at once.

Stand-up Overview

This is the starting and finishing point for our stand-up discussion. It’s implemented as a dashboard with the following sections:

  • A panel with the sprint number, sprint name, and a checklist of our goals. This is implemented as a Markdown widget, and we edit the content as we complete the goals.
  • A date/time clock, implemented as an Embedded Web Page.
  • A panel with links to the other pages we use in our stand-up, and links to personal recognition pages on our company performance system. This is another Markdown widget.
  • A Sprint Overview widget.
  • A Cycle Time widget.
  • A Burndown Chart widget.
  • A Velocity widget.

We often find ourselves looking at the Burndown chart, which gives us a sense of whether our sprint is playing out as we hoped it would, and our Cycle Time chart, which is helpful for assessing our pace and flow of work.

Sometimes simple is best. Use a Markdown widget to track your goals.

Team Stories Board

This page comes the closest to a recognisable team board. We use the Boards feature of Azure DevOps, with some customisations.

  • We set our board to display stories, as this is the level at which we track our work.
  • We don’t filter by sprint, as we use this board to visualise our Product Backlog and Sprint Candidates as well as the items in our current sprint.
  • We use custom styles to indicate which stories are part of this sprint, which are overdue, and which are in our backlog.
  • We use tags and tag styles to give a quick indication of blocked stories (we also link these stories to their blockers).
  • We organise our board into seven columns, just as we did on a physical board:
    • Product Backlog. This corresponds to the New Status.
    • Candidates. A story becomes a candidate when it is refined and pointed.
    • Sprint Backlog. These are the stories chosen at sprint planning, plus any extras.
    • In Development.
    • Ready for Demo. These stories are essentially development-complete, but we demonstrate them to each other before closing them.
    • Done. This is a holding area so we can celebrate the completion of each story at stand-up.
    • Closed.
  • In addition, we split the central five columns into five swim lanes, and use these to track various other types of work:
    • Development contains our standard work items.
    • Meetings and Discussions visualises the many meetings I participate in.
    • Retro Actions
    • Kudos. As we can no longer hand each other kudos cards, we put them on the board.
    • To discuss. This swimlane is a hack. I drop an item in here if there’s something extra I want to discuss at stand-up.

Use swimlanes to visualise non-development work and topics for discussion

DevOps Stories Board

We have a DevOps team and rely on them to provide and configure infrastructure. Many of our stories are dependent on work being done by the DevOps team. We work closely with a member of the DevOps team, and he regularly attends our stand-ups.

Every day we look at the DevOps board, filtered to show only those tasks raised by our team. This gives us a chance to check that we’ve provided all the information needed to complete any tasks we’ve raised.

You can filter other teams’ boards to see just those tasks raised by your team.

Pairing Staircase

In the office I would print out a templated pairing staircase for each sprint, and we would use a bingo dabber to mark who had paired with whom.

When we moved online, I spruced up the Excel spreadsheet I had made for printing. I added a sum to the right of each person’s initials, and used Conditional Formatting Colour Scales to make it pretty.

A pairing staircase implemented as an Excel Spreadsheet
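If Excel isn’t an option, the same shape is easy to sketch in code. This is just an illustration of the structure (a count per pair, plus a total per person); the initials and sessions are invented:

```python
# A sketch of the pairing staircase as data: a count of sessions per pair,
# plus a per-person total (the 'sum to the right'). Initials are invented.
from collections import Counter

people = ["AB", "CD", "EF", "GH"]
sessions = [("AB", "CD"), ("AB", "EF"), ("CD", "EF"), ("AB", "CD")]

pair_counts = Counter(tuple(sorted(pair)) for pair in sessions)

print("     " + " ".join(f"{p:>4}" for p in people) + "  sum")
for row in people:
    counts = [pair_counts.get(tuple(sorted((row, col))), 0) for col in people]
    print(f"{row:>4} " + " ".join(f"{c:>4}" for c in counts) + f" {sum(counts):>4}")
```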

End-to-End Test Status Dashboard

We’ve written several suites of tests that run typical client journeys against our system. Failures in these tests can indicate instabilities across our ecosystem.

These tests run against each of our environments, and are built using Azure DevOps Release Pipelines, as these make it straightforward to run the same steps against several environments.

We use the Deployment Status widget to show the results of each of these suites of tests, and gather all these widgets on one dashboard, along with a clock (in an Embedded Web Page widget) and a notes panel (in a Markdown widget) to give details of any ongoing investigations and outstanding bugs.

We also send a screenshot of this dashboard to our colleagues every day so they can see how well we’re doing at keeping our environment stable.

Use the Markdown widget to give details of ongoing investigations and outstanding bugs.

Calendar

Our final page is our team Calendar in Azure DevOps, which automatically shows us our sprint diary. To this we add:

  • Holidays. Whenever I sign off someone’s leave, I add it to this calendar as well.
  • Office closed days and other events.
  • Our weekly build duty rota, which determines who investigates any issues.
  • Days we dedicate to working on our goals.
  • Planned releases.

By discussing this daily, we make sure it’s up to date and there are no nasty surprises.

Check your calendar daily to make sure there are no nasty surprises.

Back to the Overview

We end the stand-up by returning to the Overview dashboard.

Any stories we’ve closed will now be reflected in our charts (after a quick refresh of the page), and we take a moment to update the checklist of our goals.

We put out a final call for any other business, and wish each other a happy continuation of the day.

“Any more for any more? … Happy Wednesday!”

Me, at the end of every stand-up (give or take the name of the day!)

Analysis and Synthesis in Software Production

A dry stone wall

I’ve been thinking about some unproductive discussions I’ve had recently about software production methodologies, discussions where we’ve seemed to be talking across each other, rather than settling on a clear statement of our differences. In cases like this, it’s often the case that we agree on the structure of our arguments, but that there is a fundamental difference in our assumptions or values.

I’ve also been thinking about (apparent) dichotomies, inspired in part by Stephen Jay Gould’s fascinating book, Time’s Arrow, Time’s Cycle: myth and metaphor in the discovery of geological time. In this book he investigates two concepts of time that have shaped the science of geology. I write ‘apparent’ in parentheses, as it becomes clear that these concepts are not in opposition, but rather in creative tension with each other.

This led me to look at my conversations about software production through the lens of another apparent dichotomy: analysis versus synthesis. As in Gould’s example, I don’t consider these concepts to be in opposition; rather, our decision about which of them to emphasise, which to give primacy, has a profound impact on the way we approach software production.

Analysis

Analysis, ἀνάλυσις, is the breaking (λῦσις, from λύω, to untie, detach) up (ἀνά) of a problem. In software production, this describes taking a set of requirements or a system, and breaking it into smaller pieces, each of which is easier to reason with. This is an essential tool for creating software of any complexity, as reasoning with the entirety of the system is beyond most human abilities.

Synthesis

Synthesis, σύνθεσις, is the putting (θέσις, from τίθημι, to put, place) together (σύν) of a solution. In software production, this describes assembling parts, whether they are method calls, classes, packages or deployable components, into a system. This is an essential tool for creating software of any complexity, as the many moving parts need to interact with each other.

The Analyst Doctrine

There is a school of thought in software production that strongly favours analysis over synthesis. According to this school, software can be successfully produced by analysing the solution into various components, which can then be developed in isolation. Bringing these components back together (synthesis) should be trivial if the analysis has been adequate; if problems arise, then this is a consequence of either inadequate analysis or incorrect implementation.

This doctrine goes hand-in-hand with loosely coordinated development teams. After all, with adequate analysis, the software developers should have all the information they need to write the software, and the interdependencies between the constituent components have been taken care of during the analysis phase. The majority of coordination will be about scheduling, so alert and energetic project management is important.

Testing is focused on the components, ensuring that they conform to their requirements. Where components have dependencies on each other, these can be abstracted away with the use of system mocks, which are straightforward to create, as the contracts have been established.

As combining the components is considered trivial, this can be delayed until development and testing on the individual components has been completed. There can then be a short phase of testing across the entire system to demonstrate that it works as planned, before release to customers.

In my opinion, this is a recipe for disaster. In particular, the system testing phase is rarely trivial, and often takes a dedicated Quality Assurance (QA) team significant amounts of effort, as they find themselves testing all possible routes through the system and uncovering plenty of unexpected behaviour, which is then reported to development teams as bugs.

If the development teams are attempting to work in an ‘Agile’ way and deliver features incrementally, then the work of the project manager becomes even more important, as the delivery of capabilities in the constituent systems needs to be coordinated for each testing phase.

The Synthesist Doctrine

There is also a school of thought in software production that favours synthesis over analysis. According to this school, software can only be successfully produced by bringing together the constituent parts as early and often as possible. This school sees the greatest potential for complexity and uncertainty in the interactions of the components, and seeks to minimise risk by testing the underlying assumptions continuously.

This doctrine goes hand-in-hand with highly networked teams. It is expected that the complexities of the components’ interactions will only become apparent over time, and it is important that any simplifying assumptions are revised as soon as possible. Scheduling becomes a global rather than local question, and it’s much more important to ensure that requirements evolve and priorities are revisited, rather than focusing on meeting delivery dates.

Testing occurs at all levels, but particular value is given to testing that the entire system works as expected. These tests are the ultimate proof that the customers’ needs have been met and that the software is fit for purpose. Mocking remains a fundamental practice, but it is most suitable for lower-level tests, and whole-system tests try to exercise all integration points.

As getting the interactions right is prioritised over the detail of the individual components, they may start as broad sketches of the expected behaviour, and complexities and edge cases are added as they become necessary. Indeed, some of the initially desired behaviour may never make it into the final system. As combining the components into a whole system and exercising it with tests happens continuously, there is often no need for a final testing phase.

In my opinion, this methodology gives us the best chance of success.

Continuous Integration

In the preceding sections I have avoided the word ‘integration’, but I’ve skirted around this issue many times.

We refer to the act of combining any parts of software as integration, to the extent that it’s almost a synonym for ‘synthesis’. (Curiously, the word comes from the Latin ‘integer’, meaning un- (in-) touched (from tango, to touch) and implies unity and atomicity, rather than acknowledging the analysis–synthesis cycle). Whether we leave it to a final phase or do it continuously, all non-trivial software needs to be integrated at some point.

A commonly attempted practice is Continuous Integration (CI). I find it interesting that most discussion of CI focuses on integrating code changes into a central codebase, when the practice also enables us to integrate behavioural changes into a complete system. Needless to say, I believe that the pursuit of CI is a key tool in a synthetic approach to software production.

Integration Tests

Ask n developers to define integration tests, and you’ll get n+1 answers. I try to avoid the term altogether, using more specific phrases to capture specific types of test.

I talk about adapter tests, where we test the (ideally very thin) parts of our code that interact with other components and systems. I see these as developer-facing tests, and consider them an important part of a developer’s toolkit.

I’ve seen teams in an Analytic context omit adapter testing altogether, as these tests blur the boundary between an isolated, tightly specified system and the messy outside world. When this happens, a debt of uncertainty is incurred that must be paid off with interest during the integration phase.

I also talk about various types of cross-system and whole-system tests. These can be thought of as integrated tests, as they exercise an integrated system. This is the level at which we prove that the desired behaviour has been implemented, and these are often customer-facing tests, which form an important part of the team’s delivery.

It’s worth observing that integrated tests form part of both approaches, but that the tendency under the analyst doctrine is to do extensive manual testing during an integration phase, whereas under the synthesist doctrine we perform lightweight automated testing after every integration. J.B. Rainsberger argues that Integrated Tests are a Scam, and I have some sympathy for this point, so it’s worth noting that the extensive QA performed under the analyst doctrine fits Rainsberger’s critique much more closely than the lightweight whole-system testing of the synthesist doctrine.

London and Chicago

It’s interesting to notice in passing that these approaches appear to have parallels in the two styles of Test Driven Development (TDD). London-Style TDD characterises the behaviour of a system in broad brushes in its entirety, and then digs down into the details to write just enough code to implement that behaviour. Chicago-Style TDD (certainly when practised at scale) focuses on evolving well-characterised components, and then assembles them into whole systems. We can see that the London Style responds to Synthetic thinking, whilst the Chicago Style responds to Analytic thinking.

Waterfall versus Agile

The Analytic process I described above looks very much like the delivery pipeline of a Waterfall project. Indeed, the Poppendiecks in their book Lean Software Development characterise the traditional approach to software production as implementing an Analytic mindset, whilst they see a Synthetic mindset in the Lean and Agile ethos of Seeing the Whole, and building entire systems in rapid iterations.

It’s interesting to look at the projects that fall between these two approaches. As I mention above, many organisations maintain an Analytic approach, even though they attempt to introduce Agile concepts by delivering functionality in increments. The tensions created by this mixed approach can lead to more work for project managers as they attempt to coordinate teams’ priorities.

It’s Not All or Nothing

Having said all this, we must remind ourselves that Analysis and Synthesis are not in opposition to each other, but are two sides of the same coin. You cannot reassemble something that hasn’t been broken apart. Even the pure Analytic process includes a phase of synthesis, and even the most radically Synthetic project demands analysis of each increment. What is important is that these two concepts bias our decisions and working practices: whereas the Synthetic approach demands frequent small acts of analysis, the Analytic approach puts off synthesis to the final phases. It’s clear to me which approach I find safer.

How applying Theory of Constraints helped us optimise our code

The neck of a bottle of prosecco in front of a fire.

My team have been working on improving the performance of our API, and identified a database call as the cause of some problems.

The team suggested three ways to tackle this problem:

  • Scale up the database till it can meet our requirements.
  • Introduce some light-weight caching in the application to reduce load on the database.
  • Examine the query plan for this database call to find out whether the query can be optimised.

Which of these should we attempt first? There was some intense discussion about this, with arguments made in favour of each approach. What we needed was a simple framework for making decisions about how to improve our system.

This is where the Theory of Constraints (ToC) can help. Originally expounded as a paradigm for improving manufacturing systems, ToC is really useful in software engineering, both when managing projects and when improving the performance of the systems we create.

Theory of Constraints

The preliminary step in applying ToC is to identify the Goal of your system. In the case of this API, the Goal is to supply accurate data to consumers.

Now that we understand the Goal of the system, we can define the Throughput of the system as the rate at which it can deliver units of that goal, in our case API responses. We can also define the Operating Expenses of the system (the cost of servers) and its Inventory (requests waiting for responses).

The next step is to identify the Constraint of the system. This is the element in the system that dictates the system’s Throughput. In a physical system, a useful heuristic is a build-up of Inventory in front of this element. In our API, our monitoring helped us pinpoint the bottleneck.

The next three steps give us a sequence of approaches for tackling the Constraint:

  • First, Exploit the Constraint by finding local changes you can make to improve its performance.
  • Second, Subordinate the rest of the system to the Constraint by finding ways to reduce pressure on it so it can perform more smoothly.
  • Third, Elevate the Constraint by increasing the resources available to it, committing to additional Operating Expenses if necessary.

Exploitation comes first because it’s quick, cheap and local. To Subordinate you need to consider the effects on the rest of the system, but there shouldn’t be significant costs involved. Elevating the Constraint may well cost a fair amount, so it comes last on the list.

Once you have applied these steps you will either find that the Constraint has moved elsewhere (you’ve ‘broken’ the original Constraint), or it has remained in place. In either case, you should repeat the steps as part of a culture of continuous improvement. Eventually you want to see the constraint move outside your system and become a matter of consumer demand.

Applying ToC to our question

If we look at the team’s three suggestions, we can see that each corresponds to one of these techniques:

  • Scaling up the database is Elevation: there’s a clear financial cost in using larger servers.
  • Introducing caching is Subordination: we’re changing the rest of the system to reduce pressure on the Constraint, and need to consider questions such as cache invalidation before we make this change.
  • Optimising the query is Exploitation: we’re making local changes to the Constraint to improve its performance.

Applying ToC tells us which of these approaches to consider first, namely optimising the query. We can look at caching if an optimised query is still not sufficient, and scaling should be a last resort.
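As a simplified illustration of what Exploitation can look like in practice, the sketch below inspects a query plan and then makes a cheap, local change (adding an index). SQLite and the table and column names are stand-ins for our real database and schema:

```python
# 'Exploiting the constraint': inspect the query plan, then make a cheap
# local change. SQLite and the schema here are stand-ins for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(10_000)])

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

# Before: the plan reports a full table scan.
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row)  # e.g. (..., 'SCAN orders')

# The local change: an index on the filtered column.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# After: the plan uses the index instead of scanning the whole table.
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row)  # e.g. (..., 'SEARCH orders USING INDEX idx_orders_customer ...')
```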

In our case, query optimisation was sufficient. We managed to meet our performance target without introducing additional complexity to the system or incurring further cost.
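Had the optimised query not been enough, the next step would have been Subordination. A minimal sketch of in-application caching, with an invented function name and an arbitrary time-to-live, might look like this:

```python
# A sketch of light-weight in-application caching (Subordination): repeated
# requests within the TTL no longer hit the database. The names and the TTL
# are illustrative, and cache invalidation still needs thinking about.
import time
from functools import wraps


def ttl_cache(ttl_seconds: float):
    def decorator(fn):
        cache = {}

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            if args in cache:
                value, stored_at = cache[args]
                if now - stored_at < ttl_seconds:
                    return value
            value = fn(*args)
            cache[args] = (value, now)
            return value

        return wrapper
    return decorator


@ttl_cache(ttl_seconds=30)
def load_customer_summary(customer_id: int) -> dict:
    # The expensive database call would sit here; a canned value for the sketch.
    return {"customer_id": customer_id, "orders": 12}


print(load_customer_summary(1))  # hits the 'database'
print(load_customer_summary(1))  # served from the cache for the next 30 seconds
```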

Further Reading

Goldratt, Eliyahu M.; Jeff Cox. The Goal: A Process of Ongoing Improvement. Great Barrington, MA.: North River Press.

Reinvigorating a daily stand-up by walking the board

I’ve been working with a team who had a problem with focus: when I joined them, they seemed to be busy all the time, but they were frustrated that they weren’t making progress towards their sprint goals.

This situation could be seen in microcosm at the daily stand-up meeting. In this post I’m going to describe how a simple adjustment to this meeting helped us start to improve focus, morale and productivity.

The Scrum Guide gives a template for the daily stand-up meeting (which it calls the Daily Scrum):

The Daily Scrum is held at the same time and place each day to reduce complexity. During the meeting, the Development Team members explain:

  • What did I do yesterday that helped the Development Team meet the Sprint Goal?
  • What will I do today to help the Development Team meet the Sprint Goal?
  • Do I see any impediment that prevents me or the Development Team from meeting the Sprint Goal?

My team was indeed practising this technique, but it seemed that they often forgot the discipline of focusing on ‘helping the Development Team meet the Sprint Goal’, and the meeting often descended into yesterday-I-diddery, with each team member recounting all the things they did the day before in trivial detail.

It seems to me that in a stand-up of this format, each team member’s incentive becomes having something to say, rather than showing progress towards the Sprint Goal, and this produces an incentive to be busy, no matter how irrelevant or frankly counterproductive the tasks might be. In this team, I saw a lot of effort spent on support tasks (whether or not the issue was pressing), a significant amount of aimless ‘refactoring’, which was essentially yak shaving, and a tendency for team members to interrupt each other for help with lower-priority work. In effect, everyone starts prioritising busy work, rather than focusing on the team’s goals.

The other consequence of this approach was that the team’s board was a poor representation of our work: people would be working on tasks that weren’t visible on the board, and the stories that were on the board often didn’t move anywhere. The Scrum Master and I tried various approaches to coordinate the board and the stand-up reports, but the focus was still lacking.

I’ve previously worked in a Kanban environment, and the format of a Kanban standup is significantly different:

Standup meetings have evolved differently with Kanban. The need to go around the room and ask the three questions is obviated by the card wall. … The focus is on flow of work. The facilitator … will “walk the board.” The convention has developed to work backward—from right to left (in the direction of pull)—through the tickets on the board. The facilitator might solicit a status update on a ticket or simply ask if there is any additional information that is not on the board and may not be known to the team.

(Anderson, David J. Kanban, Successful evolutionary change for your technology business)

I suggested that we try this approach for a week, and see whether it helped give us more focus. As people were concerned that we might lose sight of important work, we agreed that we would walk the board first, and then run quickly round the team to see if anything was missing.

The initial results were encouraging, and several weeks later we are still walking the board, rather than going round the team. In particular, our board now contains a great deal more information on the tasks in play, and the team have got really good at carding up even small tasks so they are visible at the next stand-up. The amount of off-plan and busy work has also dropped, and this may also be a result of the focus on the tasks on the board. Perhaps my favourite development is that the tasks on the board have become much smaller: the drive to get things done is now focused on pulling small, focused tasks across the board, rather than doing busy work.

Of course this technique is no cure-all, and it took a while for the team to acquire the discipline of walking the board in order, rather than jumping in to discuss whichever task particularly excites them. However, as an incremental adaptation, I’m very pleased with its results.

A Retrospective in the Park

The other day, I facilitated a sprint retrospective in the park. The sun was shining, and we had all been working hard to complete our backlog, so it felt like a nice reward for everyone’s efforts. Holding a retrospective outdoors can also give it an energy and sense of enthusiasm that is harder to find in a small room.

I’ve run outdoor retrospectives before, and have previously followed fairly classic plans, with much arranging of index cards. This has never been a great success, as the slightest breath of a breeze can make a mess of your planning. For this retrospective, I designed a plan to avoid these problems, drawing some ideas from the Appreciative Retrospective plan.

This retrospective took an hour for a team of nine. You’ll need a pile of index cards or sticky notes, and a pen per person. Here’s how to do it:

1. Choose your location

Some people are sun lovers, whilst others, like me, burn easily and need some shade, so find a location that will work for everyone. Don’t worry too much about the state of the grass, as I suggest you conduct the retrospective standing up, if possible.

Get everyone to stand in a circle, with enough personal space for everyone, but close enough that you can hear everyone speak.

Observations: It’s nice if you can find a location where there won’t be too many distractions. We weren’t entirely successful: there was a hen party in another corner of the park, whose popping of prosecco corks and parading of an anatomically exaggerated blow-up mannequin was hard to ignore; there was also a group of male models sunning themselves noisily behind us (I’m working at a fashion company, so this is less unusual than it may sound), and three young women were smoking some interesting cigarettes upwind of us. Nonetheless, our retrospective was a success despite occasional distractions.

2. Characterise the sprint

Ask everyone to spend a couple of minutes coming up with three words to characterise the sprint; then go clockwise round the circle and ask each person to tell the team their three words.

Observations: It’s surprising how much difficulty people have sticking to three words; the important focus of this task is not the three-word limit, but getting a concise summary of the sprint.

3. Thank your neighbour

Moving anticlockwise around the team, ask each team member to thank their neighbour for something they have done during this sprint.

Observations: Our Scrum Master broke the rules by thanking the whole team for their efforts. The focus of this task is to generate a positive mood across the team, and it’s important that no one misses out on individual thanks, so I asked him to thank his neighbour for something as well.

4. Describe what went well

Hand each person three cards, and give them three minutes to write down three things that went well during the sprint. Going clockwise round the circle from a different starting point, ask each person to read out their three successes.

Observations: It’s useful for the facilitator to observe and comment on common themes, as this can help reinforce good practice.

5. Describe what could improve

Hand each person three more cards, and give them another three minutes to write down three things that could have gone even better during the sprint. Then go anticlockwise round the circle and ask each person to read out their three improvements.

6. Group the improvements

Instead of arranging the cards on a whiteboard (which isn’t practical in the park), appoint a champion for each improvement. Ask the first person to choose one of the improvements they suggested, and then get everyone else to hand this person any cards that describe a similar improvement. Keep running round the team until every card has been grouped and each champion holds just one group of cards.

Observations: There’s a chance that you’ll end up with more themes than team members, in which case you’ll have to make a decision to drop some of these themes; in our case we had fewer common themes than team members, so we didn’t have to do this.

7. Select and discuss the most common themes

Rather than dot-voting, which again is impractical in the park, select the commonest themes. Ask everyone with any cards to take a step into the circle. Then ask everyone with just one card to take a step out again; then everyone with just two cards; then three, and so on until just three people are left in the inner circle.

Then have a three-minute discussion of each of these suggested improvements, with the focus of identifying at least one action per theme for the next sprint. Ensure someone is assigned to each action.

Observations: We ran slightly over the three minutes assigned to each theme, but this wasn’t a problem; if we hadn’t had a time limit, I suspect the conversation would have been much less focused.

8. Round off the retrospective

Finally, going round the circle clockwise, ask everyone to describe how they felt about the retrospective itself.

Observations: The feedback was very positive. The team had clearly enjoyed the opportunity to get out of the office, and they felt that the session had been successful: everyone was engaged and we came up with some good actions.