Tools we used to take our stand-up online

I’ve posted several times on here about team stand-ups (Reinvigorating a daily stand-up by walking the board, Quick fixes to stop ignoring your builds, Be more transparent about your meetings: put them on the board, How we collect team achievements with kudos cards). If you read these posts, you may notice that I’m a big fan of offline techniques: a physical whiteboard, index cards, sticky notes, and, if necessary, print-outs of online content. A certain global pandemic and the ensuing move to remote working made me confront this preference and forced me to find new, online, ways to visualise our work and environment in our stand-ups.

I work for a very Microsoft-centred company, and this influences our choice of tools. I hope that the spirit, if not the detail, of these techniques can be easily transferred to whatever tools you have available.

Start with Coffee

We hold our stand-up every day at 11:45. Yes, this was at my instigation: I’m not a morning person, so I like to run our stand-up at a time when I’m awake and alert. We chose a spot just before lunch (at 12:00!) so it would close off the morning’s work, rather than interrupting anything.

When we were working in the office together, we would often hold our stand-up and then drift off to lunch in small groups. Now we’re remote that’s not possible, so we’ve instituted a team coffee break at 11:30. We open an online meeting and have fifteen minutes of informal chat before we switch over to the day’s stand-up discussions.

When the time comes, I share my screen and we get started on the first discussion point.

Open an online meeting for an informal chat before stand-up.

Web Pages for Everything

I learnt long ago that it’s very easy to forget to discuss anything that’s not immediately visible on your team board. I apply this same principle now we’ve moved online.

Whilst physical boards can be configured any way you choose, they only have a finite amount of space. Online sprint boards tend to be somewhat less flexible, but we can compensate for this by creating additional web pages with further discussion points on them.

Create additional web pages for further discussion points.

Use the Widgets

We track our work items in Azure DevOps, and the platform comes with a pretty flexible dashboard system, with a good selection of widgets. You can create whizzy graphs and visualise the status of your Build and Release pipelines.

However, my favourite two widgets are perhaps the most mundane: Markdown and Embedded Web Page.

We use the Markdown widget to add custom text and status messages to many of our dashboards, and I particularly enjoy the way you can create checklists and lists of links to other dashboards.

We use the Embedded Web Page widget for something even more important: a clock! I couldn’t find a widget that would allow me to embed a clock on a dashboard, and I find it rather useful to timestamp the page, particularly if we’re going to screenshot it for sharing. To solve this problem, I created an Azure Function that returns a snippet of HTML formatted with the same font used on the dashboard, and I embedded it on the page with an Embedded Web Page widget.
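
For the curious, such a function needn’t be complicated. Here is a minimal sketch, assuming the in-process Azure Functions programming model; the route and font are illustrative rather than the ones we actually use:

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class Clock
{
    // Returns the current time as a fragment of HTML, styled to blend in
    // with the dashboard that embeds it.
    [FunctionName("Clock")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "clock")] HttpRequest req)
    {
        var html = "<div style=\"font-family: 'Segoe UI', sans-serif; font-size: 2em;\">"
                   + $"{DateTime.UtcNow:ddd d MMM yyyy HH:mm} UTC</div>";
        return new ContentResult { Content = html, ContentType = "text/html" };
    }
}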

If there’s no widget support for what you want, create a web page elsewhere and use the Embedded Web Page widget.

Bookmark Everything

We currently use six separate pages for our stand-up. In Chrome I’ve created a folder in my Bookmark bar for all these tabs (to bookmark several pages at once, type Ctrl + Shift + D), and I can open them all at once by wheel-clicking on the folder.

Our six pages are:

  • Stand-up Overview
  • Team Stories Board
  • DevOps Stories Board
  • Pairing Staircase
  • End-to-End Test Status Dashboard
  • Calendar

Create a folder for all your stand-up pages, so you can open them all at once.

Stand-up Overview

This is the starting and finishing point for our stand-up discussion. It’s implemented as a dashboard with the following sections:

  • A panel with the sprint number, sprint name, and a checklist of our goals. This is implemented as a Markdown widget, and we edit the content as we complete the goals.
  • A date/time clock, implemented as an Embedded Web Page.
  • A panel with links to the other pages we use in our stand-up, and links to personal recognition pages on our company performance system. This is another Markdown widget.
  • A Sprint Overview widget.
  • A Cycle Time widget.
  • A Burndown Chart widget.
  • A Velocity widget.

We often find ourselves looking at the Burndown chart, which gives us a sense of whether our sprint is playing out as we hoped it would, and our Cycle Time chart, which is helpful for assessing our pace and flow of work.

Sometimes simple is best. Use a Markdown widget to track your goals.

Team Stories Board

This page comes the closest to a recognisable team board. We use the Boards feature of Azure DevOps, with some customisations.

  • We set our board to display stories, as this is the level at which we track our work.
  • We don’t filter by sprint, as we use this board to visualise our Product Backlog and Sprint Candidates as well as the items in our current sprint.
  • We use custom styles to indicate which stories are part of this sprint, which are overdue, and which are in our backlog.
  • We use tags and tag styles to give a quick indication of blocked stories (we also link these stories to their blockers).
  • We organise our board into seven columns, just as we did on a physical board:
    • Product Backlog. This corresponds to the New Status.
    • Candidates. A story becomes a candidate when it is refined and pointed.
    • Sprint Backlog. These are the stories chosen at sprint planning, plus any extras.
    • In Development.
    • Ready for Demo. These stories are essentially development-complete, but we demonstrate them to each other before closing them.
    • Done. This is a holding area so we can celebrate the completion of each story at stand-up.
    • Closed.
  • In addition, we split the central five columns into five swimlanes, and use these to track various other types of work:
    • Development contains our standard work items.
    • Meetings and Discussions visualises the many meetings I participate in.
    • Retro Actions
    • Kudos. As we can no longer hand each other kudos cards, we put them on the board.
    • To discuss. This swimlane is a hack. I drop an item in here if there’s something extra I want to discuss at stand-up.

Use swimlanes to visualise non-development work and topics for discussion.

DevOps Stories Board

We have a DevOps team and rely on them to provide and configure infrastructure. Many of our stories are dependent on work being done by the DevOps team. We work closely with a member of the DevOps team, and he regularly attends our stand-ups.

Every day we look at the DevOps board, filtered to show only those tasks raised by our team. This gives us a chance to check that we’ve provided all the information needed to complete any tasks we’ve raised.

You can filter other teams’ boards to see just those tasks raised by your team.

Pairing Staircase

In the office I would print out a templated pairing staircase for each sprint, and we used a bingo dabber to mark who had paired with whom.

When we moved online, I spruced up the Excel spreadsheet I had made for printing. I added a sum to the right of each person’s initials, and used Conditional Formatting Colour Scales to make it pretty.

A pairing staircase implemented as an Excel Spreadsheet

End-to-End Test Status Dashboard

We’ve written several suites of tests that run typical client journeys against our system. Failures in these tests can indicate instabilities across our ecosystem.

These tests run against each of our environments, and are built using Azure DevOps Release Pipelines, as these make it straightforward to run the same steps against several environments.

We use the Deployment Status widget to show the results of each of these suites of tests, and gather all these widgets on one dashboard, along with a clock (in an Embedded Web Page widget) and a notes panel (in a Markdown widget) to give details of any ongoing investigations and outstanding bugs.

We also send a screenshot of this dashboard to our colleagues every day so they can see how well we’re doing at keeping our environment stable.

Use the Markdown widget to give details of ongoing investigations and outstanding bugs.

Calendar

Our final page is our team Calendar in Azure DevOps, which automatically shows us our sprint diary. To this we add:

  • Holidays. Whenever I sign off someone’s leave, I add it to this calendar as well.
  • Office closed days and other events.
  • Our weekly build duty rota, which determines who investigates any issues.
  • Days we dedicate to working on our goals.
  • Planned releases.

By discussing this daily, we make sure it’s up to date and there are no nasty surprises.

Check your calendar daily to make sure there are no nasty surprises.

Back to the Overview

We end the stand-up by returning to the Overview dashboard.

Any stories we’ve closed will now be reflected in our charts (after a quick refresh of the page), and we take a moment to update the checklist of our goals.

We put out a final call for any other business, and wish each other a happy continuation of the day.

“Any more for any more? … Happy Wednesday!”

Me, at the end of every stand-up (give or take the name of the day!)

Analysis and Synthesis in Software Production

A dry stone wall

I’ve been thinking about some unproductive discussions I’ve had recently about software production methodologies, discussions where we’ve seemed to be talking across each other, rather than settling on a clear statement of our differences. In cases like this, it’s often the case that we agree on the structure of our arguments, but that there is a fundamental difference in our assumptions or values.

I’ve also been thinking about (apparent) dichotomies, inspired in part by Stephen Jay Gould’s fascinating book, Time’s Arrow, Time’s Cycle: myth and metaphor in the discovery of geological time. In this book he investigates two concepts of time that have shaped the science of geology. I write ‘apparent’ in parentheses, as it becomes clear that these concepts are not in opposition, but rather in creative tension with each other.

This led me to look at my conversations about software production through the lens of another apparent dichotomy: analysis versus synthesis. As in Gould’s example, I don’t consider these concepts to be in opposition, but our decision about which of them to emphasise, which to give primacy, has a profound impact on the way we approach software production.

Analysis

Analysis, ἀνάλυσις, is the breaking (λῦσις, from λύω, to untie, detach) up (ἀνά) of a problem. In software production, this describes taking a set of requirements or a system, and breaking it into smaller pieces, each of which is easier to reason with. This is an essential tool for creating software of any complexity, as reasoning with the entirety of the system is beyond most human abilities.

Synthesis

Synthesis, σύνθεσις, is the putting (θέσις, from τίθημι, to put, place) together (σύν) of a solution. In software production, this describes assembling parts, whether they are method calls, classes, packages or deployable components, into a system. This is an essential tool for creating software of any complexity, as the many moving parts need to interact with each other.

The Analyst Doctrine

There is a school of thought in software production that strongly favours analysis over synthesis. According to this school, software can be successfully produced by analysing the solution into various components, which can then be developed in isolation. Bringing these components back together (synthesis) should be trivial if the analysis has been adequate; if problems arise, then this is a consequence of either inadequate analysis or incorrect implementation.

This doctrine goes hand-in-hand with loosely coordinated development teams. After all, with adequate analysis, the software developers should have all the information they need to write the software, and the interdependencies between the constituent components have been taken care of during the analysis phase. The majority of coordination will be about scheduling, so alert and energetic project management is important.

Testing is focused on the components, ensuring that they conform to their requirements. Where components have dependencies on each other, these can be abstracted away with the use of system mocks, which are straightforward to create, as the contracts have been established.

As combining the components is considered trivial, this can be delayed until development and testing on the individual components has been completed. There can then be a short phase of testing across the entire system to demonstrate that it works as planned, before release to customers.

In my opinion, this is a recipe for disaster. In particular, the system testing phase is rarely trivial, and often takes a dedicated Quality Assurance (QA) team significant amounts of effort, as they find themselves testing all possible routes through the system and uncovering plenty of unexpected behaviour, which is then reported to development teams as bugs.

If the development teams are attempting to work in an ‘Agile’ way and deliver features incrementally, then the work of the project manager becomes even more important, as the delivery of capabilities in the constituent systems needs to be coordinated for each testing phase.

The Synthesist Doctrine

There is also a school of thought in software production that favours synthesis over analysis. According to this school, software can only be successfully produced by bringing together the constituent parts as early and often as possible. This school sees the greatest potential for complexity and uncertainty in the interactions of the components, and seeks to minimise risk by testing the underlying assumptions continuously.

This doctrine goes hand-in-hand with highly networked teams. It is expected that the complexities of the components’ interactions will only become apparent over time, and it is important that any simplifying assumptions are revised as soon as possible. Scheduling becomes a global rather than local question, and it’s much more important to ensure that requirements evolve and priorities are revisited, rather than focusing on meeting delivery dates.

Testing occurs at all levels, but particular value is given to testing that the entire system works as expected. These tests are the ultimate proof that the customers’ needs have been met and that the software is fit for purpose. Mocking remains a fundamental practice, but it is most suitable for lower-level tests, and whole-system tests try to exercise all integration points.

As getting the interactions right is prioritised over the detail of the individual components, they may start as broad sketches of the expected behaviour, and complexities and edge cases are added as they become necessary. Indeed, some of the initially desired behaviour may never make it into the final system. As combining the components into a whole system and exercising it with tests happens continuously, there is often no need for a final testing phase.

In my opinion, this methodology gives us the best chance of success.

Continuous Integration

In the preceding sections I have avoided the word ‘integration’, but I’ve skirted around this issue many times.

We refer to the act of combining any parts of software as integration, to the extent that it’s almost a synonym for ‘synthesis’. (Curiously, the word comes from the Latin ‘integer’, meaning un- (in-) touched (from tango, to touch) and implies unity and atomicity, rather than acknowledging the analysis–synthesis cycle). Whether we leave it to a final phase or do it continuously, all non-trivial software needs to be integrated at some point.

A commonly attempted practice is Continuous Integration (CI). I find it interesting that most discussion of CI focuses on integrating code changes into a central codebase, when the practice also enables us to integrate behavioural changes into a complete system. Needless to say, I believe that the pursuit of CI is a key tool in a synthetic approach to software production.

Integration Tests

Ask n developers to define integration tests, and you’ll get n+1 answers. I try to avoid the term altogether, using more specific phrases to capture specific types of test.

I talk about adapter tests, where we test the (ideally very thin) parts of our code that interact with other components and systems. I see these as developer-facing tests, and consider them an important part of a developer’s toolkit.
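
To make this concrete, here is a hedged sketch of an adapter test in C#. Every name is hypothetical, and I’ve stood in for the real external service with a canned HTTP response; the point is that the adapter is a thin translation layer, and the test pins down how it maps the outside world’s representation onto our own:

using System;
using System.Net;
using System.Net.Http;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using FluentAssertions;
using Xunit;

// The adapter: an (ideally very thin) translation from the external
// service's representation into our own terms.
public sealed class CatalogueClient
{
    private readonly HttpClient _http;
    public CatalogueClient(HttpClient http) => _http = http;

    public async Task<bool> IsInStock(string sku)
    {
        var json = await _http.GetStringAsync($"/items/{sku}");
        return JsonDocument.Parse(json).RootElement.GetProperty("inStock").GetBoolean();
    }
}

public class CatalogueClientTests
{
    // Stands in for the external service with a canned response.
    private sealed class CannedHandler : HttpMessageHandler
    {
        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken) =>
            Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent("{\"sku\":\"ABC-123\",\"inStock\":true}")
            });
    }

    [Fact]
    public async Task Translates_the_external_payload_into_our_terms()
    {
        var client = new CatalogueClient(
            new HttpClient(new CannedHandler()) { BaseAddress = new Uri("https://example.test") });

        var inStock = await client.IsInStock("ABC-123");

        inStock.Should().BeTrue();
    }
}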

I’ve seen teams in an Analytic context omit adapter testing altogether, as these tests blur the boundary between an isolated, tightly specified system and the messy outside world. When this happens, a debt of uncertainty is incurred that must be paid off with interest during the integration phase.

I also talk about various types of cross-system and whole-system tests. These can be thought of as integrated tests, as they exercise an integrated system. This is the level at which we prove that the desired behaviour has been implemented, and these are often customer-facing tests, which form an important part of the team’s delivery.

It’s worth observing that integrated tests form part of both approaches, but that the tendency under the analyst doctrine is to do extensive manual testing during an integration phase, whereas under the synthesist doctrine we perform lightweight automated testing after every integration. J.B. Rainsberger argues that Integrated Tests are a Scam, and I have some sympathy for this point, so it’s worth noting that the extensive QA performed under the analyst doctrine fits Rainsberger’s critique much more closely than the lightweight whole-system testing of the synthesist doctrine.

London and Chicago

It’s interesting to notice in passing that these approaches appear to have parallels in the two styles of Test Driven Development (TDD). London-Style TDD characterises the behaviour of a system in broad brushes in its entirety, and then digs down into the details to write just enough code to implement that behaviour. Chicago-Style TDD (certainly when practised at scale) focuses on evolving well-characterised components, and then assembles them into whole systems. We can see that the London Style responds to Synthetic thinking, whilst the Chicago Style responds to Analytic thinking.

Waterfall versus Agile

The Analytic process I described above looks very much like the delivery pipeline of a Waterfall project. Indeed, the Poppendiecks in their book Lean Software Development characterise the traditional approach to software production as implementing an Analytic mindset, whilst they see a Synthetic mindset in the Lean and Agile ethos of Seeing the Whole, and building entire systems in rapid iterations.

It’s interesting to look at the projects that fall between these two approaches. As I mention above, many organisations maintain an Analytic approach, even though they attempt to introduce Agile concepts by delivering functionality in increments. The tensions created by this mixed approach can lead to more work for project managers as they attempt to coordinate teams’ priorities.

It’s Not All or Nothing

Having said all this, we must remind ourselves that Analysis and Synthesis are not in opposition to each other, but are two sides of the same coin. You cannot reassemble something that hasn’t been broken apart. Even the pure Analytic process includes a phase of synthesis, and even the most radically Synthetic project demands analysis of each increment. What is important is how these two concepts bias our decisions and working practices: whereas the Synthetic approach demands frequent small acts of analysis, the Analytic approach puts off synthesis to the final phases. It’s clear to me which approach I find safer.

SoCraTes BE 2019

Floréal La Roche

I’ve spent the last few days at the SoCraTes BE Unconference. Here is a brief report.

SoCraTes takes place at a holiday camp in a socialist realist château among the sheer wooded slopes of the Belgian Ardennes.

It’s an unconference, which means that rather than having a predetermined schedule, the participants apply the Open Space Principles to create each day’s schedule.

Here are some of the sessions I attended:

A group exercise creating a Wardley Map of a fictional shop. I’ve heard lots of people talking about Wardley Maps, but this was the first time I got to try them out.

A workshop practising a couple of Liberating Structures. I didn’t think I was familiar with Liberating Structures, but it turns out I’ve been using a few of them for a while! I particularly liked the Troika Consulting structure, and the problem we encountered enabled me to talk through a few techniques I’ve recently learnt from Goldratt’s Thinking Processes.

A nice group discussion on What makes a good stand-up? It turns out lots of us have encountered similar problems and found similar solutions. Hurray!

Talk like Sandi. We watched Sandi’s talk, Get a Whiff of This, and then had a group discussion about what makes her such a compelling speaker.

Code Smells quiz show. This was based on an activity I recently ran with my team, and as luck would have it, my friend Pedro, who wrote the source code for the exercise, was also at SoCraTes, so we ran this session together. It was really popular, and definitely worth repeating.

The Transport Tycoon exercise: modelling a delivery network. This gave us a good chance to compare Classic TDD and Domain Modelling techniques to understand this problem.

Making illegal state transitions impossible. A fairly involved look at modelling state machines in functional languages. We spent rather too long struggling with the language, but it was good to discuss the basic concepts.

Powerpoint Karaoke with each other’s presentations. This is always an entertaining evening activity. It’s usually played with random slide decks from the internet, but this time we challenged each other to improvise talks to presentations that other members of the group had once given.

I had a casual discussion with my friend Pedro Santos about how often personal and professional development is accompanied by pain (an idea that goes back to Aristotle: μετὰ λύπης γὰρ ἡ μάθησις, ‘learning is accompanied by pain’), and whether we can find ways to teach and coach that break this dynamic. The use of games seems to be one approach, as they can offer a safe context for failure.

An introduction to Aikido, out by the river under the hornbeams as the sun went down.

Quick fixes to stop ignoring your builds

A printout of a team dashboard on a team board

In software development, quick feedback is fundamental, and continuously building, testing and deploying is part of our standard toolkit.

But it isn’t enough to set up and run these processes; we need to pay attention to the results. If we can’t successfully build, test and deploy our software, then we have no idea whether we’re building it right, and we risk releasing broken components. It saddens me how often I see teams that have become completely inured to failures, and the result is a predictable decline in quality and drop in team effectiveness.

Here then are a few quick and easy techniques to start paying attention to these failures.


A taste of graph theory

Postcard of buildings by a river. Caption reads ‘Königsberg Schlossteich’

I’ve recently been working with graph databases, which give us a powerful idiom for modelling and reasoning with highly interconnected data. Before I share some of my experiences, I would like to set the scene with a basic introduction to graph theory.

Simple Graphs

Graph theory is a relatively young branch of mathematics, traced back to 1736 and the work of Leonhard Euler.

A simple graph is defined as a non-empty set of vertices, e.g. V = {1,2,3}, and a set of edges, each of which is a 2-member subset of the vertex set, e.g. E = {{1,2},{1,3},{2,3}}.

We can visualise a graph by drawing a diagram:

simple three-vertex graph drawn as a triangle

It’s tempting, particularly for the etymologically minded, to think of a graph as a drawing. It’s certainly easier to think about graphs by visualising them, but drawings, as representations of graphs, have the potential to be misleading, so we need to exercise caution.

For example, the same graph can be drawn like this:

simple three-vertex graph drawn with curved edges and four crossings

Notice how this diagram shows four crossings, but is in fact equal to the above graph, which shows none.

Because simple graphs are defined in terms of sets, we can note some key characteristics:

  • Each vertex only appears once. V = {1,1,2,3} is not a valid set because 1 = 1.
  • An edge cannot join a vertex to itself. {1,1} is not a valid set, so it cannot be a valid element in the edge set.
  • There can only be one edge between two particular vertices. E = {{1,2},{1,2},{2,3},{1,3}} is not a valid set because {1,2} = {1,2}.
  • The edges have no direction. E = {{1,2},{2,1},{2,3},{1,3}} is not a valid set because {1,2} = {2,1}.
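
These characteristics map neatly onto set-based data structures. Here is a minimal sketch in C# (the names are mine, not a library’s); normalising each edge to a (min, max) pair makes duplicated and reversed edges collapse into one element, just as they do in a set:

using System;
using System.Collections.Generic;

// A minimal sketch of a simple graph backed by sets, so the characteristics
// above fall out of the data structures themselves.
public class SimpleGraph
{
    private readonly HashSet<int> _vertices = new HashSet<int>();
    // Each undirected edge is stored as a (min, max) pair, so {1,2} and {2,1}
    // collapse to the same element, just as they would in a set.
    private readonly HashSet<(int, int)> _edges = new HashSet<(int, int)>();

    public void AddVertex(int v) => _vertices.Add(v); // duplicates are ignored

    public void AddEdge(int a, int b)
    {
        if (a == b)
            throw new ArgumentException("An edge cannot join a vertex to itself.");
        if (!_vertices.Contains(a) || !_vertices.Contains(b))
            throw new ArgumentException("Both vertices must be in the vertex set.");
        _edges.Add((Math.Min(a, b), Math.Max(a, b))); // duplicates are ignored
    }
}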

Useful variations

These restrictions are great for pure mathematics, but they limit our ability to model real-world situations. For this reason, a typical graph database relaxes the rules in various ways:

Digraphs

It introduces a notion of direction in the edges. This means we are now dealing with directed graphs or digraphs.

We can now model a graph V = {1,2,3}, E = {(1,2),(1,3),(2,3)}:

three-vertex digraph drawn as triangle

Multigraphs

We can allow skeins: multiple edges between vertices. If we replace any edge of a graph with a skein, then we have a multigraph, and our edge set becomes a multiset, as it may contain duplicated elements.

Here we expand the basic graph V = {1,2}, E = {{1,2}} by replacing the edge {1,2} with a skein of three edges, giving V = {1,2}, E = {{1,2},{1,2},{1,2}}:

two-vertex simple graph drawn as line

becomes

two-vertex multigraph with three edges

We can also model loops by allowing single-member sets as elements of E: V = {1}, E = {{1}}:

one-vertex loop

Quivers

We can also allow directed multigraphs, also known as multidigraphs or quivers.

Degrees of vertices

The degree of a vertex, deg v, is the number of edges attached to it. In a simple graph deg v = |{e : e ∈ E, v ∈ e}|. Visually you can find the degree of a vertex by counting the edges that connect to it.

The indegree of a vertex, deg⁻ v, is the number of edges reaching it, and the outdegree of a vertex, deg⁺ v, is the number of edges leaving it. If we model the directed edges as tuples, then deg⁻ v = |{e : e ∈ E, ∃x e = (x,v)}| and deg⁺ v = |{e : e ∈ E, ∃y e = (v,y)}|. Visually you can find the indegree of a vertex by counting the arrows that point to it, and the outdegree by counting the arrows that leave it.
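
These definitions translate almost directly into code. A sketch, treating a simple graph’s edges as the normalised pairs from earlier, and a digraph’s edges as (from, to) tuples:

using System.Collections.Generic;
using System.Linq;

public static class Degrees
{
    // deg v: the edges of a simple graph that contain v.
    public static int Degree(int v, IEnumerable<(int, int)> edges) =>
        edges.Count(e => e.Item1 == v || e.Item2 == v);

    // deg⁻ v: the directed edges that point to v.
    public static int InDegree(int v, IEnumerable<(int from, int to)> edges) =>
        edges.Count(e => e.to == v);

    // deg⁺ v: the directed edges that leave v.
    public static int OutDegree(int v, IEnumerable<(int from, int to)> edges) =>
        edges.Count(e => e.from == v);
}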

Walks

The power of graphs to model connected data arises when we start walking our graphs.

Mathematically, a walk is a sequence of vertices (v₁, v₂, … vₙ₋₁, vₙ) where each vertex vᵢ is a member of the graph’s vertex set, and each pair of consecutive vertices {vᵢ, vᵢ₊₁} is a member of the graph’s edge set.

Visually, a walk is found by placing a pencil on one of the dots on a graph diagram, and tracing along a line to another dot, then repeating the process.

When we come to look at graph databases, we will focus on ‘traversing’ them by walking along their edges.
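
The definition also translates directly into a check. A sketch, reusing the normalised (min, max) edge representation from the SimpleGraph sketch above:

using System;
using System.Collections.Generic;
using System.Linq;

public static class GraphWalks
{
    // Is this sequence of vertices a walk in a simple graph whose edges
    // are stored as normalised (min, max) pairs?
    public static bool IsWalk(
        IReadOnlyList<int> sequence, ISet<int> vertices, ISet<(int, int)> edges)
    {
        if (sequence.Count == 0 || !sequence.All(vertices.Contains))
            return false;

        for (var i = 0; i < sequence.Count - 1; i++)
        {
            var (a, b) = (sequence[i], sequence[i + 1]);
            if (!edges.Contains((Math.Min(a, b), Math.Max(a, b))))
                return false;
        }
        return true;
    }
}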

Adjacency

As well as a diagram, we can represent a graph with an adjacency matrix. Consider the graph V = {1,2,3,4}, E = {{1,2},{1,3},{1,4},{2,4},{3,4}}. We can draw this graph like this:

four-vertex simple graph drawn as a square with one diagonal edge

We can also create an adjacency matrix where Aij is 1 if {i,j} ∈ E and 0 otherwise.

    ⎛0 1 1 1⎞
A = ⎜1 0 0 1⎟
    ⎜1 0 0 1⎟
    ⎝1 1 1 0⎠

The top row shows how many edges there are between 1 and each vertex: none to itself, one to 2, one to 3 and one to 4.

We can easily find deg v by taking the sum of the corresponding row or column. We can see at a glance that deg 1 = 3.

By multiplying an adjacency matrix by itself, we can find how many two-edge walks exist between any two vertices:

     ⎛0 1 1 1⎞   ⎛0 1 1 1⎞   ⎛3 1 1 2⎞
A² = ⎜1 0 0 1⎟ x ⎜1 0 0 1⎟ = ⎜1 2 2 1⎟
     ⎜1 0 0 1⎟   ⎜1 0 0 1⎟   ⎜1 2 2 1⎟
     ⎝1 1 1 0⎠   ⎝1 1 1 0⎠   ⎝2 1 1 3⎠

We can check that there are three two-edge walks between 1 and 1 {(1,2,1),(1,3,1),(1,4,1)}, one between 1 and 2 {(1,4,2)}, one between 1 and 3 {(1,4,3)} and two between 1 and 4 {(1,2,4),(1,3,4)}.

We can continue this trick for three-edge walks:

     ⎛0 1 1 1⎞   ⎛0 1 1 1⎞   ⎛0 1 1 1⎞   ⎛4 5 5 5⎞
A³ = ⎜1 0 0 1⎟ x ⎜1 0 0 1⎟ x ⎜1 0 0 1⎟ = ⎜5 2 2 5⎟
     ⎜1 0 0 1⎟   ⎜1 0 0 1⎟   ⎜1 0 0 1⎟   ⎜5 2 2 5⎟
     ⎝1 1 1 0⎠   ⎝1 1 1 0⎠   ⎝1 1 1 0⎠   ⎝5 5 5 4⎠

Again we can check that there are four three-edge walks between 1 and 1: {(1,2,4,1),(1,3,4,1),(1,4,2,1),(1,4,3,1)}, five between 1 and 2: {(1,2,1,2),(1,2,4,2),(1,3,1,2),(1,3,4,2),(1,4,1,2)}, five between 1 and 3: {(1,2,1,3),(1,2,4,3),(1,3,1,3),(1,3,4,3),(1,4,1,3)} and five between 1 and 4: {(1,2,1,4),(1,3,1,4),(1,4,1,4),(1,4,2,4),(1,4,3,4)}.

In general the matrix Aⁿ shows us how many n-edge walks there are between each pair of vertices.
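
This trick is mechanical enough to automate. A sketch of the matrix arithmetic (no library assumed), using the worked example above as a check:

using System;

public static class AdjacencyMatrices
{
    public static long[,] Multiply(long[,] a, long[,] b)
    {
        var n = a.GetLength(0);
        var result = new long[n, n];
        for (var i = 0; i < n; i++)
            for (var j = 0; j < n; j++)
                for (var k = 0; k < n; k++)
                    result[i, j] += a[i, k] * b[k, j];
        return result;
    }

    // Aⁿ: the number of n-edge walks between each pair of vertices.
    public static long[,] Power(long[,] a, int n)
    {
        var result = a;
        for (var i = 1; i < n; i++)
            result = Multiply(result, a);
        return result;
    }

    public static void Main()
    {
        long[,] a =
        {
            { 0, 1, 1, 1 },
            { 1, 0, 0, 1 },
            { 1, 0, 0, 1 },
            { 1, 1, 1, 0 },
        };
        // Indices are zero-based, so [0, 1] is the pair of vertices (1, 2).
        Console.WriteLine(Power(a, 3)[0, 1]); // prints 5, as in A³ above
    }
}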

We can perform the same trick for multigraphs and digraphs:

Here is the quiver V = {1,2,3,4}, E = {(1,2),(1,3),(1,3),(1,4),(2,4),(2,4),(3,4),(4,4)}:

four-vertex quiver drawn roughly as a square

Here is its adjacency matrix:

    ⎛0 1 2 1⎞
A = ⎜0 0 0 2⎟
    ⎜0 0 0 1⎟
    ⎝0 0 0 1⎠

Here are the next two n-edge walk matrices:

     ⎛0 0 0 5⎞
A² = ⎜0 0 0 2⎟
     ⎜0 0 0 1⎟
     ⎝0 0 0 1⎠
     ⎛0 0 0 5⎞
A³ = ⎜0 0 0 2⎟
     ⎜0 0 0 1⎟
     ⎝0 0 0 1⎠

We can also add these matrices:

              ⎛0 1 2 11⎞
A + A² + A³ = ⎜0 0 0  6⎟
              ⎜0 0 0  3⎟
              ⎝0 0 0  3⎠

This matrix tells us that from 1 to 4 there are eleven walks of no more than three edges.

Adjacency matrices can give us a useful way to reason about graphs without having to traverse every walk.

Conclusion

These concepts give us the basic tools for working with graph databases. In future posts I will look at how we can put them to work to model domains.

Creativity in Software Development

I shared yesterday’s post with some friends, who were keen to explore what we mean when we talk about creativity in software development.

Alastair made an interesting comment:

…it made me reconsider software dev as a creative endeavour, but I think I came to the conclusion that it is. For me, I think there is a gap between a creative art like writing, especially one which has an expressive mirror like acting, and a purely creative activity like, e.g., whittling a stick or constructing a building.

I think there is value in disentangling our concepts of creativity, and I find Alastair’s distinction between the creative arts and simpler forms of creation very useful.

There’s also an ambiguity in the word ‘create’, as it can refer simply to making things, as well as to the creative endeavours we would like to characterise.

So rather than ask ‘Is software development a creative activity?’, I tend to consider a narrower question: ‘Is there a place for creative thinking in software development?’

At the most basic level, I see creative thinking as making new links between concepts. Once you have made the link, you can engage other thought processes, for example deductive thinking, to explore the consequences and implications of that link.

But because the link isn’t already there, you can’t find it by rational thought; you need a leap of imagination to reach it.

There are some sorts of problem that I can tackle best once I’ve slept. On a few lucky occasions I’ve been able to take an afternoon nap, and woken up with a new idea to investigate, but this usually means taking the idea home with me and letting it brew overnight.

Here are a few examples of problems in software development that can be tackled with creative thinking:

  • How should we name this element?
  • What is the appropriate metaphor for this system?
  • Has a similar problem already been solved? Is there a pattern we can apply here?
  • What test should we write first? What test should we write next?
  • What is the best way to split this system into smaller parts?

And of course, because software development in an organisation is a social activity, the need for creative thinking extends far beyond the design of the software.

Test code needn’t be defensive

In a code review I encountered some test code that looked a bit like this:

var result = await _controller.Resources() as ViewResult;
result.Should().NotBeNull();
// ReSharper disable once PossibleNullReferenceException
result.Model.Should().BlahBlahBlah();

This is a typical defensive coding pattern whose reasoning goes like this:

  • The return type of _controller.Resources() is Task<ActionResult>.
  • I need to cast the inner Result of this Task to a ViewResult, as I want to inspect its Model attribute.
  • But the Result could be a different subclass of ActionResult, so I had better use a safe cast, just in case.
  • As I’m using a safe cast, I can’t guarantee that I’ll get any instance back, so I had better do a null check.
  • Oh look! ReSharper is complaining when I try to access properties of this object. As I’ve already performed a null check, I’ll turn off the warnings with a comment.

Now, defensive coding styles are valuable when we don’t know what data we’ll be handling, but this is most likely to happen at the boundaries of a system, where it interacts with other systems or, even more importantly, humans.

But in the context of a unit test, things are different:

  • We are in control of both sides of the contract: the test and class under test have an intimate and interdependent existence. A different type of response would be unexpected and invalid.
  • An attempt to directly cast to an invalid type will throw a runtime error, and a runtime error is a meaningful event within a test. If _controller.Resources() returns any other subclass of ActionResult, then the fact that it cannot be cast to ViewResult is the very information I want to receive, as it tells me how my code is defective.

This means I can rewrite the code like this:

 
var result = (ViewResult) await _controller.Resources();
result.Model.Should().BlahBlahBlah();

By setting aside the defensive idiom, I’ve made the test clearer and more precise, without losing any of its value.