A taste of graph theory

Postcard of buildings by a river. Caption reads ‘Königsberg Schlossteich’

I’ve recently been working with graph databases, which give us a powerful idiom for modelling and reasoning with highly interconnected data. Before I share some of my experiences, I would like to set the scene with a basic introduction to graph theory.

Simple Graphs

Graph theory is a relatively young branch of mathematics, traced back to 1736, when Leonhard Euler studied the Seven Bridges of Königsberg problem.

A simple graph is defined as a non-empty set of vertices, e.g. V = {1,2,3}, and a set of edges, each of which is a 2-member subset of the vertex set, e.g. E = {{1,2},{1,3},{2,3}}.

We can visualise a graph by drawing a diagram:

simple three-vertex graph drawn as a triangle

It’s tempting, particularly for the etymologically minded, to think of a graph as a drawing. It’s certainly easier to think about graphs by visualising them, but drawings, as representations of graphs, have the potential to be misleading, so we need to exercise caution.

For example, the same graph can be drawn like this:

simple three-vertex graph drawn with curved edges and four crossings

Notice how this diagram shows four crossings, but it is in fact the same graph as the one above, which shows none.

Because simple graphs are defined in terms of sets, we can note some key characteristics:

  • Each vertex only appears once. V = {1,1,2,3} is not a valid set because 1 = 1.
  • An edge cannot join a vertex to itself. {1,1} is not a valid set, so it cannot be a valid element in the edge set.
  • There can only be one edge between two particular vertices. E = {{1,2},{1,2},{2,3},{1,3}} is not a valid set because {1,2} = {1,2}.
  • The edges have no direction. E = {{1,2},{2,1},{2,3},{1,3}} is not a valid set because {1,2} = {2,1}.
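We can see these set semantics at work in code. Here is a quick C# sketch (my own illustration, not part of the formal definition), using HashSet to show how duplicates and orderings collapse:

using System;
using System.Collections.Generic;

// Sets discard duplicates, so a vertex can only appear once.
var vertices = new HashSet<int> { 1, 1, 2, 3 };
Console.WriteLine(vertices.Count); // 3, not 4

// With a set comparer, {1,2} and {2,1} count as the same edge,
// so duplicated and reversed edges collapse into one element.
var edges = new HashSet<HashSet<int>>(HashSet<int>.CreateSetComparer())
{
    new HashSet<int> { 1, 2 },
    new HashSet<int> { 2, 1 },
    new HashSet<int> { 1, 3 },
    new HashSet<int> { 2, 3 }
};
Console.WriteLine(edges.Count); // 3, not 4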

Useful variations

These restrictions are great for pure mathematics, but they limit our ability to model real-world situations. For this reason, a typical graph database relaxes the rules in various ways:

Digraphs

We can introduce a notion of direction to the edges. This means we are now dealing with directed graphs, or digraphs.

We can now model a graph V = {1,2,3}, E = {(1,2),(1,3),(2,3)}:

three-vertex digraph drawn as triangle

Multigraphs

We can allow skeins: multiple edges between vertices. If we replace any edge of a graph with a skein, then we have a multigraph, and our edge set becomes a multiset, as it may contain duplicated elements.

Here we expand the basic graph V = {1,2}, E = {{1,2}} by replacing the edge {1,2} with a skein of three edges, giving V = {1,2}, E = {{1,2},{1,2},{1,2}}:

two-vertex simple graph drawn as line

becomes

two-vertex multigraph with three edges

We can also model loops by allowing single-member sets as elements of E. For example, V = {1}, E = {{1}}:

one-vertex loop

Quivers

We can also allow directed multigraphs, also known as multidigraphs or quivers.

Degrees of vertices

The degree of a vertex, deg v, is the number of edges attached to it. In a simple graph deg v = |{e : e ∈ E, v ∈ e}|. Visually you can find the degree of a vertex by counting the edges that connect to it.

The indegree of a vertex, deg– v, is the number of edges reaching it, and the outdegree of a vertex, deg+ v, is the number of edges leaving it. If we model the directed edges as tuples, then deg– v = |{e : e ∈ E, ∃x e = (x,v)}| and deg+ v = |{e : e ∈ E, ∃y e = (v,y)}|. Visually you can find the indegree of a vertex by counting the arrows that point to it, and the outdegree by counting the arrows that leave it.
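If we model the edges as tuples in code, counting degrees is a one-liner. Here is a small C# sketch (my own illustration):

using System;
using System.Linq;

// Directed edges modelled as (from, to) tuples.
var edges = new[] { (1, 2), (1, 3), (2, 3) };
var v = 1;

var outDegree = edges.Count(e => e.Item1 == v); // arrows leaving v
var inDegree = edges.Count(e => e.Item2 == v);  // arrows pointing to v

Console.WriteLine($"deg+ {v} = {outDegree}"); // deg+ 1 = 2
Console.WriteLine($"deg- {v} = {inDegree}");  // deg- 1 = 0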

Walks

The power of graphs to model connected data arises when we start walking our graphs.

Mathematically, a walk is a sequence of vertices (v1, v2, … vn-1, vn) where each vertex vx is a member of the graph’s vertex set, and each pair of consecutive vertices {vx, vx+1} is a member of the graph’s edge set.

Visually, a walk is found by placing a pencil on one of the dots on a graph diagram, and tracing along a line to another dot, then repeating the process.
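In code, we can check whether a sequence of vertices forms a walk by testing each consecutive pair against the edge set. Here is a minimal C# sketch (my own, assuming tuple-based edges for an undirected graph):

using System;
using System.Collections.Generic;
using System.Linq;

var edges = new HashSet<(int, int)> { (1, 2), (1, 3), (2, 3) };

// A sequence is a walk if every consecutive pair of vertices is an edge;
// in an undirected graph we check both orderings of each pair.
bool IsWalk(IList<int> vertices) =>
    vertices.Zip(vertices.Skip(1), (a, b) => (a, b))
            .All(p => edges.Contains(p) || edges.Contains((p.Item2, p.Item1)));

Console.WriteLine(IsWalk(new[] { 1, 2, 3, 1 })); // True
Console.WriteLine(IsWalk(new[] { 1, 2, 2 }));    // False: no edge {2,2}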

When we come to look at graph databases, we will focus on ‘traversing’ them by walking along their edges.

Adjacency

As well as a diagram, we can represent a graph with an adjacency matrix. Consider the graph V = {1,2,3,4}, E = {{1,2},{1,3},{1,4},{2,4},{3,4}}. We can draw this graph like this:

four-vertex simple graph drawn as a square with one diagonal edge

We can also create an adjacency matrix where Aij is 1 if {i,j} ∈ E and 0 otherwise.

    ⎛0 1 1 1⎞
A = ⎜1 0 0 1⎟
    ⎜1 0 0 1⎟
    ⎝1 1 1 0⎠

The top row shows how many edges there are between 1 and each vertex: none to itself, one to 2, one to 3 and one to 4.

We can easily find deg v by taking the sum of the corresponding row or column. We can see at a glance that deg 1 = 3.

By multiplying an adjacency matrix by itself, we can find how many two-edge walks exist between any two vertices:

     ⎛0 1 1 1⎞   ⎛0 1 1 1⎞   ⎛3 1 1 2⎞
A² = ⎜1 0 0 1⎟ x ⎜1 0 0 1⎟ = ⎜1 2 2 1⎟
     ⎜1 0 0 1⎟   ⎜1 0 0 1⎟   ⎜1 2 2 1⎟
     ⎝1 1 1 0⎠   ⎝1 1 1 0⎠   ⎝2 1 1 3⎠

We can check that there are three two-edge walks between 1 and 1 {(1,2,1),(1,3,1),(1,4,1)}, one between 1 and 2 {(1,4,2)}, one between 1 and 3 {(1,4,3)} and two between 1 and 4 {(1,2,4),(1,3,4)}.

We can continue this trick for three-edge walks:

     ⎛0 1 1 1⎞   ⎛0 1 1 1⎞   ⎛0 1 1 1⎞   ⎛4 5 5 5⎞
A³ = ⎜1 0 0 1⎟ x ⎜1 0 0 1⎟ x ⎜1 0 0 1⎟ = ⎜5 2 2 5⎟
     ⎜1 0 0 1⎟   ⎜1 0 0 1⎟   ⎜1 0 0 1⎟   ⎜5 2 2 5⎟
     ⎝1 1 1 0⎠   ⎝1 1 1 0⎠   ⎝1 1 1 0⎠   ⎝5 5 5 4⎠

Again we can check that there are four three-edge walks between 1 and 1: {(1,2,4,1),(1,3,4,1),(1,4,2,1),(1,4,3,1)}, five between 1 and 2: {(1,2,1,2),(1,2,4,2),(1,3,1,2),(1,3,4,2),(1,4,1,2)}, five between 1 and 3: {(1,2,1,3),(1,2,4,3),(1,3,1,3),(1,3,4,3),(1,4,1,3)} and five between 1 and 4: {(1,2,1,4),(1,3,1,4),(1,4,1,4),(1,4,2,4),(1,4,3,4)}.

In general the matrix Aⁿ shows us how many n-edge walks there are between each pair of vertices.
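We can reproduce this computation with a naive matrix multiplication. Here is a short C# sketch (mine, not from the original post):

using System;

// Naive matrix multiplication: the (i,j) entry of the product
// is the sum over k of left[i,k] * right[k,j].
int[,] Multiply(int[,] left, int[,] right)
{
    var n = left.GetLength(0);
    var product = new int[n, n];
    for (var i = 0; i < n; i++)
        for (var j = 0; j < n; j++)
            for (var k = 0; k < n; k++)
                product[i, j] += left[i, k] * right[k, j];
    return product;
}

var a = new[,] { { 0, 1, 1, 1 }, { 1, 0, 0, 1 }, { 1, 0, 0, 1 }, { 1, 1, 1, 0 } };
var a2 = Multiply(a, a);  // two-edge walks
var a3 = Multiply(a2, a); // three-edge walks

Console.WriteLine(a2[0, 0]); // 3 two-edge walks from vertex 1 back to itself
Console.WriteLine(a3[0, 1]); // 5 three-edge walks between vertices 1 and 2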

We can perform the same trick for multigraphs and digraphs:

Here is the quiver V = {1,2,3,4}, E = {(1,2),(1,3),(1,3),(1,4),(2,4),(2,4),(3,4),(4,4)}:

four-vertex quiver drawn roughly as a square

Here is its adjacency matrix:

    ⎛0 1 2 1⎞
A = ⎜0 0 0 2⎟
    ⎜0 0 0 1⎟
    ⎝0 0 0 1⎠

Here are the next two n-edge walk matrices:

     ⎛0 0 0 5⎞
A² = ⎜0 0 0 2⎟
     ⎜0 0 0 1⎟
     ⎝0 0 0 1⎠

     ⎛0 0 0 5⎞
A³ = ⎜0 0 0 2⎟
     ⎜0 0 0 1⎟
     ⎝0 0 0 1⎠

We can also add these matrices:

              ⎛0 1 2 11⎞
A + A² + A³ = ⎜0 0 0  6⎟
              ⎜0 0 0  3⎟
              ⎝0 0 0  3⎠

This matrix tells us that from 1 to 4 there are eleven walks of no more than three edges.

Adjacency matrices can give us a useful way to reason about graphs without having to traverse every walk.

Conclusion

These concepts give us the basic tools for working with graph databases. In future posts I will look at how we can put them to work to model domains.

Creativity in Software Development

I shared yesterday’s post with some friends, who were keen to explore what we mean when we talk about creativity in software development.

Alastair made an interesting comment:

…it made me reconsider software dev as a creative endeavour, but I think I came to the conclusion that it is. For me, I think there is a gap between a creative art like writing, especially one which has an expressive mirror like acting, and a purely creative activity like, e.g., whittling a stick or constructing a building.

I think there is value in disentangling our concepts of creativity, and I find Alastair’s distinction between the creative arts and simpler forms of creation very useful.

There’s also an ambiguity in the word ‘create’, as it can refer simply to making things, as well as to the creative endeavours we would like to characterise.

So rather than ask ‘Is software development a creative activity?’, I tend to consider a narrower question: ‘Is there a place for creative thinking in software development?’

At the most basic level, I see creative thinking as making new links between concepts. Once you have made the link, you can engage other thought processes, for example deductive thinking, to explore the consequences and implications of that link.

But because the link isn’t already there, you can’t find it by rational thought; you need a leap of imagination to reach it.

There are some sorts of problem that I can tackle best once I’ve slept. On a few lucky occasions I’ve been able to take an afternoon nap, and woken up with a new idea to investigate, but this usually means taking the idea home with me and letting it brew overnight.

Here are a few examples of problems in software development that can be tackled with creative thinking:

  • How should we name this element?
  • What is the appropriate metaphor for this system?
  • Has a similar problem already been solved? Is there a pattern we can apply here?
  • What test should we write first? What test should we write next?
  • What is the best way to split this system into smaller parts?

And of course, because software development in an organisation is a social activity, the need for creative thinking extends far beyond the design of the software.

Test code needn’t be defensive

In a code review I encountered some test code that looked a bit like this:

 var result = await _controller.Resources() as ViewResult;
 result.Should().NotBeNull();
 // ReSharper disable once PossibleNullReferenceException
 result.Model.Should().BlahBlahBlah();
 

This is a typical defensive coding pattern whose reasoning goes like this:

  • The return type of _controller.Resources() is Task<ActionResult>.
  • I need to cast the inner Result of this Task to a ViewResult, as I want to inspect its Model attribute.
  • But the Result could be a different subclass of ActionResult, so I had better use a safe cast, just in case.
  • As I’m using a safe cast, I can’t guarantee that I’ll get any instance back, so I had better do a null check.
  • Oh look! ReSharper is complaining when I try to access properties of this object. As I’ve already performed a null check, I’ll turn off the warnings with a comment.

Now, defensive coding styles are valuable when we don’t know what data we’ll be handling, but this is most likely to happen at the boundaries of a system, where it interacts with other systems or, even more importantly, humans.

But in the context of a unit test, things are different:

  • We are in control of both sides of the contract: the test and class under test have an intimate and interdependent existence. A different type of response would be unexpected and invalid.
  • An attempt to directly cast to an invalid type will throw a runtime error, and a runtime error is a meaningful event within a test. If _controller.Resources() returns any other subclass of ActionResult, then the fact that it cannot be cast to ViewResult is the very information I want to receive, as it tells me how my code is defective.

This means I can rewrite the code like this:

var result = (ViewResult) await _controller.Resources();
result.Model.Should().BlahBlahBlah();

By setting aside the defensive idiom, I’ve made the test clearer and more precise, without losing any of its value.

How applying Theory of Constraints helped us optimise our code

The neck of a bottle of prosecco in front of a fire.

My team have been working on improving the performance of our API, and identified a database call as the cause of some problems.

The team suggested three ways to tackle this problem:

  • Scale up the database till it can meet our requirements.
  • Introduce some light-weight caching in the application to reduce load on the database.
  • Examine the query plan for this database call to find out whether the query can be optimised.

Which of these should we attempt first? There was some intense discussion about this, with arguments made in favour of each approach. What we needed was a simple framework for making decisions about how to improve our system.

This is where the Theory of Constraints (ToC) can help. Originally expounded as a paradigm for improving manufacturing systems, ToC is really useful in software engineering, both when managing projects and when improving the performance of the systems we create.

Theory of Constraints

The preliminary step in applying ToC is to identify the Goal of your system. In the case of this API, the Goal is to supply accurate data to consumers.

Now we understand the Goal of the system, we can define the Throughput of the system as the rate at which it can deliver units of that goal, in our case API responses. We can also define the Operating Expenses of the system (the cost of servers) and its Inventory (requests waiting for responses).

The next step is to identify the Constraint of the system. This is the element in the system that dictates the system’s Throughput. In a physical system, a useful heuristic is a build-up of Inventory in front of this element. In our API, our monitoring helped us pinpoint the bottleneck.

The next three steps give us a sequence of approaches for tackling the Constraint:

  • First, Exploit the Constraint by finding local changes you can make to improve its performance.
  • Second, Subordinate the rest of the system to the Constraint by finding ways to reduce pressure on it so it can perform more smoothly.
  • Third, Elevate the Constraint by increasing the resources available to it, committing to additional Operating Expenses if necessary.

Exploitation comes first because it’s quick, cheap and local. To Subordinate you need to consider the effects on the rest of the system, but there shouldn’t be significant costs involved. Elevating the Constraint may well cost a fair amount, so it comes last on the list.

Once you have applied these steps you will either find that the Constraint has moved elsewhere (you’ve ‘broken’ the original Constraint), or it has remained in place. In either case, you should repeat the steps as part of a culture of continuous improvement. Eventually you want to see the constraint move outside your system and become a matter of consumer demand.

Applying ToC to our question

If we look at the team’s three suggestions, we can see that each corresponds to one of these techniques:

  • Scaling up the database is Elevation: there’s a clear financial cost in using larger servers.
  • Introducing caching is Subordination: we’re changing the rest of the system to reduce pressure on the Constraint, and need to consider questions such as cache invalidation before we make this change.
  • Optimising the query is Exploitation: we’re making local changes to the Constraint to improve its performance.

Applying ToC tells us which of these approaches to consider first, namely optimising the query. We can look at caching if an optimised query is still not sufficient, and scaling should be a last resort.

In our case, query optimisation was sufficient. We managed to meet our performance target without introducing additional complexity to the system or incurring further cost.

Further Reading

Goldratt, Eliyahu M. and Jeff Cox. The Goal: A Process of Ongoing Improvement. Great Barrington, MA: North River Press.

Concrete Inheritance and the Dependency Inversion Principle

Collaboration and DIP

We know that this code violates the Dependency Inversion Principle (DIP):

public class Atm
{
    private readonly InMemoryTransactions _inMemoryTransactions;

    public Atm()
    {
        _inMemoryTransactions = new InMemoryTransactions(new List<Transaction>());
    }

    public void Deposit(int amount)
    {
        _inMemoryTransactions.Deposit(amount);
    }
}

In particular, line 3 violates the principle by creating a dependency on a specific implementation, and line 7 violates it by knowing how to construct that dependency.

We can rewrite this code to apply the DIP:

public class Atm
{
    private Transactions _transactions;

    public Atm(Transactions transactions)
    {
        _transactions = transactions;
    }

    public void Deposit(int amount)
    {
        _transactions.Deposit(amount);
    }
}

Line 3 now depends on an abstraction, and we use constructor injection to supply this dependency.
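For completeness, the examples in this post assume supporting types along the following lines (my sketch; the original post doesn’t show them):

using System.Collections.Generic;

// Assumed for illustration: a minimal transaction record.
public class Transaction
{
    public int Amount { get; }
    public Transaction(int amount) { Amount = amount; }
}

// The abstraction that Atm now depends on.
public interface Transactions
{
    void Deposit(int amount);
}

// A concrete implementation; Deposit is virtual so that the
// inheritance example below can override it.
public class InMemoryTransactions : Transactions
{
    private readonly List<Transaction> _transactions;

    public InMemoryTransactions(List<Transaction> initialTransactions)
    {
        _transactions = initialTransactions;
    }

    public virtual void Deposit(int amount)
    {
        _transactions.Add(new Transaction(amount));
    }
}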

Inheritance and DIP

So how about this code:

public class LoggingInMemoryTransactions : InMemoryTransactions
{
    private readonly Logger _logger;

    public LoggingInMemoryTransactions(List<Transaction> initialTransactions, Logger logger) : base(initialTransactions)
    {
        _logger = logger;
    }

    public override void Deposit(int amount)
    {
        _logger.Log($"Deposit {amount}");
        base.Deposit(amount);
    }
}

Well, this class depends on a Logger abstraction, which is injected in the constructor, so that dependency seems to have been satisfactorily inverted.

However, in line 1 we can see that this class inherits from InMemoryTransactions. This inheritance is also a dependency, and we’re inheriting from a concrete class, not an abstraction.

If we then look at the constructor on line 5, we can see that it calls the base constructor; in other words, it has to know how its base class is instantiated.

There is a direct parallel between these two cases: each of them has a dependency on a specific concrete class, which is instantiated in a specific way.

We can rewrite this code to use composition rather than inheritance:

public class LoggingInMemoryTransactions : Transactions
{
    private readonly Logger _logger;
    private readonly Transactions _transactions;

    public LoggingInMemoryTransactions(Logger logger, Transactions transactions)
    {
        _logger = logger;
        _transactions = transactions;
    }

    public void Deposit(int amount)
    {
        _logger.Log($"Deposit {amount}");
        _transactions.Deposit(amount);
    }
}

This simple example of the Decorator pattern behaves in exactly the same way as the previous implementation, but is now only dependent on abstractions.

Unit Testing

If we try to write unit tests around our code, we will soon see the benefit of DIP.

If we try to test our first implementation of Atm, we find that our test boundary includes InMemoryTransactions, as that class is a hard-wired dependency. We can’t test the behaviour of Atm without also testing the behaviour of InMemoryTransactions. If we also have tests of InMemoryTransactions, then we may end up with duplicated test scenarios.

In this case we have an additional problem: we have only defined write access to the data, so we have no way of testing this class without violating encapsulation:

using NUnit.Framework;

[TestFixture]
public class AtmShould
{
    [Test]
    public void Perform_deposit_transaction()
    {
        const int amount = 100;
        var atm = new Atm();

        atm.Deposit(amount);

        // HELP! WHAT CAN I ASSERT AGAINST?
    }
}

In the second implementation, we can write collaboration tests between Atm and Transactions, bringing our test boundary in much closer, and restricting the behaviour under test to that of Atm.

using NSubstitute;
using NUnit.Framework;

[TestFixture]
public class AtmShould
{
    [Test]
    public void Perform_deposit_transaction()
    {
        const int amount = 100;
        var transactions = Substitute.For<Transactions>();
        var atm = new Atm(transactions);

        atm.Deposit(amount);

        transactions.Received().Deposit(amount);
    }
}

Similarly, to test the first implementation of LoggingInMemoryTransactions, we also have to test the behaviour it inherits from InMemoryTransactions, as one is inseparable from the other. Again, if we also have tests of the base class, then we may end up with duplicated test scenarios.

using System.Collections.Generic;
using NSubstitute;
using NUnit.Framework;

[TestFixture]
public class LoggingInMemoryTransactionsShould
{
    private LoggingInMemoryTransactions _loggingInMemoryTransactions;
    private Logger _logger;

    private const int Amount = 100;

    [SetUp]
    public void SetUp()
    {
        _logger = Substitute.For<Logger>();
        _loggingInMemoryTransactions = new LoggingInMemoryTransactions(new List<Transaction>(), _logger);
    }

    [Test]
    public void Log_transaction()
    {
        _loggingInMemoryTransactions.Deposit(Amount);

        _logger.Received().Log($"Deposit {Amount}");
    }

    [Test]
    public void Store_deposit()
    {
        _loggingInMemoryTransactions.Deposit(Amount);

        // HELP! HOW CAN I CHECK THIS WAS SAVED?
        // I might be tempted to assert against the new List<Transaction>(), but this would leak implementation details.
    }
}

In the second implementation, the specific behaviour of LoggingInMemoryTransactions is separated, and we can write collaboration tests independently of the behaviour of the inner implementation.

using NSubstitute;
using NUnit.Framework;

[TestFixture]
public class LoggingInMemoryTransactionsShould
{
    private LoggingInMemoryTransactions _loggingInMemoryTransactions;
    private Logger _logger;
    private Transactions _transactions;

    private const int Amount = 100;

    [SetUp]
    public void SetUp()
    {
        _logger = Substitute.For<Logger>();
        _transactions = Substitute.For<Transactions>();
        _loggingInMemoryTransactions = new LoggingInMemoryTransactions(_logger, _transactions);
    }

    [Test]
    public void Log_transaction()
    {
        _loggingInMemoryTransactions.Deposit(Amount);

        _logger.Received().Log($"Deposit {Amount}");
    }

    [Test]
    public void Store_deposit()
    {
        _loggingInMemoryTransactions.Deposit(Amount);

        _transactions.Received().Deposit(Amount);
    }
}


Updated 26 May 2016: I’ve given code examples for the unit tests we might write in each case, and have removed DateTime createdOn from the example to avoid confusing the issue.

Updated 26 September 2017: I’ve removed a reference to an individual whose political views are not consistent with the spirit of this blog.

A strange behaviour in ReSharper’s Move Instance Method refactoring

My colleague Pedro and I were puzzling over some bizarre behaviour in ReSharper’s Move Instance Method refactoring. This is a fairly complex move, and it makes various changes to the source and destination classes; however, in many cases it generates code that does not compile, and seems to include wrongly qualified references.

Here is an example drawn from Martin Fowler’s Refactoring: Improving the Design of Existing Code:

Employee class before refactoring:

using System;

namespace RefactoringExamples.ReplaceConditionalWithPolymorphism
{
    public class Employee
    {
        private EmployeeType _employeeType;
        private readonly int _monthlySalary;
        private readonly int _commission;
        private readonly int _bonus;

        public Employee(int type)
        {
            Type = type;
            _monthlySalary = 100;
            _commission = 10;
            _bonus = 20;
        }

        public int Type
        {
            get { return _employeeType.TypeCode; }
            set { _employeeType = EmployeeType.TypeFrom(value); }
        }

        public int PayAmount()
        {
            return Pay();
        }

        private int Pay()
        {
            switch (_employeeType.TypeCode)
            {
                case EmployeeType.Engineer:
                    return _monthlySalary;
                case EmployeeType.Salesperson:
                    return _monthlySalary + _commission;
                case EmployeeType.Manager:
                    return _monthlySalary + _bonus;
                default:
                    throw new Exception("Incorrect Employee");
            }
        }
    }
}

EmployeeType class before refactoring:

using System;

namespace RefactoringExamples.ReplaceConditionalWithPolymorphism
{
    public abstract class EmployeeType
    {
        public abstract int TypeCode { get; }

        public static EmployeeType TypeFrom(int value)
        {
            switch (value)
            {
                case Engineer:
                    return new Engineer();
                case Salesperson:
                    return new Salesperson();
                case Manager:
                    return new Manager();
                default:
                    throw new Exception("Incorrect Employee Code");
            }
        }

        public const int Engineer = 0;
        public const int Salesperson = 1;
        public const int Manager = 2;
    }

    class Engineer : EmployeeType
    {
        public override int TypeCode => Engineer;
    }

    class Salesperson : EmployeeType
    {
        public override int TypeCode => Salesperson;
    }

    class Manager : EmployeeType
    {
        public override int TypeCode => Manager;
    }
}

Employee class after refactoring:

using System;

namespace RefactoringExamples.ReplaceConditionalWithPolymorphism
{
    public class Employee
    {
        private EmployeeType _employeeType;
        private readonly int _monthlySalary;
        private readonly int _commission;
        private readonly int _bonus;

        public Employee(int type)
        {
            Type = type;
            _monthlySalary = 100;
            _commission = 10;
            _bonus = 20;
        }

        public int Type
        {
            get { return _employeeType.TypeCode; }
            set { _employeeType = EmployeeType.TypeFrom(value); }
        }

        public int MonthlySalary
        {
            get { return _monthlySalary; }
        }

        public int Commission
        {
            get { return _commission; }
        }

        public int Bonus
        {
            get { return _bonus; }
        }

        public int PayAmount()
        {
            return _employeeType.Pay(this);
        }
    }
}

EmployeeType class after refactoring:

using System;

namespace RefactoringExamples.ReplaceConditionalWithPolymorphism
{
    public abstract class EmployeeType
    {
        public abstract int TypeCode { get; }

        public static EmployeeType TypeFrom(int value)
        {
            switch (value)
            {
                case Engineer:
                    return new Engineer();
                case Salesperson:
                    return new Salesperson();
                case Manager:
                    return new Manager();
                default:
                    throw new Exception("Incorrect Employee Code");
            }
        }

        public const int Engineer = 0;
        public const int Salesperson = 1;
        public const int Manager = 2;

        public int Pay(Employee employee)
        {
            switch (System.TypeCode)
            {
                case EmployeeType.Engineer:
                    return employee.MonthlySalary;
                case EmployeeType.Salesperson:
                    return employee.MonthlySalary + employee.Commission;
                case EmployeeType.Manager:
                    return employee.MonthlySalary + employee.Bonus;
                default:
                    throw new Exception("Incorrect Employee");
            }
        }
    }

    class Engineer : EmployeeType
    {
        public override int TypeCode => Engineer;
    }

    class Salesperson : EmployeeType
    {
        public override int TypeCode => Salesperson;
    }

    class Manager : EmployeeType
    {
        public override int TypeCode => Manager;
    }
}

Notice the reference to System.TypeCode in the EmployeeType.Pay method on line 30 above; this doesn’t compile, and is most definitely not what we wanted. My hypothesis is that once ReSharper has rewritten the code, it attempts to resolve and properly qualify any symbol names; in this case, it spots TypeCode and, because this is the name of a type known to the compiler, it decides that the name needs to be appropriately qualified, adding the System namespace. It’s failing to recognise that TypeCode is actually the name of a property on the EmployeeType class, and that no qualification is necessary.

I have found three ways to work round this:

First, you can give the property a name that is not already used by a type known to the compiler. If I rename TypeCode to Code, then the refactoring move works perfectly. In this case, this is probably a nice idea, as the name EmployeeType.TypeCode contains redundancy. However, in many cases the most appropriate name for a property may coincide with a type name, and renaming it will not be a good option.

The second option is to recognise that this happens, and fix it by hand. In this case, all I need to do is remove the System qualifier on line 30, and everything works correctly. I’m not a huge fan of this solution, as I like my automatic refactorings to be safe and reliable, but it may be a pragmatic choice.

The third option is to perform this refactoring by composing smaller, safe steps. This can give you finer grained control over the outcome, but at the expense of complexity.

I would be interested to hear if anyone has any further insight into what’s going on here, or if there are other techniques for overcoming it. Do let me know if you do!

Update (25 May 2016): I sent JetBrains a link to this post, and within an hour they raised a bug report. Thanks to Daria at JetBrains for the speedy response!