ActionInvoker method sequence

In one of the projects I’m playing with, I’m doing a bit of a hack over .NET MVC 3.

I’m providing my own implementation of IActionInvoker, currently by extending ControllerActionInvoker, and as part of this work, I’ve done a quick audit of the methods of this class, and the order they are called in.

I’m reproducing them here, in case it’s useful to anyone else:

  • InvokeAction
  • GetControllerDescriptor
  • FindAction
  • GetFilters
  • InvokeAuthorizationFilters
  • GetParameterValues
  • InvokeActionMethodWithFilters
  • InvokeActionMethod
  • CreateActionResult
  • InvokeActionResultWithFilters
  • InvokeActionResult
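
For context, here is a minimal sketch of the kind of override involved: a purely illustrative invoker that logs each action before deferring to the base implementation (the class name and the logging are mine, not part of the framework):

using System.Collections.Generic;
using System.Diagnostics;
using System.Web.Mvc;

public class AuditingActionInvoker : ControllerActionInvoker
{
	protected override ActionResult InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary<string, object> parameters)
	{
		// Called after the filters have run and the parameter values are bound
		Debug.WriteLine("Invoking action: " + actionDescriptor.ActionName);
		return base.InvokeActionMethod(controllerContext, actionDescriptor, parameters);
	}
}

Wiring it up is just a matter of overriding CreateActionInvoker on the controller to return an instance of this class.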

HATEOAS Console: the beginning

At 7digital we have 2 days’ innovation time every month. During this time we can work on our own pet projects. This post is about my current project.

You can find the source of this project at https://github.com/bnathyuw/Hateoas-Console

Introduction

RESTful web architecture is becoming increasingly influential in the design of both web services and web sites, but it is still very easy to produce half-hearted implementations of it, and the tools that exist don’t always help.

In this project, I want to address this problem by building a new REST console that will:

  • Reward good implementations by making it easy to take advantage of all their RESTful features;
  • Help improve less good implementations by exposing their shortcomings.

Basic principles of a RESTful interface

Richardson and Ruby (2007 pp. 79 ff.) present a good analysis of RESTful interface design. Drawing on Fielding (2000 s. 5), but with a focus on actual practice, they identify four key principles of Resource-Oriented Architecture:

  1. Addressability;
  2. Statelessness;
  3. Connectedness;
  4. Uniform Interface.

Addressability means that any resource in the application that a consumer could want to know about has at least one URI. This criterion is fairly coextensive with Fielding’s Identification of Resources requirement.

Statelessness means that every request should contain all the information needed for its processing. This overlaps with Fielding’s requirement that messages be self-descriptive, and that hypermedia be the representation of application state.

Connectedness means that each resource representation should give addresses of all related resources. The most effective way to ensure connectedness will often be to produce an entry-point resource, from which it is possible to navigate to all other resources. This furnishes the other part of Fielding’s requirement for hypermedia as the engine of application state.

Uniform Interface means that all resources can be manipulated in the same way. For web services, this almost invariably means using the HTTP verbs, viz DELETE, HEAD, GET, OPTIONS, POST, PUT &c. This principle supports Fielding’s self-description criterion, and specifies the means of manipulation of resources.

Most REST consoles are fairly successful in accommodating principles 1, 2 and 4, but fail significantly in accommodating principle 3. Under Fielding’s terminology, existing REST consoles give little support for hypermedia as the engine of application state (HATEOAS).

Existing REST consoles

There exist several good consoles for manually consuming RESTful services. These include:

  • Simple REST Client for Chrome
  • REST Client for Firefox
  • apigee

All of these clients work on a similar model: you enter a URI in the address box, choose an HTTP verb and click a button to send the request. You also have the option of adding headers and a request body. The headers and content of the response are then displayed on screen for the user to inspect.

How these consoles support the REST principles

Addressability

Addressability is a core notion in these consoles: the address box is a primary part of the UI, and you have to enter something here in order to make a request.

Statelessness

Statelessness is perhaps the easiest of the four principles to achieve, as the consoles operate on a request-response model.

In fact, what is useful in a console is the very opposite of statelessness: the console should be able to remember your preferences so that you do not have to enter them for each request.

With a significant exception discussed below, all three consoles do a fair job of remembering your choice of headers from one request to another, which takes some of the burden off the user. Apigee and REST Client for Firefox are also able to handle OAuth authentication, which is a nice feature.

Connectedness

None of the consoles deals successfully with connectedness. If you want to follow a link from the response, you have to copy the resource URI into the address box and submit another request.

Apigee differs from the other two consoles in having a side panel which lists the principal URI schemata for the service under test. This initially seems like a helpful feature, but has several unfortunate consequences:

  • Apigee uses WADL to create its directory of links. This encourages a return to the RPC style of service architecture, which thinks of a web service as being made up of a limited set of discrete endpoints, each with a particular purpose, rather than an unlimited network of interconnected resources which can be manipulated through a uniform interface.
  • As the endpoints are listed in the directory panel, it is less obvious when a resource does not contain links to related resources.
  • Apigee has no way of filling in variable parts of a URI. If, for instance, you click me/favourites/track_id (PUT), it enters https://api.soundcloud.com/me/favorites/{track_id}.json in the address box. You then have to replace {track_id} with the specific track ID you are interested in. This is of course no help if you don’t know which track you want to add to your favourites!
  • Each endpoint is listed with a .json suffix, no matter what format you have just requested. Also, any request headers you have filled in are forgotten when you click on a new endpoint.

These shortcomings not only make the console frustrating to use, but also encourage non-connected, RPC-style architectural decisions.

Uniform Interface

As with Addressability, the Uniform Interface is at the core of these consoles. The HTTP verb selector is prominent in each UI, and it is easy to switch from one to another.

Apigee supports GET, POST, DELETE and PUT, Simple REST Client for Chrome adds support for HEAD and OPTIONS, and REST Client for Firefox adds support for TRACE, as well as several more obscure verbs.

What none of these consoles does is make any attempt to figure out what representation of a resource should be submitted in a POST or PUT request body. This is particularly surprising in Apigee, as this information should be available in the API WADL document.

Conclusion

There are close points of comparison between a REST console and a web browser: each is designed to make requests to a particular URI using one of a small number of HTTP verbs, and then display a representation of that resource to the user. What makes a web browser so powerful — and indeed was one of the founding principles of the web — is that the user can click on links to get from one page to another. When you consider the primacy of the clickable link to the success of browsers, it becomes all the more puzzling that REST consoles do not implement this functionality.

The Project

Basic principles

The purpose of this project is to attempt to address some of the shortcomings of the currently available REST consoles, while retaining their good features:

  • The basic format of the existing consoles is successful: an address box, a verb chooser, and a send button;
  • Rendering all details of the response is also vital; REST Client for Firefox gives you the choice of viewing raw and rendered data, which is a nice additional feature;
  • The client should support as wide a range of HTTP verbs as possible, encompassing at least GET, POST, PUT, DELETE, OPTIONS and HEAD;
  • The ability to remember headers is very useful and should be kept, especially when clicking on a link;
  • OAuth integration is a nice feature and worth implementing if possible;
  • It would be very useful for the console to make a reasonable attempt at figuring out the request body format for PUT and POST requests;
  • Reliance on a WADL document encourages unRESTful thinking and should be avoided;
  • All appropriate links in the response body should be identified, and it should be simple to explore the API by clicking on them to make further requests.

Implementation decisions

I decided to implement this project in HTML and JavaScript, as this seemed the most portable platform. I am working on the assumption that the finished product will be a Chrome extension, as this lets me make some simplifying assumptions about the capabilities of the browser environment, and may also help solve some security issues.
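
As a first sketch of the central feature, here is roughly how I imagine identifying links in a response body (the function name and the regular expressions are illustrative; a real implementation will need to be far more discriminating about what counts as a link in each media type):

// Hypothetical sketch: pull href-like values out of a response body
function extractLinks(body, contentType) {
	var links = [];
	var matches, i;
	if (/xml|html/.test(contentType)) {
		// Treat href and src attributes as links
		matches = body.match(/(?:href|src)="([^"]+)"/g) || [];
		for (i = 0; i < matches.length; i++) {
			links.push(matches[i].replace(/^[^"]+"|"$/g, ""));
		}
	} else if (/json/.test(contentType)) {
		// Treat any absolute URI in a string value as a link
		matches = body.match(/"https?:\/\/[^"]+"/g) || [];
		for (i = 0; i < matches.length; i++) {
			links.push(matches[i].replace(/"/g, ""));
		}
	}
	return links;
}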

References

Fielding, R. T. (2000) Architectural Styles and the Design of Network-based Software Architectures. Doctoral dissertation, University of California, Irvine.

Richardson, L. & Ruby, S. (2007) RESTful Web Services. Sebastopol, CA: O’Reilly Media.

Site scans: a RESTful case study

I’ve been thinking a lot recently about REST, resource design and addressability, so I was interested to read an article by Troy Hunt on the particular challenges of creating URLs that refer to other URLs.

Scenario

The scenario Troy describes is for a site that will perform a security scan of your website, and then show a report of the result. Addressability is an important concern, as once you have the results of the scan, you may want to forward them to other people.

Troy discusses two methods for including the URL to scan:

  1. Include it in the hierarchical part of the URL (ie, the part after http: and before ?, eg http://asafaweb.com/scan/troyhunt.com/Search)
  2. Include it in the query string, eg http://asafaweb.com/scan?url=troyhunt.com/Search.

He favours the second approach for practical reasons.

A RESTful approach

Troy’s article approaches this question from the point of view of addressability, rather than resource design, and makes a sensible recommendation given the basic premisses; however, the scenario he outlines presents a good opportunity to do a bit of resource design analysis, which can lead us to an even better answer.

First then, let’s think about the type of resource we’re dealing with.

I think it is fair to make a few assumptions about the scan resource and the application that generates it:

  • It will take some time to perform the scan;
  • It will take a fair amount of computing resources to perform the scan;
  • The scan will be accurate for the point in time at which it was created; a subsequent scan of the same URL may generate a different result;
  • It may be interesting to compare scans over time.

From these assumptions we can draw a few conclusions:

  • A scan is an expensive resource to create, and will have a series of different statuses over its lifetime; this means we are looking at a transactional resource model here;
  • As a URL can be scanned more than once, it is not on its own a sufficient identifier of any scan.

If we follow these conclusions, we can make a sketch of the process of performing a scan:

1. Trigger the scan

We make a POST request with the details of the scan we want:

POST http://asafaweb.com/scans
{ "url": "http://troyhunt.com/Search", "options": {…} }

Note that we are POSTing to /scans; this request will create a resource subordinate to that collection. Also, as we are making a POST request, we can include further information about the scan we want, perhaps indicating which criteria we are interested in and what level of detail we require; I have indicated this possibility by including an options parameter.

The server responds not by showing us the results of the scan — they haven’t been produced yet — but by telling us where to look for the results:

201 Created
Location: http://asafaweb.com/scans/{xyz}

2. Check the scan URL

We can go and check this URL straight away by performing a GET:

GET http://asafaweb.com/scans/{xyz}

As the scan is still running, we don’t see the results, but rather a representation of a scan that is still in progress:

200 OK 
{ "scan": {
  "status": "in progress", 
  "url": "http://troyhunt.com/Search", 
  "created": "2011-09-18 11:57:00.000", 
  "options": {…} 
} }

Indeed, if there is a large volume of requests for scans, Troy may have to implement a queueing system, and our scan may have a status of "queued" until it can be processed; we could even cancel a scan by PUTting a representation with a status of "cancelled" to its URL, as sketched below, or perhaps simply by issuing a DELETE request.
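
A cancellation might look something like this, assuming the server accepts a partial representation:

PUT http://asafaweb.com/scans/{xyz}
{ "scan": { "status": "cancelled" } }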

3. Retrieve the scan

A little while later, the scan has completed. Perhaps we keep performing a GET on its URL until it’s done; perhaps we have submitted an email address in the initial POST, and have now received email notification that it is ready.

We perform another GET to see the results:

GET http://asafaweb.com/scans/{xyz}

And the server responds:

200 OK 
{ "scan": {
  "status": "complete", 
  "url": "http://troyhunt.com/Search", 
  "created": "2011-09-18 11:57:00.000", 
  "options": {…}, 
  "results": {…} 
} }

We can now send the URL http://asafaweb.com/scans/{xyz} to other people, and they will see the same results. The server doesn’t have to rescan the site, so retrieving these results can be a quick, inexpensive operation.

4. Search for scans

Throughout this example, I have used {xyz} to indicate the unique identifier of the scan. I have deliberately not given details of what this identifier might be. However, as I said earlier, the URL to scan is not a sufficient identifier, as we want to allow the possibility of scanning the same URL more than once. This identifier could include the URL, but this may not be the ideal solution, both for the technical reasons that Troy indicates in his article, and because this will produce very long identifiers, where we could probably make do with short strings of characters.

The result of this is that we have a system that is eminently addressable, but which uses identifiers that bear an opaque relationship to the scanned URL, and fails the findability criterion. I can easily send a URL like http://asafaweb.com/scans/f72bw8 to a friend, but if they do not have that address, they have no way of guessing that this is the address of a scan of my site.

To remedy this, we can implement a search interface. We already have an address for the entire collection of scans, viz /scans, so now we can just refine a request to this URL with a query parameter:

GET http://asafaweb.com/scans?url=http://troyhunt.com/Search

The server can then respond with a listing of all the scans that meet these criteria:

200 OK
{ "scans": {
  "url": "http://troyhunt.com/Search",
  "results": [
    {
      "status": "complete",
      "created": "2011-09-18 11:57:00.000",
      "link": {
        href: "http://asafaweb.com/scans/f72bw8"
      }
    }, 
    {
      "status": "complete",
      "created": "2011-08-28 17:36:24.023",
      "link": {
        href: "http://asafaweb.com/scans/89ew2p"
      }
    }, 
  ]
} }

Each result has a link to its location, so all results can be retrieved at any time.

By recognising that our scans are resources that can be searched, and by providing a means to perform that search, we have restored findability to our interface. Also, the fact that our search criterion is a refinement of a more general request, and thus appears in the query string, means that we arrive at a very similar conclusion to the one Troy reaches in his article, but this time for reasons based on resource design, rather than practicality.

Applicability to web sites

I have given all my examples in JSON, as it offers a concise and easy-to-read format for technical discussions. However, all of this discussion would work equally well with HTML web pages:

  1. The user fills in and submits a form to request a scan, and is redirected to the scan page;
  2. The scan page shows a message telling the user the scan is in progress;
  3. After a while a page refresh reveals the results of the scan, and the user can send the page URL to their contacts;
  4. The /scans page offers a search form into which the user can enter their URL and retrieve a list of all the dated scans for that location.

A couple of the implementation details will differ, because of the limited number of HTTP verbs and status codes that browsers can deal with, but the principles are exactly the same.
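
For instance, step 1 might boil down to nothing more than this (the field names are hypothetical):

<form action="http://asafaweb.com/scans" method="post">
	<label for="url">URL to scan</label>
	<input type="url" id="url" name="url">
	<button type="submit">Scan</button>
</form>

On submission, the server creates the scan and redirects the browser to the new scan’s page: the browser equivalent of the 201 Created and Location header above.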

TPT mapping in Entity Framework CTP 5

I’ve been working on a little project with MVC 3 and Entity Framework Code-First CTP (community technology preview) 5, and have been implementing TPT (table-per-type) inheritance for the various types of user in the system: Customers, Administrators &c.

Yesterday, I added an Administrator class to my model, and this happened:

Yellow screen of death: “The configured property 'Forename' is not a declared property on the entity 'Administrator'. Verify that it has not been explicitly excluded from the model and that it is a valid primitive property.”

The exception message reads: The configured property ‘Forename’ is not a declared property on the entity ‘Administrator’. Verify that it has not been explicitly excluded from the model and that it is a valid primitive property.

All very puzzling, as here is my User class:

public class User
{
	public int Id { get; set; }
	[Display(Name = "Forename"), Required]
	public string Forename { get; set; }
	[Display(Name = "Surname"), Required]
	public string Surname { get; set; }
}

and here is my Administrator class:

public class Administrator : User
{
}

(OK, the Administrator class isn’t terribly useful right now, but I’m just fleshing out the schema at the moment.)

Here, finally, is my Repository class, somewhat abbreviated:

public class Repository : DbContext
{
	protected override void OnModelCreating(ModelBuilder modelBuilder)
	{
		modelBuilder.Entity<Administrator>().ToTable("Administrators");
		modelBuilder.Entity<Customer>().ToTable("Customers");
	}
	public DbSet<Administrator> Administrators { get; set; }
	public DbSet<Customer> Customers { get; set; }
	public DbSet<User> Users { get; set; }
}

After some futile reading around, I finally decided to roll back to my previous working version, at which point the yellow screen of death disappeared. And it was at this point that the revelation came to me: one of the changes I had made was to alphabetise my DbSet properties, moving Users past Administrators and Customers.

So I went about adding the Administrator class again, but this time keeping the Users property above those for its subclasses:

public class Repository : DbContext
{
	protected override void OnModelCreating(ModelBuilder modelBuilder)
	{
		modelBuilder.Entity<Administrator>().ToTable("Administrators");
		modelBuilder.Entity<Customer>().ToTable("Customers");
	}
	// Declare superclass first
	public DbSet<User> Users { get; set; }
	public DbSet<Administrator> Administrators { get; set; }
	public DbSet<Customer> Customers { get; set; }
}

and as if by magic the application started working again.

Moral

When implementing TPT inheritance in CTP 5, declare the superclass DbSet property before its subclasses.

A clock

Peer and I have spent today building HTML / JavaScript clocks.

I thought I would create an analogue clock, but with a few alterations to make it an interesting project; I was also keen to avoid using any image files, so radial gradients seemed like a good option. Because we’re using the client-side time, this experiment does use JavaScript; also, as I’ve used gradient backgrounds, this will only work in Mozilla and Webkit browsers.
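
Whatever the face looks like, the script boils down to reading the client-side time on a timer and restyling the page accordingly. Here is a minimal sketch of that technique (the element id and the rotating-hand styling are illustrative, not how my clock actually works):

// Read the client-side time and update the display once a second
function tick() {
	var now = new Date();
	var hand = document.getElementById("second-hand");
	// 360 degrees / 60 seconds = 6 degrees per second
	hand.style.webkitTransform = "rotate(" + (now.getSeconds() * 6) + "deg)";
	hand.style.MozTransform = "rotate(" + (now.getSeconds() * 6) + "deg)";
}
setInterval(tick, 1000);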

Here’s the result:

Cell clock screenshot at 17:34

http://playground.matthewbutt.com/clock.htm

Addendum: you can see Peer’s clock at http://www.curiouspixels.co.uk/peersclock/

i10

The other week, during some time we had off work, Peer and I went on a cultural expedition to the two Tates to see the Henry Moore, Chris Ofili and van Doesburg exhibitions. The last of these gave us the most inspiration: Peer was transfixed by the animated films, and I found plenty of typography and print design to keep me excited.

So I had a go at reproducing one of the designs: César Domela’s cover for the 4th edition of i10.

And here’s my attempt:

i10 as a web page

http://playground.matthewbutt.com/i10.htm

Nothing terribly exciting technique-wise in this one: just web fonts and some use of :nth-child.
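
For anyone who wants the pattern, it is roughly this (the font name and selectors are placeholders rather than my actual stylesheet):

@font-face {
	font-family: "DisplayFace";
	src: url("displayface.ttf");
}
h1 { font-family: "DisplayFace", sans-serif; }
/* Style alternating repeated elements without extra classes */
li:nth-child(odd) { color: #c00; }
li:nth-child(even) { color: #000; }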


Constructivism Part III

Today’s study is the cover of Generation X’s 1977 single, Your Generation:

Black, red and grey construction with 'Generation X' in bold white text

Here is my version:

http://playground.matthewbutt.com/generation.htm

Please forgive any inconsistencies in the font rendering: I couldn’t find an open-licensed font with quite the right geometry, so I fell back on Helvetica, which may render unpredictably on different platforms.

This composition uses a handful of blocks with multiple backgrounds:

  • The upper part of the image, which looks a little like the Polydor logo, is a pseudo-element with four backgrounds, from top to bottom:
    • The small red circle
    • A white masking area, which covers the lower halves of:
    • The large black circle
    • The red block on the side.
  • The title strip is actually an h1 (‘Generation’) with a span inside (‘X’). The stripes are hand-coded linear gradients, and the cross-hatching in the small square is two repeating gradient stripes laid on top of each other.
  • The red triangle is another pseudo-element, with a single, angled gradient background.

The key lesson from this exercise was how tricky it can be to get webkit gradients right. The -webkit-gradient syntax is much less intuitive than the -moz-xxx-gradient syntaxes, and the repeated gradient declaration is also something of a fiddle. As to angling the red triangle, I couldn’t be bothered with more trig, so I just used trial and error.
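
To give a flavour of the difference, here is roughly how a repeated diagonal stripe comes out in each syntax (the colours and measurements are illustrative, not the real values from the page):

/* Mozilla: repetition is part of the gradient declaration */
background: -moz-repeating-linear-gradient(-45deg, #000, #000 2px, transparent 2px, transparent 6px);

/* Webkit: declare a single tile, then size and repeat it by hand */
background: -webkit-gradient(linear, 0 0, 6 6, from(#000), color-stop(0.33, #000), color-stop(0.34, transparent), to(transparent));
-webkit-background-size: 6px 6px;
background-repeat: repeat;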


CSS Seesaw

It’s Friday night, so not a big one tonight.

I thought I would have a quick play with CSS transitions, and nothing seemed a better demonstration than a seesaw.

Here it is (warning: this only works in Webkit browsers at the time of writing):

A red seesaw with a black ball on it, all balanced on a black pivot

http://playground.matthewbutt.com/seesaw.htm

This little animation uses two transitions: one to tip the seesaw back and forth, and the other to roll the ball from one end to the other. There were only two slightly complex matters here: I needed to brush up on my A-level trig to get all the distances right, and a little sleight of hand was in order to create the triangular pivot (fire up Firebug if you want to see how I did it).
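
In outline, the two transitions look something like this (the selectors, angles and timings are made up for the sketch, not copied from the page):

/* Tip the plank about its pivot */
#plank { -webkit-transition: -webkit-transform 1s ease-in-out; -webkit-transform-origin: 50% 100%; }
#plank.tipped { -webkit-transform: rotate(8deg); }

/* Roll the ball to the lower end */
#ball { position: absolute; -webkit-transition: left 1s ease-in-out; }
#plank.tipped #ball { left: 85%; }

A scrap of JavaScript toggling the tipped class on an interval is enough to keep the whole thing rocking.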

There are a few small bugs: poor aliasing, and a ghost top border on the seesaw element, but they can wait for another day.

And now, goodnight!

Constructivism Part II

Following on from yesterday’s work, another experiment today, and this time I chose a real piece of constructivist design to copy:

Image composed of text, semicircular blocks of colour, and diagonally placed squares and strips
The simple lines of this image make it fairly straightforward to lay out in HTML and CSS:

http://playground.matthewbutt.com/why.htm

Here’s how I did it:

My h1 contains the text ‘WHY?’. The h1 is a 200px square absolutely positioned on the page.

To create the semi-circle, I’ve given this element two background gradients. The first uses a linear gradient to draw two blocks of colour on the page: the upper block is solid parchment colour, whilst the lower block is transparent.

In Mozilla this is marked up like this:

-moz-linear-gradient(top, #F1ECD9 100px, rgba(241, 236, 217, 0) 100px)

In Webkit it takes this form:

-webkit-gradient(linear, 0 100, 0 200, from(#F1ECD9), color-stop(0, rgba(241, 236, 217, 0)), to(rgba(241, 236, 217, 0)))

You’ll see that I’ve defined the transparent colour as rgba(241, 236, 217, 0) rather than simply transparent; I found that using the plain transparent keyword gave me some ghosting (the keyword is equivalent to transparent black, so the gradient fades towards black rather than towards see-through parchment), which is clearly not the intention.

Underneath this gradient is a second background, which this time defines a radial gradient:

-moz-radial-gradient(100px 100px, circle cover, black 100px, #F1ECD9 100px, #F1ECD9 200px)

-webkit-gradient(radial, 100 100, 100, 100 100, 200, from(black), color-stop(0, #F1ECD9), to(#F1ECD9))

In each case, this draws a 100px-radius black circle, followed by an area of parchment colour.

The lipsum content with the inverted red semicircle is coded in a similar way, although I could do away with the linear gradient, as my p tags give me the perfect hooks for a parchment-coloured background without worrying about gradients. The text is shifted down with a simple padding-top rule, and the red line down the side is an equally straightforward border-right.

The large red square doesn’t actually exist: it’s an :after pseudo-element on the body tag, which is then sized, positioned and rotated. I had to give it content: ' ' to get it to appear, but otherwise it’s pure smoke and mirrors.

Finally, the three 45° links were interesting to position:

They start off as three li elements arranged normally on the screen:

Three blocks, one above the other, with space between

Next, I rotate the containing ul by 90° widdershins around its top right corner:

Three blocks next to each other with the text running bottom to top

Finally, I rotate each li by 45° clockwise around what was originally its middle-right and is now its top-centre:

Three diagonal blocks next to each other

These are then positioned absolutely on the page.
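
In Mozilla syntax, the two rotations come out something like this (the measurements are illustrative):

/* Step 1: rotate the list 90° widdershins about its top right corner */
ul { -moz-transform-origin: 100% 0; -moz-transform: rotate(-90deg); }

/* Step 2: rotate each item 45° clockwise about its own middle-right */
li { -moz-transform-origin: 100% 50%; -moz-transform: rotate(45deg); }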

And that’s my piece of constructivist CSS for the day.

I have one outstanding problem: the edges of my semi-circles are unpleasantly aliased. I’ve tried leaving a small gap between the colour stop points in the gradient to see if that helps, but the effect is pretty unsatisfactory. Any suggestions would be welcome!


Look mum: no images!

One of the downsides of specialising is leaving behind areas of knowledge you used to love. As I have specialised in the programming side of web development, I have found myself getting further and further away from working with HTML and CSS, and letting my markup skills get rather rusty.

Of course, over the last couple of years, all sorts of exciting things have been happening with HTML and CSS, as browsers implement more features of HTML 5 and CSS 3, and the skills and techniques I used to be able to boast of are now anything but cutting edge.

So I thought I should do something about it, and there’s no better way to deal with something like this than to have a good play around. The following pages are rendered entirely using HTML and CSS; no images were harmed in the preparation of these pages.

Note: these pages render correctly in Firefox 3.6 and Safari 4.0.4. They probably don’t work in older versions and are most definitely NSFIE.

Landscape

First, then, I thought I would have a bit of a play with web fonts, gradients and transformations. Here’s the result:

Pale blue sky with yellow sun made from the word 'sunshine' repeated 13 times; green earth fades to brown with 'earth' in large brown letters

http://playground.matthewbutt.com/landscape.htm

It took a little adjustment to get the rays of sunshine correctly positioned: the key was to position the transform origin at 50% of the height of the text.
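
In other words, each repetition of the word shares the same transform origin and gets a progressively larger angle; something like this (the values are illustrative):

/* 13 rays: 360° ÷ 13 ≈ 27.7° between each */
.ray { -moz-transform-origin: 0 50%; }
.ray:nth-child(2) { -moz-transform: rotate(27.7deg); }
.ray:nth-child(3) { -moz-transform: rotate(55.4deg); }
/* … and so on round the circle */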

Constructivism

Next I thought I would try my hand at a little faux-constructivist design. The simplicity and clear colours of constructivist design are perfect for online material, but the jaunty angles have always posed a major problem: either render the text as images, or give up. With CSS 3’s transformations, this is no longer a problem, and there’s scope for a Russian revival:

'Construction site' in red and black crosses at an angle with 'a page of experimental stuff'; six yellow-and-black blocks float above

http://playground.matthewbutt.com/construction.htm

Again, getting everything to line up took a little getting used to, and it boiled down to the same issue: getting the transform origin of every element in the same place, and then rotating around that point.

A real-world example: Magazine’s Touch and Go

OK, my constructivist sketch isn’t exactly high design, so I thought I would find something that had already been done, and have a go at copying it.

Here is the cover of Magazine’s 1977 debut single, Touch and Go:

Five red and black blocks stand side by side but at different vertical positions; 'Magazine' runs through them; underneath 'touch and go' and 'goldfinger' are arranged asymmetrically

And here is a quick HTML version:

http://playground.matthewbutt.com/magazine.htm

The only advanced technique here is the use of web fonts, although I’ve made liberal use of the :nth-child pseudo-class to apply styles to the coloured panels. I have to confess to using a few non-essential spans for this one, but I think the result is pretty pleasing for an image-free page.

And that’s my lot for today. There’s plenty more excitement in the latest implementations of HTML and CSS, so I’ll post some more experiments when I have a moment.
