DDD North 2015

Saturday, November 24th 2015 – the day which saw the fifth annual running of DDD North, this time hosted at Sunderland University in the North East of England. In case you’ve never been to a DDD event before, they are free events for developers, run on a Saturday, with an agenda voted on by the community. Generally they cover a wide range of topics, including JavaScript, Asp.Net, machine learning, coded UI testing and Raspberry Pi. Not only do they feed you at lunch time, but there’s also a chance to win some pretty cool swag at the end!

I was lucky enough to get along to this year’s event again (last year was hosted in Leeds) and attend some of the sessions.




Testing against Azure Table Storage with Node.JS, Mocha and Sinon

If, like me, you wanted to be able to write a Node.js app and use Azure Table Storage but also wanted to be able to effectively test your app without actually connecting to Azure, then here’s how!

Note: This article assumes you have a grasp of node, how to run node tasks and how to install node packages.

The App

First of all, allow me to direct you to the sample application on GitHub that we’ll be using for this article. It’s a simple console application that:

  • Creates a new table service instance
  • Writes a new entity to the table
  • Reads the same entity back from the table

It also contains a full test suite that examines and verifies the use of these three operations. The great thing is, the test suite does not connect to Azure! It simply verifies that the correct calls were made and stubs out everything else, which makes your tests reliable, fast and robust.

The main script is index.js which actually does the work, and the test suite is located in spec/azureSpec.js. To run the app, simply execute npm start from the console, and npm test to run the tests.

You don’t need to run the app, nor even have an Azure storage account to run the tests and to follow the article, but in case you do have an account and want to run the app itself (which actually does connect to Azure and read/write data), then you’ll need to add a configuration file to the config folder containing your Azure account details. To do this, create a file called ‘default.json’ that looks like the following:
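Based on the settings the app reads later (a config.get('azure') call returning accountName and accountKey), the file would look something like this – the exact values shown here are placeholders to substitute with your own account details:

```json
{
	"azure": {
		"accountName": "your-storage-account-name",
		"accountKey": "your-storage-account-key"
	}
}
```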

The Packages

The main npm packages at play here are the following. There are others, but these are the primary ones involved in accomplishing what we need for this article:

azure-storage: Not strictly used as part of the test suite, but this is the API for working with Azure Storage (tables, blobs and queues). This is what we want to mock when running our tests

mocha: The test runner – this is what executes our tests and displays the results

chai: The assertion library – this is what allows us to verify the results, to prove that 1 + 1 = 2, or that a specific method was called by the test subject

sinon: The mocking/stubbing library – this allows us to replace methods and objects with fake ones, and be able to verify operations that act on them

sinon-chai: A sort of ‘glue’ between sinon and chai. It adds some helpers that make writing assertions on stubs and mocks a bit more friendly

proxyquire: Allows us to replace modules with other modules when we do ‘require’ inside a test subject – useful for swapping out the real azure-storage library with our own fake one!

The Tests

Let’s get to it. I’ve put all my tests for this into one file called ‘azureSpec.js’ inside my ‘spec’ folder, and my npm test command is set to mocha spec/ --recursive, just so that it’ll run all my tests in all my sub-folders in case I decide to add more later. By default Mocha expects your test file names to end with ‘Spec’ (as in ‘azureSpec.js’), so if you find it’s mysteriously not picking up a test suite, check your file names.
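For reference, the scripts section of the project’s package.json would look something like the following – the start command is an assumption based on the main script being index.js:

```json
{
	"scripts": {
		"start": "node index.js",
		"test": "mocha spec/ --recursive"
	}
}
```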

Setting up

First, a bit of set-up inside the test file to configure all the packages we’re using. Most of them are just standard requires, but there are a couple of interesting points.

var chai = require('chai');
var sinon = require('sinon');
var sinonChai = require('sinon-chai');
var proxyquire = require('proxyquire').noCallThru();
var azure = require('azure-storage');

chai.use(sinonChai);

var expect = chai.expect;

Everything is included that we need in order to assert conditions using Chai, create stubs with Sinon and mock out other requires with proxyquire. A couple of things to note are:

  • I’ve set proxyquire’s noCallThru() flag, which tells it not to call through to the real underlying modules
  • I’ve included sinon-chai, which contains some helpers for asserting method calls and whatnot, and then wired it up to chai using chai.use(sinonChai). This will make it very easy later when we come to verify that calls to methods were made with the correct arguments

Creating Stubs

The next thing to do is start creating our stubs. Looking at our test subject, i.e. the module that we’re testing, there are two things we need to stub out: the azure library, and our custom configuration file (the uuid module is fine since all that does is create some Guids for us):

var azure = require('azure-storage');
var config = require('config').get('azure');
var uuid = require('node-uuid');

var tableService = azure.createTableService(config.accountName, config.accountKey);
var entityGen = azure.TableUtilities.entityGenerator;

As you can see, we use the azure package to create a table service instance, which we use to interact with Azure Storage. We need to swap this instance out with something we can control as part of the test and stop it from trying to connect to actual Azure. We also need to stub out the config file, mainly because I realised that if you don’t create one for this demo (which you might not, seeing as the configuration is not under source control) then the test will break. Besides, it’s easy to do.

Stubbing out the configuration looks like this. We create an object with a ‘get’ stub on it (representing the ‘get’ method) and have it just return an empty configuration object.

var config = {
	get: sinon.stub().returns({
		accountName: '',
		accountKey: ''
	})
};

Next, we need to stub out the table service type. createTableService returns an object that has various methods on it for retrieving and updating data, so let’s stub out the three methods that our module uses: createTableIfNotExists, insertEntity and retrieveEntity:

var tableServiceStub = {
	createTableIfNotExists: sinon.stub().callsArgWith(1, null, null),
	insertEntity: sinon.stub().callsArgWith(2, null, null),
	retrieveEntity: sinon.stub().callsArgWith(3, null, null)
};

Here we create a sinon stub for each method that the module will call, except this time we say “when insertEntity is called, call the function at argument position 2 with a null and another null as arguments”. Why do we do this?

Look at the signature for insertEntity:

tableService.insertEntity(tableName, entity, callback)

It takes a table name, the entity to update and a callback for when it has finished. So if we get our stub to call the function at argument 2, we’re just telling it to return a response straight away, simulating what Azure would have done out in the wild. We’ve set the return arguments to null for now, but we’ll come back to that later. The other two methods, createTableIfNotExists and retrieveEntity have been set up exactly the same way, except with a different argument number to reflect the position that the callback appears in the signature.

Finally, let’s stub out the azure object:

var azureStub = {
	createTableService: sinon.stub().returns(tableServiceStub),
	TableUtilities: {
		entityGenerator: azure.TableUtilities.entityGenerator
	}
};

The trick here is that we’re telling createTableService to return the stub that we created in the previous step. So now we have a complete chain of stubs for Azure, and actual Azure should not be touched. We also include the existing entity generator utility straight out of the azure package, as our module will be making use of that.

Setting up Proxyquire

Proxyquire is a package that helps us test modules in isolation by supplying replacement modules when the test subject calls ‘require’. This is how we can tell our test subject to use our stubbed azure module instead of the actual azure module, without actually changing the test subject:

proxyquire('../index.js', {
	'azure-storage': azureStub,
	'config': config
});

The first argument is the module to require, i.e. our test subject, and the second argument is an object hash containing the modules we want to replace. Note that the object keys match the module names as required by the test subject.

Including this snippet has actually executed the module with our stubbed-out dependencies, so the test subject has effectively run. In your case, your test subject might be a class or function that you have to instantiate or call methods on in order to do anything, as in:

var subject = proxyquire('../index.js', {
	'azure-storage': azureStub,
	'config': config
});


Authoring the tests

At this point, the test has been executed and hopefully hasn’t produced any errors. So how can we verify what happened?

Line 11 of the example project inside index.js has a call to createTableIfNotExists, so let’s make sure that was done:

// index.js
tableService.createTableIfNotExists('TestData', function(err, result) { ... });

// azureSpec.js
it('creates the table', function() {
	expect(tableServiceStub.createTableIfNotExists).to.have.been.calledWith('TestData');
});

The syntax with sinon, chai and sinon-chai makes reading the test very self-explanatory. Here we simply verify that createTableIfNotExists was called with the argument ‘TestData’. If not, an exception will be thrown by the test runner and reported.

The next thing the test subject does is insert a new entity:

var id = uuid.v4();

var entity = {
	PartitionKey: entityGen.String('row'),
	RowKey: entityGen.String(id),
	message: entityGen.String('This is another row in the table')
};

tableService.insertEntity('TestData', entity, function(err, result) { ... });

Here we can test a few things: that the call to insertEntity was made, that the partition key was correct and that it had an id value for the row key. Here’s the test (inside its own ‘describe’ block):

var generatedId;

// Next test - make sure it inserted the right entity
describe('the insert entity operation', function() {
	var insertedEntity = tableServiceStub.insertEntity.args[0][1];

	// Store the id of the entity that was created, so that we can test that we retrieved it again later
	generatedId = insertedEntity.RowKey._;

	it('is called', function() {
		expect(tableServiceStub.insertEntity).to.have.been.calledWith('TestData');
	});

	it('has the right partition key', function() {
		expect(insertedEntity.PartitionKey._).to.equal('row');
	});

	it('has a row key', function() {
		expect(insertedEntity.RowKey._).to.be.ok;
	});
});

This is much like the earlier test, except it makes use of some additional Chai assertion methods like ‘equal’ and ‘to.be.ok’. One thing I do here is record the ID that the entity was given, so that I can verify the same entity is retrieved later. I can do this by retrieving the arguments that the stub was called with, using the args collection on the stub, to get hold of the entity object that the test subject created.

Finally, the test subject retrieves the entity it just created, with:

tableService.retrieveEntity('TestData', 'row', id, function(err, result) { ... });

So let’s test that it did that:

describe('the retrieve entity operation', function() {
	it('gets the correct entity', function() {
		expect(tableServiceStub.retrieveEntity).to.have.been.calledWith('TestData', 'row', generatedId);
	});
});

Exactly the same as before, except with different arguments. I’ve also verified it with the ID that the entity was created with, so I can make sure that the same entity was retrieved again.


Hopefully this has given you some insight into how to write test subjects that can use fairly complex libraries in practice, but can be stubbed out for testing. Again for reference, the example project that accompanies this article can be found on GitHub: https://github.com/elkdanger/blog-azuretesting. Feel free to clone/modify/tweak!


Grunt, NPM and Bower in Visual Studio! It’s awesome.. right?

The news that has come out of the Visual Studio team at VSConnect from the likes of Scott Guthrie (@scottgu), Scott Hanselman (@shanselman), Soma (@ssomasegar) and friends has been nothing short of blinding: an open-source .Net Framework? Visual Studio 2015 and the new Community Edition? Asp.Net vNext? A strong focus on cross-platform mobile development? First-class support for Apache Cordova? Roslyn? C# Intellisense in text editors like Atom and Sublime? A whole host of other delights, improvements, products and features? Yes please! And it’s all been very, very well received by the .Net community of developers.

Being a web applications developer through-and-through, I’ve been very interested in what is coming out of the Asp.Net team regarding the new vNext and MVC frameworks, and VS Connect didn’t disappoint in that area either. We’ve known about the changes coming with MVC 6 and the shift towards a more composable application framework for a while, but today really hit home the deep – and flexible – integration with client tools like Grunt, Bower, Gulp and Node Package Manager.

And it’s awesome!

For the uninitiated, Grunt is a JavaScript task runner that allows you to define a build script of sorts, which can be configured to run whatever tasks you need it to, including copying files, compiling LESS files, compiling TypeScript, linting JavaScript and cleaning directories. There’s a vast repository of tasks that can be installed through NPM, not to mention the ability to author your own with ease. I believe the release version will include support for Gulp, as it doesn’t seem to be included with the current VS 2015 November Preview, but I’m sure the tooling will offer a very similar experience.

Edit: As Mads Kristensen points out in the comments below, for 2015 Preview you must have Gulp installed globally through NPM in order for the respective tooling to light up. Use npm install -g gulp to install the task runner of your choice. He also says this won’t be needed in the release version!

Then there’s NPM (Node Package Manager) and Bower, which are like package managers for the client. Analogous to Nuget, they provide node packages and client assets respectively, not to mention they work cross-platform. In the context of an Asp.Net website, NPM will provide you with your Grunt tasks, and Bower will provide you with your client-side runtime assets, such as jQuery, Bootstrap and AngularJS. One of the changes I expect to make in my own personal project development is the shift away from using the built-in Bundling and Minification framework provided by Microsoft.AspNet.Web.Optimization, and instead using a system driven by Grunt and NPM.

All of this is music to my ears. As a .Net developer who has been delving into the world of Node.js web development of late, I have seen the light when it comes to node packages, grunt build tasks and all the flexibility that comes with them. Especially seeing as Azure can already host all of this stuff for you. Topping it all off with Git deployment, it’s a fantastic time to be a web developer, never mind an Asp.Net web developer.

But what does all this look like to a thoroughbred .Net developer, who hasn’t used all these tools before, or possibly never used MVC before? How will the changes coming with MVC 6 look to them? I fear that it’s going to just look.. strange.

Let’s look at what a brand new MVC site looks like. Here I’ve created a new blank MVC project and added some stuff to it, and I’ve got something we can run straight away. This is the layout of the project:

MVC 6 Project

A few things to note:

  • There’s a wwwroot directory – this is where your static assets go, such as your images, JavaScript and CSS
  • There’s a dependencies folder with a weird icon, containing NPM and Bower packages
  • There’s loads of .json files in the root. They configure things
  • There’s a Gruntfile – these are your client build tasks
  • There’s a Startup.cs file that contains our bootstrap and setup code
  • It’s got our normal MVC stuff, like controllers and views
  • There’s no web.config

This is quite the departure from what a normal Asp.Net site looks like. On top of that, with its lack of a build step thanks to Roslyn, it also feels different – like a throwback to the old days of Web Site projects, where you didn’t have to manually include things in the project file and could just edit C# code without building manually. Roslyn takes care of the build process now, and we’re promised it will be faster by the time Visual Studio ships.

In terms of the future of MVC and web development with Asp.Net vNext, my gut feeling is that it’s going to take a little while for people to buy into this. There’s a lot that’s new here for someone coming just from MVC 5, and it’s a total rewrite for those coming straight from Web Forms to embrace the new stack – but at least they’d be on the path to the future. The fear is that they might feel they have to step into the command line to complete whatever task they need in terms of building or downloading dependencies and assets for their website. Just looking at the root of this project, there are three .json files that I now have to manage (I still get confused as to which to open between project.json and package.json when I want to configure my site), and even more once I start getting into the new app configuration model.

Fortunately, VS does a very good job at keeping you from dipping into the command line. The tooling to manage your packages and running Grunt tasks is exceptional:

  • You can install bower and NPM packages by right-clicking and selecting “Restore”
  • You can author your Gruntfiles inside VS just like any other file
  • You can run Grunt tasks from the new Task Runner Explorer (see below), right down to the individual task
  • Furthermore, you can assign tasks to various build steps – PreBuild, PostBuild, Clean and Project Open

Here’s what my sample Gruntfile with the Task Runner Explorer open looks like:

Task Runner Explorer


Notice that it’s picked up the individual tasks and listed them in a tree. I can go and right-click on each of those and run them individually if I like. From the same menu, I can also assign them to one of the four steps in the build process! Furthermore, all this is doing is running the exact same commands that I would do myself from the command line – there’s no real magic here, just convenience.

One final gripe, which I’ve forwarded to the VS team through the feedback tool, is that this experience doesn’t seem to be enabled when working with Apache Cordova apps, which would be insanely handy. Given that they too are Html/JS/CSS apps (we really need an acronym to cover that), it feels like I could be using some of that Grunt/Bower power there too – there’s nothing to stop you doing it from the command line – and it would be nice if VS enabled that scenario.

Personally I’m really looking forward to being able to work with all this stuff out in the wild; there’s already a vast ecosystem with Grunt/Gulp, Bower and NPM and rather than shun it, Visual Studio has embraced it for the benefit of all involved, not just .Net web developers.


Fluent Test Subject Builder with Moq

Attending DDD North a couple of weeks ago, I was inspired by Alastair Smith‘s talk on “Refactoring out of Test Hell”, which covered a few things that a development team can do to make unit testing easier, run faster and generally cause less day-to-day pain than it otherwise might. One of those things was creating test subjects, and how you can create and manage all of the dependencies that a test subject might require. I thought I would write a short post on how we tackled this issue ourselves.

At Orchid, we created the idea of a builder class with a fluent syntax in order to create our test subjects. We required something which would create an instance of whatever type you specify. Generally, though, it needed to be smarter than that – it needed to be able to provide dependencies, and it needed to be able to provide defaults for dependencies that we’re not really interested in for a given test.

This is what we came up with:

  • A TestSubjectBuilder<T>, where T is the subject under test
  • A Bind<U> method which binds a dependency to the builder for creation
  • A Build method which constructs the instance with all the dependencies

The last point is interesting – how does it construct the instance if you’re not guaranteed to provide all the dependencies it needs?

Resolving dependencies

All that is needed here is a little bit of reflection. Given that we have our test subject T, we can use reflection to find a constructor on that type that we can use. The current implementation only works with subjects that have exactly one constructor, as that is all our requirements called for, but it could easily be extended to select a constructor intelligently – perhaps the most specific one.

Then, you walk the parameters that the constructor expects, and match them up with the dependencies that were given to it via Bind(). Bind simply takes the dependency and adds it to an internal ServiceContainer, so that they can be looked up when the constructor arguments are being iterated over.

When the builder comes across a parameter that it does not have an instance for, we dynamically create one using Moq. If the parameter type is U, we use reflection to instantiate a Mock&lt;U&gt; and pass its Object property into the constructor instead. This way, the test subject can be built with everything it needs, and because you haven’t explicitly told it to bind a parameter, the assumption is that you’re not interested in anything that happens to that dependency.

A couple of examples of usage:



Real-time system resource monitor with SignalR, MVC, Knockout and WebApi

Note: This article is a re-write of a previous article, showing how to build a real-time system monitoring application using SignalR and WCF. This update shows how to build the same thing, but with the release version of SignalR and using WebAPI instead of WCF.

Application building time! This article is designed to give you, the reader, some grounding in a few different technologies and help you build a working application which may or may not be actually useful. Nonetheless, it should be fun to build and at the end you will hopefully see some nice web wizardry to keep you entertained. Beware – this is a lengthy one, but you can fork/download the code from Github.

Here’s what we will build:

  • An MVC web application, exposing a Web API endpoint to clients
  • A console application to securely send processor and memory information to the service
  • A web page to show these system stats in real time using SignalR and KnockoutJS

These are the technologies we are going to use, all of which are installed through Nuget:

And this is what will happen: the console application will run on the host PC and post system resource usage as JSON to the Web API endpoint which is running on the web server. The endpoint will then send this information straight to the SignalR hub, which will then broadcast the information to all clients, who will display this information on the page.

In this article, I am using Visual Studio 2013 (with Update 3 applied) with an Asp.Net MVC 5 web project.

Before we get started, you should familiarise yourself just a little with SignalR and KnockoutJS as I’m not going to go into the technical concepts behind them, but merely show you how to use them. Both sites have excellent tutorials here and here to get you started. Also, I assume you know your way around Asp.Net MVC.

From now on, I will host any code that accompanies a tutorial or article that is worth downloading, on Github. You can download/fork/play with the code for this article to your heart’s content!



DDD North 2014

So I finally attended DDD North this year, which was held at Leeds University. It was sold out within a single day or something ridiculous like that, which is a testament to how many people look forward to this conference. I believe next year it (hopefully) heads back up to Newcastle, but we shall see! The day was very well put together by Andrew Westgarth (@apwestgarth) and co., and really was an incredible feat considering it’s a free event.

As much as I’d like to be in five places at once, I could unfortunately only attend one of the five sessions which ran concurrently in each slot.

I started off with Sana Sarjahani‘s “Writing JavaScript Unit Tests”, which was an insightful introduction into writing unit tests for JavaScript, using Jasmine in particular.


Changing the ordering for single bundles in Asp.Net 4

The new bundling support for Asp.Net MVC and WebForms is superb, and as it’s all built into the new framework, there’s no real reason not to use it if you’re not doing anything of this kind already. In a nutshell, it allows you to package up all your related scripts and css files and serve them up as one request, and even minify them in the process. You can even get it to transform LESS files and all sorts of other cool stuff.

And that’s what I like about it; it’s so configurable, and it even does some smart things for you, like automatically favouring minified javascript files over non-minified (based on popular filename convention) when running with optimizations turned on. It will even put known framework javascript files first in the bundle automatically, such as jQuery or Prototype scripts, to make sure they run before your own code which uses their types gets executed.

But this last one can be a sticking point, as in the case I had today. I was using the popular Plupload JavaScript file uploader, which requires a number of JavaScript files to be included on the page. It also has a jQuery-UI extension library, which must be executed after the primary Plupload library, otherwise it complains about missing types and methods. So I created a bundle to handle all this stuff, which looks like this:

string pluploadBase = "/scripts/jquery/plupload/1.5.4/";
var pluploadBundle = new ScriptBundle("~/bundles/js/plupload").Include(
    pluploadBase + "plupload.full.js",
    pluploadBase + "plupload.browserplus.js",
    pluploadBase + "plupload.flash.js",
    pluploadBase + "plupload.gears.js",
    pluploadBase + "plupload.html4.js",
    pluploadBase + "plupload.html5.js",
    pluploadBase + "plupload.silverlight.js",
    pluploadBase + "jquery.ui.plupload/jquery.ui.plupload.js");

This looks fine, except what will actually happen is the jQuery-UI library will be rendered first when the bundle is actually used on the page. This is how the bundle is written out without optimisations turned on, when the bundle is configured exactly as above:

Luckily, in this case I want to just write out the scripts as they appear in my bundle without Asp.Net doing anything fancy to it. I can write a custom BundleOrderer by implementing the IBundleOrderer interface:

class PassthruBundleOrderer : IBundleOrderer
{
    public IEnumerable<BundleFile> OrderFiles(BundleContext context, IEnumerable<BundleFile> files)
    {
        return files;
    }
}

Simple: it just returns the same file list back to the caller without doing any ordering on it whatsoever. We can now apply this orderer to this one specific bundle that needs it, by setting the Orderer property, and our code becomes:

string pluploadBase = "/scripts/jquery/plupload/1.5.4/";
var pluploadBundle = new ScriptBundle("~/bundles/js/plupload").Include(
    pluploadBase + "plupload.full.js",
    pluploadBase + "plupload.browserplus.js",
    pluploadBase + "plupload.flash.js",
    pluploadBase + "plupload.gears.js",
    pluploadBase + "plupload.html4.js",
    pluploadBase + "plupload.html5.js",
    pluploadBase + "plupload.silverlight.js",
    pluploadBase + "jquery.ui.plupload/jquery.ui.plupload.js");
pluploadBundle.Orderer = new PassthruBundleOrderer();

And the order is now correct in the browser:

You can of course apply any ordering you like here, but at least it’s one way to break convention for one specific instance, should you need it!


Real-time system resource monitor with SignalR, WCF and KnockoutJS

If you’re looking for an article showing you how to build a real-time System Resource Monitor with Asp.Net MVC and SignalR, I’ve now updated this and created a new article. The new edition removes the need for WCF and instead makes use of Web Api. In addition, all the MVC and SignalR libraries have been updated and they actually work! Finally, all of the code is hosted on GitHub.

The new article is at: http://stevescodingblog.co.uk/real-time-system-resource-monitor-with-signalr-mvc-knockout-and-webapi/


Basic Authentication with Asp.Net WebAPI

On a recent project, I undertook the task of implementing a RESTful API using the new Asp.Net WebAPI framework. The aim was to support clients of all types, including a .Net desktop app and iOS and Android mobile apps. My API had to support some sort of authentication mechanism.

Since this was a basic application (to be used as a learning tool for the other developers on our team) we decided to use Basic HTTP Authentication. As the name suggests, it’s a simple protocol whereby the client sends an authorization token as a header in the HTTP request, and the server decodes that token to decide whether or not it is valid. If it is, the request continues, otherwise it (should) return a 401 Unauthorized response.

So how can we implement this with WebAPI? With an Action Filter, of course.

The Basic Authentication Action Filter

Start by creating a new class for your filter. This must inherit from System.Web.Http.Filters.ActionFilterAttribute, which lives in a different namespace from the one used for Asp.Net MVC Action Filters (System.Web.Mvc). Be careful to subclass the correct type and don’t get confused. Also, override the OnActionExecuting method:

public class BasicAuthenticationAttribute : System.Web.Http.Filters.ActionFilterAttribute
{
	public override void OnActionExecuting(System.Web.Http.Controllers.HttpActionContext actionContext)
	{
		// ...
	}
}

We will use this method to intercept the API call and check that everything is OK with the security side of things (well, as OK as Basic Auth can be!). First, let’s check we have an authorization header:

if (actionContext.Request.Headers.Authorization == null)
	actionContext.Response = new System.Net.Http.HttpResponseMessage(System.Net.HttpStatusCode.Unauthorized);

Simple stuff here. The fact that this action filter is executing implies that we want to protect the action that it attributes, and so if we don’t have a header, we’re not authorized.

If we have a header, let’s parse the value:

	string authToken = actionContext.Request.Headers.Authorization.Parameter;
	string decodedToken = Encoding.UTF8.GetString(Convert.FromBase64String(authToken));

	string username = decodedToken.Substring(0, decodedToken.IndexOf(":"));
	string password = decodedToken.Substring(decodedToken.IndexOf(":") + 1);

Just grab the header value, decode it from Base64 back to a string, and then split it. The value that is encoded would normally be username:password, but really, if this is a custom solution, you can make it anything you want as long as you’re in control of how the value is encoded and decoded (which here I am – I just decided to follow the standard).

Now that we have the username and password, it really is up to you as to how you use it. The normal thing would be to look up some database value and check if the user exists. In my case, I grab a couple of services to find the user that the credentials refer to, and I set the user into the current principal. I then defer execution to the base filter and allow the action to run:

IPasswordTransform transform = DependencyResolver.Current.GetService<IPasswordTransform>();
IRepository<User> userRepository = DependencyResolver.Current.GetService<IRepository<User>>();

User user = userRepository.All(u => u.Username == username && u.PasswordHash == transform.Transform(password)).SingleOrDefault();

if (user != null)
{
	HttpContext.Current.User = new GenericPrincipal(new ApiIdentity(user), new string[] { });
}


If the user wasn’t found, simply return a 401:

	actionContext.Response = new System.Net.Http.HttpResponseMessage(System.Net.HttpStatusCode.Unauthorized);

In the code above where I set HttpContext.Current.User, I am just using a custom type called ApiIdentity, which is an implementation of IIdentity and allows me to store a user entity against the username. For brevity, its implementation is:

using System;
using System.Security.Principal;

public class ApiIdentity : IIdentity
{
	public User User { get; private set; }

	public ApiIdentity(User user)
	{
		if (user == null)
		{
			throw new ArgumentNullException("user");
		}

		this.User = user;
	}

	public string Name
	{
		get { return this.User.Username; }
	}

	public string AuthenticationType
	{
		get { return "Basic"; }
	}

	public bool IsAuthenticated
	{
		get { return true; }
	}
}

Using the Basic Authentication action filter

To use this thing, just decorate any action or controller with [BasicAuthentication], and any requests to those actions will require that the Authorization header is sent:

// GET /api/accounts
[BasicAuthentication]
public IEnumerable<OwnerAccountDto> Get()
{
	var accounts = _accountsRepository.All(a => a.OwnerKey == AuthorizedUser.Guid).ToList();

	return Mapper.Map<IEnumerable<OwnerAccountDto>>(accounts);
}

I went one step further than this and created an AuthorizedApiController which already has this attribute on it. Furthermore, I added an accessor to get the actual user entity that was authorized when the request was made:

[BasicAuthentication]
public class AuthorizedApiController : ApiController
{
	public User AuthorizedUser { get { return ((ApiIdentity)HttpContext.Current.User.Identity).User; } }
}

A quick note on unit testing: having this property return something straight out of the current HttpContext instance mucks things up a little for unit testing, since there won’t be a context to look at in that scenario. The way to combat it is to either:

  • Create a provider model to retrieve the authenticated user through some IAuthenticatedUserProvider interface
  • Use a mocking framework such as Moq (also available through NuGet) to mock up a context just for this scenario

Now you have the tools to custom-build an authentication scheme for your Web Api. Happy authenticating!
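On the consuming side, a client just needs to send the Authorization header with each request. A minimal sketch using HttpClient (the URL and credentials here are made up):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

var client = new HttpClient();

// Basic Auth: Base64-encode "username:password" and send it as the header parameter.
string token = Convert.ToBase64String(Encoding.UTF8.GetBytes("alice:s3cret"));
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

// A protected action will now run; without the header it would respond with a 401.
HttpResponseMessage response = await client.GetAsync("https://example.com/api/accounts");
```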

Read More

Fun with action filters

I was fortunate enough to be able to attend the brilliant DevWeek developer’s conference in London this year, and even more lucky to attend a lecture by Dino Esposito (please check out his brilliant Architecting Applications for the Enterprise book) on Asp.Net Action Filters. The purpose of his session was to demonstrate the importance of Action Filters and how we should all be using them much more than we normally do. I will be the first to admit that action filters are not the first possible solution that comes to mind when trying to solve a particular architectural problem in my Asp.Net MVC applications.

I must say though, I was inspired. Yes, I realise I had previously posted about one particular useful action filter, but I really haven’t done too much with them until now. Dino put forth some uses for them, which I believe are discussed in more detail in his MVC book:

  • Using an action filter to figure out which button was pressed (on a multi-button form) and compartmentalise the resulting code path
  • An action filter which automatically populates generic data on a view model (imagine a list of countries or some other static data required by your view)

We’ve all had some grief with the first scenario at some stage or another. It’s just not quite as easy to do as one would expect. The second allows you to abstract away some data population code and generally keep things a bit tidier than they otherwise would.

Today I created another action filter which takes a CSV list of data, parses it, and gives you a strongly-typed list of values. It looks like this:

[SplitString(Parameter="contentItemKeys", Delimiter=",")]
public virtual ActionResult GetItemInfo(IEnumerable<Guid> contentItemKeys)

So imagine that I have POSTed a comma-delimited list of GUIDs to this action. Normally, if there is one GUID then Asp.Net MVC should be able to resolve that properly and give you a list with one thing in it. However, if you have more than one GUID in that comma-delimited list, then you will have an empty list given to you. Why? Because the framework doesn’t know how to parse that list properly.

You could use a custom model binder to achieve the desired effect, but creating an action filter to do the same thing is much neater and much more flexible.

I’ve created an action filter called ‘SplitString’ and it works like this:

  • The filter accepts a couple of arguments: the parameter you want to act on, and the delimiter to use.
  • It overrides the OnActionExecuting method and looks for the specified parameter, first in the routing data, then in request data.
  • It then finds the type that each item should be, using a little reflection.
  • It then parses the list, converts each item in the parsed string to the desired type, and spits out the full list.

First, the class definition:

public class SplitStringAttribute : ActionFilterAttribute
{
    public string Parameter { get; set; }
    public string Delimiter { get; set; }

    public SplitStringAttribute()
    {
        Delimiter = ",";
    }
}

Inside OnActionExecuting, let’s find the value we need to work with:

string value = null;
HttpRequestBase request = filterContext.RequestContext.HttpContext.Request;

if (filterContext.RouteData.Values.ContainsKey(this.Parameter)
	&& filterContext.RouteData.Values[this.Parameter] is string)
{
	value = (string)filterContext.RouteData.Values[this.Parameter];
}
else if (request[this.Parameter] is string)
{
	value = request[this.Parameter] as string;
}

Next we need to find the type to convert to. Specifically, we need to find the type of the generic argument which forms the type of the parameter we’re interested in. So if our parameter is IEnumerable<T>, I want to know what type T is. I’ve wrapped this up in a method:

Type listArgType = GetParameterEnumerableType(filterContext);


private Type GetParameterEnumerableType(ActionExecutingContext filterContext)
{
	var param = filterContext.ActionParameters[this.Parameter];
	Type paramType = param.GetType();
	Type interfaceType = paramType.GetInterface(typeof(IEnumerable<>).FullName);
	Type listArgType = null;

	if (interfaceType != null)
	{
		var genericParams = interfaceType.GetGenericArguments();
		if (genericParams.Length == 1)
		{
			listArgType = genericParams[0];
		}
	}

	return listArgType;
}

Here we simply:

  • Find the type of the parameter using the filterContext
  • Check to see if the type is IEnumerable<>. We do this simply by getting the interface and checking if it is not null
  • Finally we get any generic arguments, check that there is exactly one, and then return that type. This is the type that we will convert each item in our CSV list to.
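Those three steps can be tried outside of the filter, too. A minimal sketch, faking the action parameter with a plain List&lt;Guid&gt;:

```csharp
using System;
using System.Collections.Generic;

object param = new List<Guid>(); // stand-in for filterContext.ActionParameters[this.Parameter]
Type paramType = param.GetType();
Type interfaceType = paramType.GetInterface(typeof(IEnumerable<>).FullName);

if (interfaceType != null)
{
    Type[] genericParams = interfaceType.GetGenericArguments();
    Console.WriteLine(genericParams[0]); // System.Guid
}
```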

Next, we process our CSV string and create our container list:

string[] values = value.Split(Delimiter.ToCharArray(), StringSplitOptions.RemoveEmptyEntries);

Type listType = typeof(List<>).MakeGenericType(listArgType);
dynamic list = Activator.CreateInstance(listType);

We just split the string according to the delimiter that we need to use, then create a new generic list of the type we procured earlier. I’ve used a dynamic variable here so that we can call Add directly, rather than going through reflection, which keeps the code much easier to work with.
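As a standalone sketch of that construction step (with Guid picked as the element type purely for illustration):

```csharp
using System;
using System.Collections.Generic;

Type listArgType = typeof(Guid); // as discovered by the reflection step
Type listType = typeof(List<>).MakeGenericType(listArgType);
dynamic list = Activator.CreateInstance(listType);

// The dynamic variable lets us call Add without any further reflection.
list.Add(Guid.NewGuid());
Console.WriteLine(list.GetType()); // System.Collections.Generic.List`1[System.Guid]
Console.WriteLine(list.Count);     // 1
```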

Next, we run through each value in our CSV list and add it to this new generic list:

foreach (var item in values)
{
	try
	{
		dynamic convertedValue = TypeDescriptor.GetConverter(listArgType).ConvertFromInvariantString(item);
		list.Add(convertedValue);
	}
	catch (Exception ex)
	{
		throw new ApplicationException(string.Format("Could not convert split string value to '{0}'", listArgType.FullName), ex);
	}
}

The real magic here is the type converter. We can simply pass it a type and an item to convert and it will just do it for you, in a nice generic way. This means you don’t have to manually support a known list of types – let the framework handle that for you!
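To see the converter doing its thing in isolation, here is a quick sketch (the sample values are invented):

```csharp
using System;
using System.ComponentModel;

// TypeDescriptor picks the right converter for each target type.
int number = (int)TypeDescriptor.GetConverter(typeof(int)).ConvertFromInvariantString("42");
Guid id = (Guid)TypeDescriptor.GetConverter(typeof(Guid)).ConvertFromInvariantString("d3b07384-d9a0-4c9a-8f3a-1f4e5a6b7c8d");

Console.WriteLine(number); // 42
Console.WriteLine(id);     // d3b07384-d9a0-4c9a-8f3a-1f4e5a6b7c8d
```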

Finally, and the real cherry on top, is that to make all this work, we simply substitute the original action parameter value with this new list that we’ve just created:

filterContext.ActionParameters[this.Parameter] = list;

The result is that your action parameter will now be populated correctly with the parsed list of values, in a strongly-typed fashion.

Download the full code file if you wish to inspect it further, or to use as you like.

Read More