Messages published in 2013

The power of IEnumerable&lt;T&gt;

When working with collections of data to which a set of rules, filters or transformations has to be applied, I often see implementations that construct one list after another to hold data between the workflow steps. Such solutions can be inelegant, make code hard to read and consume unnecessary memory. These issues can easily be addressed with the help of the IEnumerable&lt;T&gt; interface and extension methods.

First, imagine a scenario in which we load data from an external source, let's say a CSV file provided by a customer. The data can be expressed by the following entity:

public class Entity
{
    public int Id { get; set; }
    public int CategoryId { get; set; }
    public int UserId { get; set; }
    public DateTime Date { get; set; }
    public string Name { get; set; }
    public decimal Amount { get; set; }
}

Now, before we can enter it into the system, we need to normalise the value of the Name property. For this task we use an implementation of INameCanonicalisator. We also have to apply tax to the Amount; this calculation is done by an implementation of IAmountTaxCalculator. Below are the definitions of those interfaces:
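The interface listings did not survive in this excerpt, so the following is only a plausible sketch: single-method contracts matching the names in the text (the member signatures are my assumption, not the article's), together with one way the two steps can compose into a lazy IEnumerable&lt;T&gt; pipeline.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Assumed definitions for the two interfaces named above; the
// original listing is missing from this excerpt.
public interface INameCanonicalisator
{
    string Canonicalise(string name);
}

public interface IAmountTaxCalculator
{
    decimal ApplyTax(decimal amount);
}

// Entity is defined earlier in the article; repeated here trimmed to
// the two properties this sketch touches, so it compiles on its own.
public class Entity
{
    public string Name { get; set; }
    public decimal Amount { get; set; }
}

// Tiny illustrative implementations, just so the pipeline can be run.
public class TrimCanonicalisator : INameCanonicalisator
{
    public string Canonicalise(string name) { return name.Trim(); }
}

public class FlatRateTaxCalculator : IAmountTaxCalculator
{
    private readonly decimal _rate;
    public FlatRateTaxCalculator(decimal rate) { _rate = rate; }
    public decimal ApplyTax(decimal amount) { return amount * (1 + _rate); }
}

public static class EntityPipeline
{
    // Both steps are composed into a single lazy pipeline: no
    // intermediate list is allocated between the workflow steps,
    // and nothing runs until the result is enumerated.
    public static IEnumerable<Entity> Normalise(
        this IEnumerable<Entity> entities,
        INameCanonicalisator canonicalisator,
        IAmountTaxCalculator taxCalculator)
    {
        return entities.Select(e =>
        {
            e.Name = canonicalisator.Canonicalise(e.Name);
            e.Amount = taxCalculator.ApplyTax(e.Amount);
            return e;
        });
    }
}
```

Because `Select` defers execution, chaining further filters or transformations onto the result adds steps to the same pipeline without materialising anything in between.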

Arranging mocks using DSL

One of the biggest problems with unit tests is poor readability. Bad naming conventions, long methods, and hard-to-understand Arrange and Assert parts make unit tests some of the hardest code to read and refactor. In a previous article, Unit Tests as code specification, I presented a way to increase the readability of test method names and use them to create a code specification. Now I would like to tackle the problem of unreadable test methods.

Most unit test methods start with the test arrangement. It usually takes the form of setting up mocks and initialising local variables. It’s not uncommon to start a test with code similar to the one below:

_testService.Expect(i => i.Foo()).Throw(new WebException("The operation has timed out")).Repeat.Once();
_testService.Expect(i => i.Foo()).Return(9).Repeat.Once();
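The article goes on to hide arrangement like this behind a small DSL. As a self-contained illustration of the idea, here is one hypothetical shape it might take — all names below are mine, and a hand-rolled fake stands in for the mocking framework, with TimeoutException substituted for the WebException above:

```csharp
using System;
using System.Collections.Generic;

public interface ITestService
{
    int Foo();
}

// A hand-rolled fake whose setup reads like a sentence. Each fluent
// call enqueues the behaviour of one expected invocation of Foo().
public class TestServiceArrangement : ITestService
{
    private readonly Queue<Func<int>> _calls = new Queue<Func<int>>();

    public TestServiceArrangement FirstCallTimesOut()
    {
        _calls.Enqueue(() => { throw new TimeoutException("The operation has timed out"); });
        return this;
    }

    public TestServiceArrangement ThenReturns(int value)
    {
        _calls.Enqueue(() => value);
        return this;
    }

    public int Foo()
    {
        // Replay the arranged behaviours in order, one per call.
        return _calls.Dequeue()();
    }
}
```

The arrangement from the snippet above then collapses to a single readable line: `var service = new TestServiceArrangement().FirstCallTimesOut().ThenReturns(9);`.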

Unit Tests as code specification

When asking people what the purpose of writing unit tests is, we usually get the following answer:

“To verify that the code actually does what it is supposed to do.”

Among other responses we will find that unit tests help validate that changes are not breaking existing functionality (regression), or that practising TDD guides the design. But are those the only purposes? There is more. Because unit tests execute our code, they can show how it works. We can use them as a specification of the code. Well-crafted tests, with explanatory names and easy-to-read bodies, create a living specification of the module which is always up to date.

Whenever we need to analyse a class, whether because we are new to it or because we are coming back to it, we can use the reports from unit tests to understand how the class works and what its contract is.

To build a specification from unit tests, we need to keep them organised and apply a proper naming convention.
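The convention the article itself recommends is not shown in this excerpt; as an illustration only, one widely used scheme is MethodUnderTest_Scenario_ExpectedBehaviour, with one fixture per class under test, so a test runner's report reads like a specification:

```csharp
// Illustrative naming only; the [Test]/[TestFixture] attributes of a
// test framework such as NUnit are omitted so the skeleton stands alone.
// A runner report over these names reads as:
//   OrderProcessor
//     Process - amount is negative - throws ArgumentException
//     Process - order is valid - accepts order
public class OrderProcessorTests
{
    public void Process_AmountIsNegative_ThrowsArgumentException() { /* ... */ }

    public void Process_OrderIsValid_AcceptsOrder() { /* ... */ }
}
```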

Using policies to handle exceptions while calling external services

Exception handling very easily gets ugly. A typical try...catch block clutters the method and grows with every newly discovered exception. Then bits of code are copied between methods that require the same error handling. Adding any new logic to the error handling becomes a nightmare, and with each new release it seems the same errors come back.

Policies for handling exceptions

To overcome those problems we can extract the logic related to exception handling into separate objects – policies. This keeps the main business logic clean, allows reuse and makes testing easy.

Here’s the definition for a recoverable policy:

public interface IRecoverablePolicy&lt;TResult&gt;
{
    TResult Execute(Func&lt;TResult&gt; operation);
}

One example of a recoverable policy is handling transient exceptions. Usually they require retrying the method call after a small pause.
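A minimal sketch of such a retry policy might look like the following. The attempt count, the pause, and the choice of TimeoutException as the "transient" failure are my assumptions; the article may make different choices.

```csharp
using System;
using System.Threading;

// Repeated from the definition above so this sketch compiles on its own.
public interface IRecoverablePolicy<TResult>
{
    TResult Execute(Func<TResult> operation);
}

// Retries the operation on a transient failure, pausing between
// attempts, and rethrows once the attempts are exhausted.
public class RetryPolicy<TResult> : IRecoverablePolicy<TResult>
{
    private readonly int _maxAttempts;
    private readonly TimeSpan _pause;

    public RetryPolicy(int maxAttempts, TimeSpan pause)
    {
        _maxAttempts = maxAttempts;
        _pause = pause;
    }

    public TResult Execute(Func<TResult> operation)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (TimeoutException)
            {
                // Give up once the attempts are exhausted; otherwise
                // pause briefly and retry.
                if (attempt >= _maxAttempts)
                    throw;
                Thread.Sleep(_pause);
            }
        }
    }
}
```

The calling code then wraps the external-service call in the policy, e.g. `var result = policy.Execute(() => client.GetData());` (where `client.GetData()` stands for whatever call is being protected), keeping the retry logic out of the business method entirely.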