An introduction to PowerShell Cmdlets

From time to time, I write a useful piece of code without any UI to interact with it. Typical examples are utility libraries, common in many code bases. One way to invoke such code, without an entire application around it, is through unit tests. In addition to unit tests, I often wish I had a way to invoke code directly and on-demand, especially when doing exploratory testing.

A lightweight approach to this task is to expose PowerShell Cmdlets from your library. PowerShell is available for all Windows versions since XP, and version 4 (which this post is based on) is available for Windows 7 and newer.

The starting point for this guide is a simple Messenger class, with two methods:

public static class Messenger
{
    public static void Configure(string configFile)
    {
        // ...
    }

    public static void SendMessage(string receiver, string message)
    {
        // ...
    }
}

The API requires you to pass a configuration file before sending messages. Before we can add any PowerShell Cmdlets, we must reference the System.Management.Automation assembly. The easiest way to accomplish this is through NuGet:

Install-Package System.Management.Automation

Our first task will be to wrap the Configure method in a Cmdlet. To expose a Cmdlet, simply create a class that inherits Cmdlet and decorate it with the CmdletAttribute.

using System.Management.Automation;

[Cmdlet(VerbsCommon.Set, "MessengerConfiguration")]
public class SetMessengerConfigurationCommand : Cmdlet
{
    [Parameter(Mandatory = true, Position = 0)]
    public string ConfigurationFile { get; set; }

    protected override void ProcessRecord()
    {
        Messenger.Configure(ConfigurationFile);
    }
}

The parameters to CmdletAttribute dictate how the Cmdlet is invoked from PowerShell, in this case: Set-MessengerConfiguration. The ParameterAttribute on a property adds it as a command line parameter. In our case, it is a mandatory parameter, and we also specify that it is positional, allowing us to pass the argument immediately after the command (position 0) without specifying the parameter name.

Finally, we override the ProcessRecord method to handle the actual invocation.
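To make the effect of Mandatory and Position concrete, here is a sketch of the two equivalent invocation styles (assuming the assembly containing the Cmdlet above has already been imported, as shown further below):

```powershell
# Both invocations are equivalent; the second relies on Position = 0
Set-MessengerConfiguration -ConfigurationFile .\messenger.config
Set-MessengerConfiguration .\messenger.config
```

Omitting the parameter entirely would cause PowerShell to prompt for it, since it is marked as mandatory.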

With the Configure method wrapped, it is time to tackle the SendMessage method. This will require us to handle multiple parameters:

using System.Management.Automation;

[Cmdlet(VerbsCommunications.Send, "Message")]
public class SendMessageCommand : Cmdlet
{
    [Parameter(Mandatory = true, Position = 0)]
    public string Message { get; set; }

    [Parameter(Mandatory = true, Position = 1, ValueFromPipeline = true)]
    public string Receiver { get; set; }

    protected override void ProcessRecord()
    {
        Messenger.SendMessage(Receiver, Message);
    }
}

This Cmdlet, invoked with the command Send-Message, is very similar to the previous one. The key differences are that we now have two parameters, and that one ParameterAttribute specifies ValueFromPipeline = true. The latter means that any value piped to the Cmdlet is stored in the Receiver property. This facilitates scripting, as we will see later.

But first, let us take a look at how we can load and invoke our two Cmdlets. Assuming our library resides in an assembly named Messenger.dll, we load the Cmdlets with the Import-Module command.

Import-Module .\Messenger.dll
Set-MessengerConfiguration .\messenger.config
Send-Message "Test message" "Some receiver"

Now, let us look at how we can exploit the ValueFromPipeline specified on the Receiver parameter. Imagine a text file named receivers.txt with multiple receivers, one on each line:

Get-Content .\receivers.txt | Send-Message "Some alert...."

In this post, I have demonstrated how to wrap a simple API in Cmdlets, exposing a command line interface. The reasons for wanting a CLI can vary, but I tend to use it in exploratory testing. The fact that your API suddenly becomes available to scripting further enhances its value.

Code Generation – a Taboo?

I spend most of my days developing object oriented .NET solutions, doing my best to adhere to best practices like the SOLID and DRY principles. Every once in a while, though, I find myself writing repetitive code. Not the kind of code you write in a hurry because of a tight schedule, but repetitive code enforced by the framework or other external conditions.

Enforced Redundancy

One example is custom Exception classes. The interesting bits of a custom Exception class are really only the class name, the base class and any additional data associated with it. Nevertheless, I must always remember to define a handful of constructors and make sure the class is serializable. The result is a collection of classes that follow a redundant pattern of boilerplate code, just because my programming language does not support generalization of this kind of redundancy.
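As an illustration of the pattern described above, a typical hand-written custom exception looks something like this (the class name is hypothetical; only the name and base class vary between such classes):

```csharp
using System;
using System.Runtime.Serialization;

// Boilerplate-heavy custom exception: four constructors and a
// [Serializable] attribute, repeated for every exception type.
[Serializable]
public class ConnectionLostException : Exception
{
    public ConnectionLostException() : base() { }
    public ConnectionLostException(string message) : base(message) { }
    public ConnectionLostException(string message, Exception inner) : base(message, inner) { }
    protected ConnectionLostException(SerializationInfo info, StreamingContext context) : base(info, context) { }
}
```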

To avoid having to write this code by hand each and every time, it is tempting to define a code snippet in Visual Studio that generates the skeleton for an Exception class. Then, I would only have to fill in the custom bits like the class name and base class. Problem solved! Or?

What if I make a change to my code snippet? Maybe I want a different formatting of the code, or I want to override a method. These changes would naturally not propagate to the code generated with my old snippet. To avoid inconsistency, I now face a tedious task of updating all the existing code, crossing my fingers that further changes will not be required.

What if changes to the snippet template could automatically update all previously generated code…

Code Generation

This is where code generation enters the picture. Since the DRY principle is about maintainability, it only applies to code that has to be maintained. As long as the template itself adheres to the DRY principle, it does not really matter if the generated code is repetitive.

For .NET developers, T4 is the most accessible tool for code generation. T4 is short for Text Template Transformation Toolkit and is built into Visual Studio. It allows me to define some source data and a template which together produce a text file, typically a source code file. The resulting file is added to the project as a sub-item of the template. Any changes to the T4 template will regenerate the entire output file.


Let us revisit the issue with custom Exception classes from our new point of view. With T4, I can simply create a template which defines which classes I want and how I want them generated. Such a T4 template can look like this (the portion between the dashed separator comments is the part you typically maintain):

<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#
var exceptions = new []
{
	DefineException("Message"),
	DefineException("BadResponse").DerivedFrom("Message"),
	DefineException("InvalidState")
};
//----------------------------------------------------------------------------------
#>
using System;
using System.Runtime.Serialization;

namespace MyNamespace
{
<# foreach(var exception in exceptions) { #>
	[Serializable]
	public partial class <#= exception.ClassName #> : <#= exception.BaseClassName #>
	{
		public <#= exception.ClassName #> () : base () {}
		public <#= exception.ClassName #> (string message) : base (message) {}
		public <#= exception.ClassName #> (string message, Exception inner) : base (message, inner) {}
		protected <#= exception.ClassName #> (SerializationInfo info, StreamingContext context) : base (info, context) {}
	}

<# } #>
}
<#+ 
//----------------------------------------------------------------------------------
ExceptionDefinition DefineException(string name)
{
	return new ExceptionDefinition { Name = name, BaseName = "" };
}

class ExceptionDefinition
{
	public string Name;
	public string BaseName;

	public string ClassName { get { return Name + "Exception"; } }
	public string BaseClassName { get { return BaseName + "Exception"; } }

	public ExceptionDefinition DerivedFrom(string baseName) { BaseName = baseName; return this; }
}
#>

The code generated from this template looks like this:

using System;
using System.Runtime.Serialization;

namespace MyNamespace
{
	[Serializable]
	public partial class MessageException : Exception
	{
		public MessageException () : base () {}
		public MessageException (string message) : base (message) {}
		public MessageException (string message, Exception inner) : base (message, inner) {}
		protected MessageException (SerializationInfo info, StreamingContext context) : base (info, context) {}
	}

	[Serializable]
	public partial class BadResponseException : MessageException
	{
		public BadResponseException () : base () {}
		public BadResponseException (string message) : base (message) {}
		public BadResponseException (string message, Exception inner) : base (message, inner) {}
		protected BadResponseException (SerializationInfo info, StreamingContext context) : base (info, context) {}
	}

	[Serializable]
	public partial class InvalidStateException : Exception
	{
		public InvalidStateException () : base () {}
		public InvalidStateException (string message) : base (message) {}
		public InvalidStateException (string message, Exception inner) : base (message, inner) {}
		protected InvalidStateException (SerializationInfo info, StreamingContext context) : base (info, context) {}
	}

}

Notice that I make use of partial classes from C#. Remember that Visual Studio regenerates the code whenever the template is touched. Hence, we need a way of augmenting the generated types without modifying the generated file:

namespace MyNamespace
{
	public partial class InvalidStateException
	{
		public int StatusCode { get; set; }
	}
}

Code generation has an undeservedly bad reputation, mainly due to many examples of abuse. Don't get me wrong: code generation must not become your golden hammer. Used right, however, it can drastically improve the maintainability of a code base. It can also make debugging and troubleshooting easier, as generated code typically has fewer abstractions.

The best developers are those who manage to approach problems from multiple angles, looking for the best solution. Next time you want to create a code snippet, consider if code generation might be a suitable solution.


Picking the Right Tool for the Job

When your only tool is a hammer, every problem looks like a nail.
– Abraham Maslow

I recently organized a coding dojo where we solved the bowling kata. In short, the bowling kata is about programming a score-keeper for a game of ten-pin bowling. At any given time during the game, the score-keeper must be able to yield the current score for all players. Additionally, the program must be able to tell which player is the current player, in order to assign scores correctly.

I began solving the kata in my programming language of choice, C#. The solution naturally converged towards an imperative state machine, incrementing scores as the game progressed. This led to entangled code with many special cases, struggling to track arbitrary strikes and spares.

Then I realized that the problem is in fact two-fold. One part of the problem is to keep track of which player knocks over which pins, while the other part is the actual calculation of the scores. Given a sequence of the number of pins knocked over by each ball, the score can be calculated by a relatively simple function. At this point, I reached for my .NET toolbox and picked the tool best suited for writing functional code: F#.

module BowlingCalculator

[<CompiledNameAttribute("CalculateScore")>]
let calcScore pins =

    let rec calcScore pins frame =

        match pins with

        // Special last frame: no frames follow, so simply sum the remaining balls.
        // This pattern must come before the strike pattern, or a strike in the
        // tenth frame would have its bonus balls counted twice.
        | x :: y :: z :: [] when frame = 10 -> x + y + z

        // Strike with determined bonus
        | 10 :: y :: z :: rest -> 10 + y + z + calcScore (y :: z :: rest) (frame + 1)

        // Strike without determined bonus
        | 10 :: y :: [] -> 0

        // Spare with determined bonus
        | x :: y :: z :: rest when x + y = 10 -> 10 + z + calcScore (z :: rest) (frame + 1)

        // Spare without determined bonus
        | x :: y :: [] when x + y = 10 -> 0

        // Open frame
        | x :: y :: rest -> x + y + calcScore rest (frame + 1)

        // Otherwise (frame incomplete, or game over)
        | _ -> 0

    calcScore pins 1

If you are familiar with functional programming and pattern matching, the code above should be pretty obvious. I will not go into much depth explaining it, but suffice it to say that it is a recursive function traversing the list of pins knocked over, aggregating the score as it goes.
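To make the recursion concrete, here is a short worked trace (my own example, not from the original kata) for a game that opens with a strike followed by a 3 and a 4:

```fsharp
// calcScore [10; 3; 4] 1
//   strike with determined bonus: 10 + 3 + 4 + calcScore [3; 4] 2
//     calcScore [3; 4] 2
//       open frame: 3 + 4 + calcScore [] 3
//         calcScore [] 3 -> 0   (no more balls thrown yet)
// Total so far: 17 + 7 = 24
```

The same pins contribute twice when they act as a strike bonus, exactly as the rules of bowling prescribe.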

The rest of the program, responsible for keeping track of state, was kept in C#. After adding a reference to the F# module, calling into the calculating function is as simple as:

public class Player
{
    private readonly List<int> pinsKnockedOver;
    
    // snip...
    
    public int CalculateScore()
    {
        var pins = ListModule.OfSeq(pinsKnockedOver);
        return BowlingCalculator.CalculateScore(pins);
    }
}

C# and F# both being first-class .NET citizens, interoperability between them is a breeze. The only hitch at this point was that my F# function required an F# list as its argument, while the Player class uses a regular List&lt;T&gt; to keep track of the pins knocked over. ListModule.OfSeq() converts any IEnumerable&lt;T&gt; into an F# list, solving that problem with ease.

The complete source code is available on GitHub at https://github.com/tormodfj/katas/tree/master/mixed/Bowling.

In my opinion, this solution takes the best from two worlds, using the imperative C# for state tracking and the functional F# for calculations. Learning the functional paradigm is like acquiring a new tool in your toolbox, enabling you to view problems from other points of view.

Converting an IList<T> to an FSharpList<T>

When calling F# functions from other .NET languages, you may encounter situations where you need to pass parameters of type 'T list. F# lists are immutable linked lists, appearing as the type FSharpList<T> in other .NET languages. Hence, passing a typical IList<T> is not possible. Luckily, converting an IList<T> to an FSharpList<T> is easily accomplished by recursively calling FSharpList<T>.Cons, passing each element of the source list. I keep the following code around for those occasions:

public static class Interop
{
	public static FSharpList<T> ToFSharpList<T>(this IList<T> input)
	{
		return CreateFSharpList(input, 0);
	}

	private static FSharpList<T> CreateFSharpList<T>(IList<T> input, int index)
	{
		if(index >= input.Count)
		{
			return FSharpList<T>.Empty;
		}
		else
		{
			return FSharpList<T>.Cons(input[index], CreateFSharpList(input, index + 1));
		}
	}
}

Note how F# lists are terminated using FSharpList<T>.Empty. Using this piece of code is as simple as:

var list = new List<int> { 1, 2, 3, 4 };
var fsharpList = list.ToFSharpList();

Update: @rickasaurus made me aware of the List.ofSeq<'T> function in the F# core library. This function solves the same issue. And, unlike my solution, its implementation is not prone to stack overflows when the input list grows large. In C#, this function is called like this:

var list = new List<int> { 1, 2, 3, 4 };
var fsharpList = ListModule.OfSeq(list);

Simple but Useful Extension Methods

In my previous post, I gave a fairly quick introduction to extension methods in C#. This post will present two examples to illustrate how readability can be improved by means of very simple extension methods.

One of the most common checks you perform on a string is whether it has any value. The string type has a static IsNullOrEmpty method intended for this purpose. The reason this method is static is that it could never check for null if it were an instance method; rather, it would throw a NullReferenceException. Consider this extension method, however.

public static class Extensions
{
	public static bool IsNullOrEmpty(this string value)
	{
		return string.IsNullOrEmpty(value);
	}
}

Being static, this method can be invoked even when value is null. But because it is defined as an extension method, you can invoke it using instance method syntax, improving readability.

string foo = null;
if(foo.IsNullOrEmpty())
{
	// Do something
}

Another common scenario is parsing string values into corresponding enumeration values. Again, .NET provides a static method for this purpose. The Enum type has a static Parse method which takes a Type parameter and a string parameter, and returns an object which then has to be cast to the specified type.

string day = "Monday";
DayOfWeek dayOfWeek = (DayOfWeek)Enum.Parse(typeof(DayOfWeek), day);

The signal-to-noise ratio of that second line of code is rather poor. Consider the following generic extension method.

public static class Extensions
{
	public static T ToEnum<T>(this string value)
	{
		return (T)Enum.Parse(typeof(T), value);
	}
}

Notice how this method does exactly the same as the concrete DayOfWeek example above. With this extension method in place, however, each parse operation can now be reduced to the following.

string day = "Monday";
DayOfWeek dayOfWeek = day.ToEnum<DayOfWeek>();

Again, the major benefit is readability.
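If you would rather avoid the exception that Enum.Parse throws for unknown values, a hypothetical companion method (my own sketch, not part of the examples above) can build on Enum.TryParse, available since .NET 4:

```csharp
using System;

public static class SafeEnumExtensions
{
    // Hypothetical variant of ToEnum<T>: returns null instead of throwing
    // when the string does not match any enumeration value.
    public static T? ToEnumOrNull<T>(this string value) where T : struct
    {
        T result;
        return Enum.TryParse(value, out result) ? (T?)result : null;
    }
}
```

With this in place, "Monday".ToEnumOrNull&lt;DayOfWeek&gt;() yields DayOfWeek.Monday, while a misspelled day simply yields null.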

The examples in this post are extremely simple, but they illustrate how easily you can improve readability by wrapping existing functionality in reasonably named extension methods. For more handy extension methods, I recommend browsing through this StackOverflow thread:
http://stackoverflow.com/questions/271398/post-your-extension-goodies-for-c-net