The Lazy Nature of Sequences

One of the great things about sequences is their lazy nature. As discussed in previous posts, sequences in .NET are represented by IEnumerable<T> objects, in the same manner as lists are represented by IList<T> objects. Unlike lists, however, a sequence does not have to be readily available for an IEnumerable<T> object to be created. Your sequence reference is merely a handle to a generator which promises to enumerate the sequence when requested. Let me illustrate this with a code example.

public static IEnumerable<string> FindCommonItems(
    IEnumerable<string> sequence1,
    IEnumerable<string> sequence2)
{
    Console.WriteLine("Iteration starting");
    foreach (string string1 in sequence1)
    {
        foreach (string string2 in sequence2)
        {
            if (string1 == string2)
            {
                Console.WriteLine("Common item found: " + string1);
                yield return string1;
            }
        }
    }
    Console.WriteLine("Iteration completed");
}

This method simply enumerates two string sequences and returns a sequence of all items which appear in both of them. To make it visible when items are found equal and consequently added to the result sequence, a few Console.WriteLine statements have been included. The lazy nature of sequences implies that calling this FindCommonItems method returns a reference to an IEnumerable<string> object without any iterations or comparisons being made. To verify this, consider the following code example.

static void Main(string[] args)
{
    IEnumerable<string> result = FindCommonItems(
        new string[] { "One", "Two", "Three", "Four" },
        new string[] { "One", "Three", "Four", "Six", "Seven" });

    Console.ReadKey();
}

If you run the code above, you will notice that nothing is written to the console. Not a single line of the FindCommonItems method is run. The IEnumerable<string> variable simply holds a reference to a generator which will provide the sequence if it is ever needed. In the code above, the sequence is never enumerated. Hence, its values are not needed, and no CPU cycles are wasted on iterating the arrays and comparing their items. Consider the following code, however.

static void Main(string[] args)
{
    IEnumerable<string> result = FindCommonItems(
        new string[] { "One", "Two", "Three", "Four" },
        new string[] { "One", "Three", "Four", "Six", "Seven" });

    foreach (string item in result)
    {
        Console.WriteLine(item);
        if (Console.ReadKey().Key == ConsoleKey.Escape)
        {
            break;
        }
    }

    Console.ReadKey();
}

This time, we enumerate the result. Note that the user is free to cancel the enumeration at any time by pressing the Escape key. If the enumeration is canceled, no further iterations or comparisons between the input arrays are made. Again, no CPU cycles are wasted on performing calculations which are not needed.

To further illustrate the benefit of laziness, consider the following method.

public static IEnumerable<string> SkipShortStrings(
    IEnumerable<string> inputSequence)
{
    foreach (string input in inputSequence)
    {
        if (input.Length > 3)
        {
            yield return input;
        }
    }
}

The SkipShortStrings method shown above takes a sequence of string objects as input and produces a sequence of those string objects from the input sequence which are more than three characters long.

The sequence generator produced by this method can be combined with the sequence generator produced by the FindCommonItems method, chaining their operations as follows.

static void Main(string[] args)
{
    IEnumerable<string> result = SkipShortStrings(FindCommonItems(
        new string[] { "One", "Two", "Three", "Four" },
        new string[] { "One", "Three", "Four", "Six", "Seven" }));

    foreach (string item in result)
    {
        Console.WriteLine(item);
        break;
    }
    Console.ReadKey();
}

When running this program, the calculation of the first result from SkipShortStrings only involves fetching two elements from the sequence produced by FindCommonItems (the first element being discarded as too short). An illustration of the execution of this program is shown below. Once again, no items become part of the result until they are actually requested.

[Illustration: Chaining Sequence Generators]
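
Tracing the program by hand, the console output looks roughly like this; only two items are fetched from FindCommonItems before the loop breaks, and since the enumeration is abandoned, the "Iteration completed" line is never printed.

Iteration starting
Common item found: One
Common item found: Three
Three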

The easiest way for any .NET 3.5 developer to take advantage of the lazy nature of sequences is to start using the static methods in the class System.Linq.Enumerable, collectively known as LINQ to Objects. These are extension methods which operate on sequences and consequently benefit from their laziness. Consider this simple example.

static void Main(string[] args)
{
    IEnumerable<string> result =
        (new[] { "One", "Two", "Three", "Four" })
        .Where(item => item.StartsWith("T"))
        .Select(item => item.ToUpper());

    foreach (string item in result)
    {
        Console.WriteLine(item);
    }
    Console.ReadKey();
}

This code uses LINQ to obtain a sequence of the uppercase variants of all input strings which start with a ‘T’. Before the foreach loop, LINQ performs no iterations or calculations on the string array; that only happens when the result sequence is enumerated.
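
To see why no work happens up front, here is a minimal sketch of how a Where-style operator could be written as an iterator block. This is an illustrative stand-in rather than the actual implementation of Enumerable.Where; the class and method names are made up, and the usual System and System.Collections.Generic using directives are assumed.

public static class SequenceExtensions
{
    // A lazily evaluated filter in the spirit of Enumerable.Where.
    public static IEnumerable<T> WhereLazy<T>(
        this IEnumerable<T> source, Func<T, bool> predicate)
    {
        // The predicate only runs while the caller is enumerating the result.
        foreach (T item in source)
        {
            if (predicate(item))
            {
                yield return item;
            }
        }
    }
}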

As shown in this post, taking advantage of the lazy nature of sequences can lead to more efficient program execution. In many situations, it will prevent your code from making unnecessary calculations. Also, when chaining several sequence operations, the first result can be produced much sooner than if each operation had to run through an entire collection before passing its result to the next operation.

Lists vs. Sequences: Concatenation

In my previous post, I presented some of the benefits of thinking in sequences rather than lists. This post discusses a concrete example: concatenating multiple series of elements.

The case is as follows. You have several series of elements and you would like to aggregate all the contained elements into one concatenated series. This resulting series is for iteration only and does not have to be mutable.

To solve the task using lists, one could write a method like this:

public static IList<T> ConcatenateLists<T>(params IList<T>[] lists)
{
	List<T> retList = new List<T>();
	foreach (var list in lists)
	{
		retList.AddRange(list);
	}
	return retList;
}

The elements of all the lists passed to the method are added to a result list which is ultimately returned to the caller. Semantically, there is absolutely nothing wrong with this method. It produces the expected result.

Consider, however, the performance of this method if the input lists contain a million elements altogether. The result list would need to contain one million elements. If you are lucky, the elements are of a reference type, requiring only storage for one million references. If the elements are of a value type, however, one million objects, including their data, would have to be copied to the result list. If each object holds a kilobyte of data, the result list will need to allocate an entire gigabyte of memory before the iteration can even begin!

Surely, a smarter approach would be beneficial. My proposal is to think in sequences. Since the resulting concatenation only needs to be iterated over, there is no particular need for the result of the method to be a list in the first place. A sequence is sufficient. Further, the input elements can be regarded as a series of sequences as well, yielding a method like the following:

public static IEnumerable<T> ConcatenateEnumerables<T>(params IEnumerable<T>[] enumerables)
{
	foreach (var enumerable in enumerables)
	{
		foreach (var item in enumerable)
		{
			yield return item;
		}
	}
}

Notice the yield return statement. As previously discussed, this facilitates lazy execution. If, for some reason, the caller decides it only needs the first ten elements, then the yield return statement is only executed ten times.
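
As a minimal sketch of that point (assuming a using directive for System.Linq), the following caller only ever causes ten yield return executions, regardless of how many elements the input lists contain.

static void Main(string[] args)
{
    // Two large example input lists.
    List<int> list1 = Enumerable.Range(1, 500000).ToList();
    List<int> list2 = Enumerable.Range(1, 500000).ToList();

    // Only the first ten concatenated elements are ever produced.
    foreach (int number in ConcatenateEnumerables(list1, list2).Take(10))
    {
        Console.WriteLine(number);
    }
}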

Consider again the scenario of a million kilobyte-sized value type elements being concatenated. This time, the caller is served the sequence one element at a time, reusing a single kilobyte-sized slot of memory throughout the entire iteration instead of allocating a gigabyte-sized result list up front.

Evidently, thinking in sequences can lead to remarkable performance gains. Of course, you need to consider the requirements and determine whether a sequence will meet your needs. If, for any reason, the resulting series has to be manipulated as a list, consider still using sequences and relying on LINQ’s Enumerable.ToList<TSource> extension method to aggregate the elements into a list when needed.

Thinking in Sequences

When dealing with series of objects, it is easy to think of them as lists, or arrays. Those are, after all, the first collection types most people become acquainted with when learning to program. And by all means, lists are very versatile and easy to employ in most situations. However, they often provide more functionality than you strictly need. Rather than considering collections of objects as lists, I find it helpful to think of them as sequences.

Since version 1.0, the .NET Framework has provided the IEnumerable interface for iterating over a sequence of objects.  These days, its generic cousin, IEnumerable<T>, introduced in version 2.0, is often preferred. Initially, those interfaces existed mainly to support the foreach statement. However, with the advent of LINQ to Objects, IEnumerable got a morale boost. Suddenly, anyone could easily create advanced queries against any sequence of objects.

In my opinion, the main advantage of IEnumerable is its stream-based nature. As with streams, the first items of an IEnumerable sequence can be made available for processing without every single item having to be collected first. A sequence does not even have to be finite. Consider, for instance, the following code, returning an infinite sequence of all positive integers.

public static IEnumerable<int> EnumerateAllPositiveIntegers()
{
	int integer = 0;
	while(true)
	{
		integer++;
		yield return integer;
	}
}

Note the yield return statement. This is a very handy shortcut which C# provides for creating IEnumerable sequences: simply yield return every item you wish to include in the sequence. Those who are unfamiliar with such iterator blocks in C# may suspect that the code above leads to an infinite loop. However, every yield return statement causes the enumerator’s MoveNext method to return, giving control back to the client. Of course, if the client iterates the sequence using a regular foreach loop expecting the sequence to terminate, an infinite loop will in fact be the result.

The following code shows how the integer-generating method above can be used in a LINQ query, without risking an infinite loop.

IEnumerable<int> firstFiftyOddPositiveIntegers = EnumerateAllPositiveIntegers()
	.Where(num => num % 2 == 1)
	.Take(50);

Rather than an operation generating an infinite number of integers, consider one which needs to complete some time-consuming task for every value it returns. In such cases, exposing the results through an IEnumerable sequence using yield return allows the client to process each value without having to wait for every value to be produced, collected and returned.
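
A small, hypothetical sketch of such a scenario is shown below; the Thread.Sleep call merely simulates the expensive work, and the method name is made up.

public static IEnumerable<int> ComputeExpensiveResults(IEnumerable<int> inputs)
{
	foreach (int input in inputs)
	{
		// Simulate a time-consuming task for each value.
		System.Threading.Thread.Sleep(1000);
		yield return input * input;
	}
}

A caller enumerating this sequence can act on the first result after roughly one second, instead of waiting for the entire input to be processed first.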

So, when are sequences preferable over lists and other collections? Obviously, if a series of items is potentially infinite, a sequence has to be used; it cannot be represented as a list or collection. Generally, sequences are intended for items to simply be iterated over, while lists are collections you can add items to and remove items from. My rule of thumb is to expose IEnumerable sequences whenever feasible, relying on LINQ to convert the sequence into lists or arrays, should it be necessary.
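
For example, building on the integer generator shown earlier (and assuming a using directive for System.Linq), declaring the query below performs no work at all; only the call to ToList forces the fifty elements to be produced and stored.

// Declaring the query does not enumerate anything yet.
IEnumerable<int> odds = EnumerateAllPositiveIntegers()
	.Where(num => num % 2 == 1)
	.Take(50);

// Only here are the fifty elements actually produced and stored in a list.
List<int> oddsList = odds.ToList();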

When you think in sequences, LINQ makes even more sense than before, and it also becomes easier to spot those situations where a sequence is not sufficient for the task.