Consider the following simple bit of code:
        static void Main()
        {
            double currentTemperature = 85; // from external sensor

            if (IsTooHot(currentTemperature))
            {
                ShutDown("Too hot!");
            }
            else
            {
                DoImportantWork();
            }
        }

        public static bool IsTooHot(double temperature)
        {
            const double MaxTemperature = 90.0;

            return temperature > MaxTemperature;
        }

At first glance it looks fairly innocuous, but there's a potential disaster lurking in there. Can you see it? What if we change the names of the temperature variables to currentTemperatureInCelcius and maximumTemperatureInFahrenheit? Uh oh. Think this is an unlikely scenario? A similar mixing of measurements caused the loss of NASA's Mars Climate Orbiter. An embarrassing and expensive error.

The values in our program aren't just numbers; they are measurements of something, and that something is important too. Using primitives to represent them loses the essence of what we are dealing with. Recently I've been reading about F#, which has some very interesting features, one of which is the ability to give values units of measure. That feature would prevent exactly the kind of mishap shown here. While we can't do quite the same in C#, we can use the type system to increase the safety of our code. First up, let's make a couple of types to represent our measurements:

    public struct Celcius
    {
        public double Value;

        public Celcius(double value)
        {
            Value = value;
        }
    }

    public struct Fahrenheit
    {
        public double Value;

        public Fahrenheit(double value)
        {
            Value = value;
        }

        public static bool operator >(Fahrenheit val1, Fahrenheit val2)
        {
            return val1.Value > val2.Value;
        }

        public static bool operator <(Fahrenheit val1, Fahrenheit val2)
        {
            return val1.Value < val2.Value;
        }
    }

Notice that we are using a struct here, not a class; this matches the semantics of the original usage better than a class would, which will become clearer later. For brevity's sake I've only defined the greater-than and less-than operators for Fahrenheit, as that's all this sample needs.
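To make the value-type point concrete, here's a quick sketch (the `Demo` wrapper is mine, not part of the original program) showing that a Celcius value is copied on assignment, just like the raw double it replaces:

```csharp
using System;

public struct Celcius
{
    public double Value;

    public Celcius(double value)
    {
        Value = value;
    }
}

public static class Demo
{
    public static void Main()
    {
        Celcius a = new Celcius(85);
        Celcius b = a;   // structs are copied by value, just like a plain double
        b.Value = 100;   // mutating the copy...

        Console.WriteLine(a.Value); // ...leaves the original untouched: prints 85
    }
}
```

If Celcius were a class instead, `b` would be a second reference to the same object, and the change would show through `a` as well.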

Now let's update our original program to use the new types:

        static void Main()
        {
            Celcius currentTemperature = new Celcius(85);

            if (IsTooHot(currentTemperature))
            {
                ShutDown("Too hot!");
            }
            else
            {
                DoImportantWork();
            }
            Console.ReadLine();
        }

        public static bool IsTooHot(Fahrenheit temperature)
        {
            Fahrenheit MaxTemperature = new Fahrenheit(90.0);

            return temperature > MaxTemperature;
        }

And it won't compile: we've now got type safety ensuring that we can't mix up our measurements. We could write some kind of static converter and call it every time we need to convert values, but that would make things look quite untidy. Another option is to make use of C#'s implicit conversion operators. Add this to the Celcius type:

        public static implicit operator Fahrenheit(Celcius val)
        {
            return new Fahrenheit(val.Value * 1.8 + 32);
        }

Now we can use Celcius values wherever Fahrenheit ones are expected and the value will be converted for us, maintaining the safety of our operations while reducing the friction of manual conversions at every call site. I'd be wary of doing this with classes, where reference-type semantics create an expectation that the object remains the same instance across these kinds of casts; with value types there is no such expectation, so it shouldn't cause any problems.
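Putting it all together, here's a minimal end-to-end sketch (the types are repeated from above so the snippet stands alone, and the `Demo` wrapper is mine). Note that once the units are converted correctly, a reading of 85 °C really is over the 90 °F limit:

```csharp
using System;

public struct Celcius
{
    public double Value;

    public Celcius(double value)
    {
        Value = value;
    }

    // implicit conversion: Celcius -> Fahrenheit
    public static implicit operator Fahrenheit(Celcius val)
    {
        return new Fahrenheit(val.Value * 1.8 + 32);
    }
}

public struct Fahrenheit
{
    public double Value;

    public Fahrenheit(double value)
    {
        Value = value;
    }

    // C# requires > and < to be declared as a pair
    public static bool operator >(Fahrenheit val1, Fahrenheit val2)
    {
        return val1.Value > val2.Value;
    }

    public static bool operator <(Fahrenheit val1, Fahrenheit val2)
    {
        return val1.Value < val2.Value;
    }
}

public static class Demo
{
    public static void Main()
    {
        Fahrenheit max = new Fahrenheit(90.0);
        Fahrenheit current = new Celcius(85); // converted implicitly: 85 C is 185 F

        Console.WriteLine(current > max); // prints True -- 85 C is indeed too hot
    }
}
```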