.NET Back to Basics: The int Class

The humble 'int'. We take it for granted every day we write software in .NET, without stopping to think about what's going on behind the scenes. To many developers it's just a simple type, but under the covers 'int' is actually an alias for a fully fledged type, 'Int32', defined in the 'System' namespace.

Granted, it's not as large as 'Math' or 'Collections' or even 'String', but it has a few unique tricks up its sleeve that can prove interesting.

For a start, the integer types come in several flavours:

  • Int16
  • Int32
  • Int64
  • UInt16
  • UInt32
  • UInt64

In most cases, you'll simply use 'int' when writing software, but when you need to guarantee that your value occupies a specific number of bits, and is definitely signed or unsigned, you should use the explicit type name.

The 16, 32, and 64 specify the size of the number in bits, and the 'U' prefix denotes the unsigned variety.
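In C#, each of these type names also has a familiar keyword alias; a quick check confirms that the aliases and the System types are one and the same:

```csharp
using System;

class Aliases
{
   static void Main()
   {
      // Each C# keyword is simply an alias for the corresponding
      // System type; the pairs below are interchangeable.
      Console.WriteLine(typeof(short)  == typeof(Int16));   // True
      Console.WriteLine(typeof(ushort) == typeof(UInt16));  // True
      Console.WriteLine(typeof(int)    == typeof(Int32));   // True
      Console.WriteLine(typeof(uint)   == typeof(UInt32));  // True
      Console.WriteLine(typeof(long)   == typeof(Int64));   // True
      Console.WriteLine(typeof(ulong)  == typeof(UInt64));  // True
   }
}
```

This is why you can call methods such as 'int.MaxValue' directly on the keyword; you're really calling them on 'System.Int32'.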

Let's try an example. Create a simple .NET console mode program in Visual Studio and make sure Program.cs contains the following code:

using System;

namespace IntClassExplorer
{
   class Program
   {
      static void Main()
      {
         UInt16 umax16 = UInt16.MaxValue;
         UInt16 umin16 = UInt16.MinValue;
         Int16 max16 = Int16.MaxValue;
         Int16 min16 = Int16.MinValue;

         UInt32 umax32 = UInt32.MaxValue;
         UInt32 umin32 = UInt32.MinValue;
         Int32 max32 = Int32.MaxValue;
         Int32 min32 = Int32.MinValue;

         UInt64 umax64 = UInt64.MaxValue;
         UInt64 umin64 = UInt64.MinValue;
         Int64 max64 = Int64.MaxValue;
         Int64 min64 = Int64.MinValue;

         Console.WriteLine("Maximum Value for Unsigned Int16: {0}",
            umax16);
         Console.WriteLine("Minimum Value for Unsigned Int16: {0}",
            umin16);
         Console.WriteLine("Maximum Value for Signed Int16: {0}",
            max16);
         Console.WriteLine("Minimum Value for Signed Int16: {0}",
            min16);

         Console.WriteLine("Maximum Value for Unsigned Int32: {0}",
            umax32);
         Console.WriteLine("Minimum Value for Unsigned Int32: {0}",
            umin32);
         Console.WriteLine("Maximum Value for Signed Int32: {0}",
            max32);
         Console.WriteLine("Minimum Value for Signed Int32: {0}",
            min32);

         Console.WriteLine("Maximum Value for Unsigned Int64: {0}",
            umax64);
         Console.WriteLine("Minimum Value for Unsigned Int64: {0}",
            umin64);
         Console.WriteLine("Maximum Value for Signed Int64: {0}",
            max64);
         Console.WriteLine("Minimum Value for Signed Int64: {0}",
            min64);

      }
   }
}

If you run this, you should see the following output:

Figure 1: Output from our first code listing

You can see straight away that the unsigned versions all have a minimum value of 0, which means they can't be used to represent negative values. The signed versions, however, can, but at the cost of their upper positive limit being halved.

The number of distinct values an integer can hold is 2 raised to the power of its bit count, so for a 16 bit number this is

2^16 = 65536

possible values, running from 0 through 65535 inclusive for the unsigned variety. The maximum value itself is therefore (2^16) - 1 = 65535.

This number comes from the binary columns across the number, which are all powers of 2. For example, a 4 bit number with four columns would be

8 4 2 1 (Work from the right to the left.)

Adding each of these up gives us

8+4+2+1 = 15

Which means 15 is the maximum value a 4 bit unsigned integer can take. 16, 32, and 64 bit integers are no different, except that instead of four columns going from right to left, there are as many columns as there are bits in the number.
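The column sums above can be sketched in a few lines of C# (just an illustration of the arithmetic, not production code):

```csharp
using System;

class BitColumns
{
   static void Main()
   {
      // Sum the power-of-two columns of an n-bit number, right to left.
      foreach (int bits in new[] { 4, 16, 32 })
      {
         ulong sum = 0;
         for (int column = 0; column < bits; column++)
            sum += 1UL << column;   // 1, 2, 4, 8, ...

         // The sum of the columns is the same as (2^bits) - 1.
         Console.WriteLine("{0,2} bits -> max {1}", bits, sum);
      }
   }
}
```

Running this prints 15 for 4 bits, 65535 for 16 bits, and 4294967295 for 32 bits, matching the unsigned MaxValue constants from the first listing.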

So, What Makes Signed Numbers Different?

I'm glad you asked :-)

With a signed number, the leftmost bit is not used as part of the value; it's used as a flag to indicate whether our integer is positive or negative, and it's because of this that our number range is effectively halved.

Because each 'bit' column is a single power of 2, giving up one of the columns in your 16 bit number, for example, leaves you with 15 value bits, effectively dividing the maximum magnitude by 2.

If you want to learn more about how this works, the two's complement article on Wikipedia is a great place to start.
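You can see the halving directly by printing the constants; note that the negative bound is one larger in magnitude, which falls out of the two's complement encoding:

```csharp
using System;

class Halving
{
   static void Main()
   {
      // 16 bits unsigned: every bit carries value.
      Console.WriteLine(UInt16.MaxValue);  // 65535  = 2^16 - 1

      // 16 bits signed: one bit is spent on the sign.
      Console.WriteLine(Int16.MaxValue);   // 32767  = 2^15 - 1
      Console.WriteLine(Int16.MinValue);   // -32768 = -(2^15)
   }
}
```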

The important point to take away from the previous example is the MinValue and MaxValue constants that each of the integer types provides, which let you easily find the minimum and maximum values each can hold.

The integer types also have a number of useful methods that allow you to convert strings to integers, integers to strings, and perform tests on them.

The first two of these are

  • CompareTo
  • Equals

CompareTo returns an integer value indicating whether the value it's called on is greater than, less than, or equal to the passed-in operand. Change your program.cs file to contain the following code and run it.

using System;

namespace IntClassExplorer
{
   class Program
   {
      static void Main()
      {
         Int32 myInt = 5;

         Int32 lessInt = 1;
         Int32 greaterInt = 8;
         Int32 sameInt = 5;

         Console.WriteLine("{1} compared to {0} gives a value of {2}",
            lessInt, myInt, myInt.CompareTo(lessInt));
         Console.WriteLine("{1} compared to {0} gives a value of {2}",
            greaterInt, myInt, myInt.CompareTo(greaterInt));
         Console.WriteLine("{1} compared to {0} gives a value of {2}",
            sameInt, myInt, myInt.CompareTo(sameInt));

      }
   }
}

The results should look something like the output in Figure 2:

Figure 2: Output from code Listing 2

As you can see from Figure 2, if the operand is less than the integer being compared to, a value of 1 is returned; if the operand is greater, -1 is returned; and 0 is returned if and only if the operand and the source integer are equal. (Strictly speaking, the documentation only guarantees the sign of the result, so it's safest to test for less than, greater than, or equal to zero, rather than for exactly 1 or -1.)
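This sign convention is more than a curiosity; it's what lets integers take part in sorting. A quick sketch (the array contents here are made up purely for illustration):

```csharp
using System;

class CompareToSort
{
   static void Main()
   {
      int[] values = { 8, 1, 5 };

      // Array.Sort orders the elements using Int32's IComparable
      // implementation, which is exactly the CompareTo method above.
      Array.Sort(values);

      Console.WriteLine(string.Join(", ", values));  // 1, 5, 8
   }
}
```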

If you now change the three 'Console.WriteLine' statements in code snippet 2 as follows:

         Console.WriteLine("{1} equal to {0} gives a value of {2}",
            lessInt, myInt, myInt.Equals(lessInt));
         Console.WriteLine("{1} equal to {0} gives a value of {2}",
            greaterInt, myInt, myInt.Equals(greaterInt));
         Console.WriteLine("{1} equal to {0} gives a value of {2}",
            sameInt, myInt, myInt.Equals(sameInt));

Then re-run the program, and you'll see the difference between 'Equals' and 'CompareTo':

Figure 3: The output from code snippet 2, with the write line statements changed

As you can see, 'Equals' simply returns a Boolean value indicating if the value is equal or not, whereas 'CompareTo' returns a value that also tells you the direction of the inequality.
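One gotcha worth knowing about (a small sketch of its own, not part of the listing above): when given a boxed operand of a different type, 'Equals' compares types as well as values, whereas the == operator will happily convert first:

```csharp
using System;

class EqualsGotcha
{
   static void Main()
   {
      int myInt = 5;
      long myLong = 5L;

      // == converts the int to a long before comparing, so this is true.
      Console.WriteLine(myInt == myLong);               // True

      // Equals(object) requires the boxed types to match, so this is false.
      Console.WriteLine(myInt.Equals((object)myLong));  // False
   }
}
```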

Moving on from comparisons, next up are the "Parsing" methods.

Parsing is the act of taking a string containing a possible integer value, and attempting to turn it into an integer. For example:

"123" becomes 123

But

"ABC" would throw an error

All of the parse methods are static members of each of the Intxx classes, which means you don't need to instantiate an object using new to use them.

There are two main versions of the Parsing methods:

  • Parse
  • TryParse

Parse has four overloads, as follows:

  • Parse(string)
  • Parse(string, IFormatProvider)
  • Parse(string, NumberStyles)
  • Parse(string, NumberStyles, IFormatProvider)

The first simply takes the string you wish to parse and attempts to convert it to an integer. If the string is un-parseable, Parse will throw a 'FormatException', which can be caught by using the normal try/catch mechanism.

The remaining three take Format providers (a .NET class that represents culture-specific formatting information to help parse non-current culture formats) and/or a member of the 'System.Globalization.NumberStyles' enumeration that allows you to let parse know about things like the presence of hyphens, hexadecimal number formatting, and other similar options.

In most cases, the first version is all you'll want to use. If, however, you're dealing with values provided by a user whose culture differs from the currently selected one, you'll often want to be very specific about the culture and formats you allow.
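As a quick sketch of the culture-aware overloads (AllowThousands and HexNumber are real members of the 'NumberStyles' enumeration):

```csharp
using System;
using System.Globalization;

class ParseStyles
{
   static void Main()
   {
      // Permit thousands separators, using the invariant culture's comma.
      int grouped = int.Parse("1,234",
         NumberStyles.AllowThousands, CultureInfo.InvariantCulture);

      // Treat the input as hexadecimal digits.
      int hex = int.Parse("FF", NumberStyles.HexNumber);

      Console.WriteLine(grouped);  // 1234
      Console.WriteLine(hex);      // 255
   }
}
```

Without the NumberStyles value, both of these strings would throw a 'FormatException'.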

Change your program.cs file to look as follows:

using System;

namespace IntClassExplorer
{
   class Program
   {
      static void Main()
      {

         try
         {
            Int32 test1 = Int32.Parse("12345678");
            Console.WriteLine("Test 1 converted successfully to {0}",
               test1);
         }
         catch(Exception ex)
         {
            Console.WriteLine("Test 1 failed with exception {0}", ex);
         }
         Console.WriteLine();

         try
         {
            Int32 test2 = Int32.Parse("ABCDEF");
            Console.WriteLine("Test 2 converted successfully to {0}",
               test2);
         }
         catch (Exception ex)
         {
            Console.WriteLine("Test 2 failed with exception {0}", ex);
         }
         Console.WriteLine();

         try
         {
            Int16 test3 = Int16.Parse(Int32.MaxValue.ToString());
            Console.WriteLine("Test 3 converted successfully to {0}",
               test3);
         }
         catch (Exception ex)
         {
            Console.WriteLine("Test 3 failed with exception {0}", ex);
         }
         Console.WriteLine();

      }
   }
}

When you run this program, you should get something similar to Figure 4:

Figure 4: Output from testing the int Parse method

In the last bit of code, test 1 works fine because the string is a parseable integer. Test 2, however, does not, because it's not possible to derive a number from it.

Test 3 fails because, although the value is a genuine integer, the value produced is far too large for the destination variable and so causes an overflow exception to be thrown.

'TryParse', like 'Parse', has overloads, but you'll notice that, unlike 'Parse', it returns its integer result in an 'out' parameter rather than as the result of the method call. The main reason for this is that 'TryParse' returns a Boolean stating whether the conversion was successful, and it does NOT throw any of the exceptions that 'Parse' does.

'TryParse' is designed for use in code where you intend to detect and act on invalid-format and overflow errors yourself, or where throwing an exception might prove to be more trouble than it's worth; for example, in an embedded LINQ statement.
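For instance, here's a sketch of 'TryParse' used inside a LINQ query to keep only the values that convert cleanly (the input strings are invented for illustration):

```csharp
using System;
using System.Linq;

class TryParseLinq
{
   static void Main()
   {
      string[] inputs = { "10", "oops", "20", "" };

      // Attempt to parse every string, then keep only the successes.
      // No exceptions are thrown for the bad entries.
      int[] numbers = inputs
         .Select(s => (ok: int.TryParse(s, out var n), value: n))
         .Where(t => t.ok)
         .Select(t => t.value)
         .ToArray();

      Console.WriteLine(string.Join(", ", numbers));  // 10, 20
   }
}
```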

Change the code in program.cs as follows:

using System;

namespace IntClassExplorer
{
   class Program
   {
      static void Main()
      {

         Int32 test1;
         bool test1result = Int32.TryParse("12345678", out test1);
         Console.WriteLine("Test 1 converted to {0} with result {1}",
            test1, test1result);

         Int32 test2;
         bool test2result = Int32.TryParse("ABCDEF", out test2);
         Console.WriteLine("Test 2 converted to {0} with result {1}",
            test2, test2result);

         Int16 test3;
         bool test3result = Int16.TryParse(Int32.MaxValue.ToString(),
            out test3);
         Console.WriteLine("Test 3 converted to {0} with result {1}",
            test3, test3result);

      }
   }
}

Then run it. The result should look a bit like the following:

Figure 5: Output from testing tryparse

If the number cannot be converted, the output result remains at 0 (the default value) and the output from the method call is false, allowing you to make the decision on how to handle the error yourself. The drawback is that you don't know the actual error that occurred, only that the conversion failed. This means that, if you want to handle the error differently depending on the issue, you'll still have to use 'Parse'.

The final method to cover is "ToString()" which, put simply, converts the integer value represented by the current Intxx object into its string representation.

The overloads of the method allow an IFormatProvider to be given, so the integer can be formatted in a culture-specific way, or a format string that provides instructions for building the output in a certain way, such as padding with leading zeroes.
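A few common cases, as a quick sketch ('X' for hexadecimal, 'D' for zero-padded decimal, and 'N' for grouped digits are all standard numeric format strings):

```csharp
using System;
using System.Globalization;

class ToStringDemo
{
   static void Main()
   {
      int value = 255;

      Console.WriteLine(value.ToString("X"));   // FF    (hexadecimal)
      Console.WriteLine(value.ToString("D5"));  // 00255 (pad to 5 digits)

      // Culture-specific grouping via an IFormatProvider.
      Console.WriteLine(1234567.ToString("N0", CultureInfo.InvariantCulture));
      // 1,234,567
   }
}
```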

Intxx values can also be used with the normal +, -, /, and * operators to perform direct mathematical operations, and can take part in bitwise operations that perform AND (&), OR (|), or XOR (^) on the variables in question.
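A quick sketch of the bitwise operators in action:

```csharp
using System;

class BitwiseDemo
{
   static void Main()
   {
      int a = 12;  // binary 1100
      int b = 10;  // binary 1010

      Console.WriteLine(a & b);  // 8  (1000) - bits set in both
      Console.WriteLine(a | b);  // 14 (1110) - bits set in either
      Console.WriteLine(a ^ b);  // 6  (0110) - bits set in exactly one
   }
}
```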

Next month, we'll revisit the 'string' class in the same way as we've done here, and take a closer look at the various methods provided under the lid there.

Until then, don't get your integers in a knot and make sure you parse them correctly.

Shawty



About the Author

Peter Shaw

As an early adopter of IT back in the late 1970s to early 1980s, I started out with a humble little 1KB Sinclair ZX81 home computer. Within a very short space of time, this small 1KB machine became a 16KB Tandy TRS-80, followed by an Acorn Electron and, eventually, after going through many other different machines, a 4MB, ARM-powered Acorn A5000. After leaving school and getting involved with DOS-based PCs, I went on to train in many different disciplines in the computer networking and communications industries. After returning to university in the mid-1990s and gaining a Bachelor of Science in Computing for Industry, I now run my own consulting business in the northeast of England called Digital Solutions Computer Software, Ltd. I advise clients at both a hardware and software level in many different IT disciplines, covering a wide range of domain-specific knowledge—from mobile communications and networks right through to geographic information systems and banking and finance.
