The Basics Of Binary
This article introduces the basics of how binary data is manipulated. It is intended for readers who are fairly new to numerical systems. The concepts of storing integers, real numbers, and characters are discussed.
I first will go through the general theory of numerical systems (decimal, binary, and hexadecimal) and then proceed to how these are stored in computers. I would guess that everyone who has heard of a computer or the Internet has at some point come across the concepts of binary and hexadecimal. But, for the sake of completeness, allow me to re-introduce you to them.
Now, consider the value 613. The value of the total number represented is:

613₁₀ = 6 * 10² + 1 * 10¹ + 3 * 10⁰ = 600 + 10 + 3
Okay... that value seemed a bit arbitrary... but stay with me.
Note the subscript, 10, to indicate which number system is being used. It's ten because there are ten elements in the number system.
Binary is much the same except that it only has two elements in its counting system. So, you start at 0, then 1, but now you have to move to the next digit, so you get 10, then 11, and again to the next digit, 100, and so on. Just as with the decimal system, you can represent any integer using this technique... and just like decimal, you can get a binary number's decimal value by converting it as follows:

1101₂ = 1 * 2³ + 1 * 2² + 0 * 2¹ + 1 * 2⁰ = 8 + 4 + 0 + 1 = 13₁₀
The only difference here is that you use 2 instead of 10 because there are two elements in the number system.
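To make the positional rule concrete, here is a minimal Python sketch (the function name `to_decimal` is my own, not a standard routine) that accumulates a digit string's value from left to right:

```python
def to_decimal(digits, base):
    """Interpret a string of digit characters (0-9 only) in the given base,
    accumulating value = value * base + digit from left to right."""
    value = 0
    for ch in digits:
        value = value * base + int(ch)
    return value

print(to_decimal("1101", 2))   # 13
print(to_decimal("613", 10))   # 613
```

The running accumulation is equivalent to summing each digit times base-to-the-power-of-position, just without computing the powers explicitly.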
The system used for hexadecimal is just like binary and decimal. Hexadecimal has 16 elements, namely 0, 1, 2, 3, ... 8, 9, A, B, C, D, E, F. The values of A, B, ... F are 10₁₀, 11₁₀, ... 15₁₀. It follows the same rules of converting:

2A₁₆ = 2 * 16¹ + 10 * 16⁰ = 32 + 10 = 42₁₀
In binary, you call each digit a bit, and 8 of these bits are called a byte. Why 8? Who knows! But that's the way it happened and now we're stuck with it. Now, the nice thing about hexadecimal (and why you use it so much in computing) is that you can represent four bits using one hexadecimal digit, and eight bits using two hexadecimal digits.
Consider the binary number 11010011₂. Wow, eight digits are pretty hard to read. But, if you convert it into hexadecimal, it becomes only two digits. Taking the first four bits (1101₂), you can convert them to the decimal equivalent 13₁₀, which equals the hexadecimal digit D₁₆. Then the last four bits: 0011₂ = 3₁₀ = 3₁₆. So, the whole byte is represented in hexadecimal by D3₁₆.
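A quick Python illustration of the nibble-to-hex-digit idea (the variable names here are just for illustration; `format` with the "X" presentation type is Python's built-in hex formatting):

```python
# Each 4-bit half of a byte (a "nibble") maps to exactly one hex digit.
byte = 0b11010011                 # 211 in decimal
high = byte >> 4                  # top nibble:    0b1101 = 13
low = byte & 0b1111               # bottom nibble: 0b0011 = 3
print(format(high, "X") + format(low, "X"))  # D3
print(format(byte, "02X"))                   # D3, in one step
```

Shifting right by four and masking with 0b1111 are the two-hex-digits-per-byte relationship expressed as bit operations.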
That pretty much concludes representing integers using binary and hexadecimal (except for negative numbers, but I'll get there). Now, I'll move on to something slightly more complicated.
The decimal number 0.5 is a good enough place to start. How would you represent this in binary or hexadecimal? It still follows the same rules as before. When storing integers in decimal, the last digit carries a value of 10⁰, the next 10¹, then 10², and so on. So, the "power of" increases as you look to the left, or, another way to look at it, the "power of" decreases as you look to the right.
If you were to look beyond the last digit (in other words, right of the point), you would find that the sequence continues. The digit just to the right of the point carries a value of 10⁻¹ (i.e., 0.1), then 10⁻², and so on.
The same rule applies in binary except that you work with a base of 2, not 10. So, the binary digit to the right of the point would carry a value of 2⁻¹, then 2⁻². Now, 2⁻¹ is 0.5 and 2⁻² is 0.25, so when converting a binary number to decimal, you would use these values. Consider the binary number 101.11₂:

101.11₂ = 1 * 2² + 0 * 2¹ + 1 * 2⁰ + 1 * 2⁻¹ + 1 * 2⁻² = 4 + 0 + 1 + 0.5 + 0.25 = 5.75₁₀
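As a sketch, the same conversion in Python (`binary_fraction_to_decimal` is a hypothetical helper name, not a standard function):

```python
def binary_fraction_to_decimal(s):
    """Convert a binary string like "101.11" to its decimal value,
    weighting bits right of the point by 2**-1, 2**-2, ..."""
    int_part, _, frac_part = s.partition(".")
    value = int(int_part, 2)              # built-in handles the integer part
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2 ** -i       # 0.5, 0.25, 0.125, ...
    return value

print(binary_fraction_to_decimal("101.11"))  # 5.75
```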
Easy? You get the hang of it as you go along.
And, obviously, the same is true for hexadecimal; for example, 4F.E₁₆:

4F.E₁₆ = 4 * 16¹ + 15 * 16⁰ + 14 * 16⁻¹ = 64 + 15 + 0.875 = 79.875₁₀
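The same arithmetic works for any base. Here is a sketch in Python (`frac_to_decimal` is my own name for it) that handles digits on both sides of the point for bases up to 16:

```python
def frac_to_decimal(s, base):
    """Convert a string like "4F.E" in the given base (up to 16) to decimal,
    using base**-i weights for digits right of the point."""
    digits = "0123456789ABCDEF"
    int_part, _, frac_part = s.upper().partition(".")
    value = 0
    for ch in int_part:
        value = value * base + digits.index(ch)
    for i, ch in enumerate(frac_part, start=1):
        value += digits.index(ch) * base ** -i
    return value

print(frac_to_decimal("4F.E", 16))  # 79.875
```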
Floating Point Representations
In decimal, you often work with VERY large numbers or VERY small numbers (just ask astrophysicists and nuclear physicists). It soon becomes impractical to go writing a whole stack of zeros; for example, the speed of light, c, which is 300000000 m/s (but they get MUCH larger than that).
So, what some bright lad decided to do was put down the most significant numbers and then just indicate how many others there are. So, in the case of the speed of light, c, 3 * 10⁸. This is shortened even more by writing E instead of "* 10", giving you 3E8, which is so much easier to write.
The two parts of this number (3 and 8) are called the mantissa and the exponent, respectively.
Naturally, you can do the same thing in binary except that you use 2 instead of 10. So, the number 1000000₂ could be written as 1₂ * 2⁶. Writing the whole thing in binary (using our "E" notation), you get 1₂E110₂.
Now, consider the decimal number 185712956274. How do you represent this in your "E" notation? Well, you put down the most significant numbers and then state how many other digits there are. Which numbers are significant depends on how accurately you want it, so you are going to approximate. Typically, one wouldn't worry too much about anything more than about three digits (which is what I'll use here). That massively complicated number now becomes 186E9.
But, it gets a bit hard to compare numbers if I choose 186E9 and you choose 1857E8. Glancing at these two representations, you can't quickly tell whether they are approximately equal or whether one is ten times larger than the other.
So, to get around this, you move the decimal point over in the mantissa so that only one digit is before the decimal point and then add the number of "shifts" to the exponent. With 186E9, the mantissa becomes 1.86 (two shifts), so the exponent becomes 11. Your number is then 1.86E11.
Looking back at your new representation, 1.86E11 means 1.86 * 10¹¹ = 186000000000.
This representation is commonly referred to as floating point. The name is pretty self explanatory; the point is not set in a specific place within the number, but "floats" around.
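The shift-and-count normalization described above can be sketched in Python like this (`normalize` is a made-up helper; it assumes a positive mantissa of 1 or more):

```python
def normalize(mantissa, exponent):
    """Shift the point left until exactly one digit remains before it,
    bumping the exponent once per shift."""
    while mantissa >= 10:
        mantissa /= 10     # one shift of the point to the left...
        exponent += 1      # ...is one more power of ten in the exponent
    return mantissa, exponent

m, e = normalize(186, 9)
print(m, e)  # approximately 1.86 and 11
```

Each division by ten moves the point one place left, so the total value mantissa * 10^exponent never changes, which is exactly why 186E9 and 1.86E11 are the same number.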
Binary can do it too. 10100000₂ would be represented as 1.01₂E0111₂.
Those are the basic concepts. Now, you can look more specifically at how these numbers are stored in a computer.