Introduction to Hexadecimal Numbers

Hexadecimal notation is often used in computer science because it is easy to convert between hex and binary notation, and hex is much easier to remember than binary. For example, which of the following do you find easier to remember?

      11010011101011000101010110011110
or
      D3AC559E
We will see below that these both represent the same number and with some practice you can learn to convert from one to the other in your head.

If you want to convert a two digit hexadecimal (i.e. base 16) number to decimal, you multiply the value of the first digit by 16 and add the value of the second digit, e.g.

 FF = 15*16 + 15 = 255
 F0 = 15*16 +  0 = 240
 C0 = 12*16 +  0 = 192
 0C =  0*16 + 12 =  12
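The arithmetic above can be checked with a short Python sketch (the helper name hex_to_dec is just for illustration):

```python
def hex_to_dec(s):
    """Convert a two digit hex string to decimal by hand."""
    digits = "0123456789ABCDEF"
    high = digits.index(s[0])  # value of the first digit
    low = digits.index(s[1])   # value of the second digit
    return high * 16 + low

print(hex_to_dec("FF"))  # 255
print(hex_to_dec("C0"))  # 192
```

Python's built-in int("FF", 16) does the same conversion for hex strings of any length.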
Hexadecimal is used because one hexadecimal digit corresponds to a four bit binary number
0   0000
1   0001
2   0010
3   0011
4   0100
5   0101
6   0110
7   0111
8   1000
9   1001
A   1010
B   1011
C   1100
D   1101
E   1110
F   1111
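The table above can be reproduced in Python; the format specifier "04b" renders a number as a zero-padded four bit binary string, and "X" renders it as an uppercase hex digit:

```python
# Print each hex digit alongside its four bit binary equivalent.
for d in range(16):
    print(format(d, "X"), format(d, "04b"))
```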
Thus, a two digit hexadecimal number corresponds to a 2x4=8 bit binary number, i.e. a one byte binary number. The wonderful property of hexadecimal numbers is that it is extremely easy to convert hexadecimal to binary and vice versa. To convert to binary, you simply replace each hexadecimal digit with its four bit binary equivalent. Thus,
  CF83AA  -->  1100 1111 1000 0011 1010 1010  =  110011111000001110101010
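This digit-by-digit replacement is a straightforward table lookup. A minimal Python sketch (hex_to_binary is an illustrative name, not a standard function):

```python
# Build the lookup table: each hex digit maps to its four bit pattern.
HEX_TO_BITS = {format(d, "X"): format(d, "04b") for d in range(16)}

def hex_to_binary(s):
    """Replace each hex digit with its four bit binary equivalent."""
    return "".join(HEX_TO_BITS[c] for c in s.upper())

print(hex_to_binary("CF83AA"))  # 110011111000001110101010
```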
Conversely, to convert a string of bits into a hexadecimal number, you chunk it into groups of four bits and look up the hexadecimal equivalent of each chunk. Thus,
      11010011101011000101010110011110

->    11010011101011000101010110011110
      ****    ****    ****    ****    

->    1101 0011 1010 1100 0101 0101 1001 1110
      ****      ****      ****      ****    

->    D    3    A    C    5    5    9    E

->    D3AC559E
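The chunk-and-lookup procedure above can be sketched in Python with the reverse lookup table (binary_to_hex is an illustrative name; for simplicity it assumes the bit string's length is a multiple of four):

```python
# Reverse table: each four bit pattern maps back to its hex digit.
BITS_TO_HEX = {format(d, "04b"): format(d, "X") for d in range(16)}

def binary_to_hex(bits):
    """Chunk the bit string into groups of four and look each up."""
    assert len(bits) % 4 == 0  # assumed: length is a multiple of four
    return "".join(BITS_TO_HEX[bits[i:i + 4]] for i in range(0, len(bits), 4))

print(binary_to_hex("11010011101011000101010110011110"))  # D3AC559E
```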
Observe that it is fairly easy to remember an 8 digit hexadecimal number, but it is nearly impossible to remember the equivalent 32 bit binary number.