Binary to Decimal to Hexadecimal: Number Mysteries Revealed

I am a published author and I love writing! My first novel is called “Phantammeron Book One”. And I’ve been a visual artist and painter. So I am very much a creative right-brain person.

But I am also one of those strange people with a very strong “left brain”, and I have enjoyed being a very logical person who has always liked math, programming, and computers. I work in software and have been programming in several languages for over 15 years. So mathematics, code, data, and numbers are very much an interest of mine. My brain, I guess you might say, is one of those weird things… desiring and enjoying both the messy, free creativity of the fine arts and the coldly rigid and structured nature of architecting software and working with computers.

And so I want to digress and share something that has been a mystery for many people entering the Digital Age. There has been a lack of a simple explanation of the concepts of Binary, Decimal, and Hexadecimal numbers and how they relate. I am hoping this article helps those who may still be stuck understanding the basic numerical structures that underlie almost all the technology we use today.

One of the mysteries of using numbers in the Digital Age is how the Binary numbers used by computers get translated into the “normal numbers” we use (Decimal), and how those two systems relate to the Hexadecimal numbers used in software, cryptography, images, and web coding.

Without getting into long explanations of complex mathematical systems and definitions, I will get right to the heart of the matter in how these three numerical systems can be converted into each other. If you are interested in the deeper meaning and math behind Binary, Decimal and Hexadecimal number systems then feel free to search the Internet for more detailed information.


Here is how Binary, Decimal and Hexadecimal integrate. Traditionally “binary” has become the term for computer code, the assembly level bits used by nearly all computers. Binary is simply a long series of on and off “dual” values (1’s and 0’s) that get run through computer processors and stored in memory. All software is translated or converted into binary. And it’s these rows of 1’s and 0’s that make all the pretty stuff you see on your computer screen and make the digital world function.

Here is a simple example of a binary “series” of values. In this case you are seeing 8 bits in a row of 1’s and 0’s:

01111000

This series of “on” and “off” values gets translated into real numbers we use. But more on that later.

In math terms this series of dual bit values has more and more combinations as it grows. Its first value can be just two values, 1 or 0, right? Adding another bit to the left of the first now gives you 4 possible combinations. Adding a third bit gives you 8, etc. But it’s this two-value choice at each position that defines it as Binary in mathematical circles. Again, binary means a digit having two possible values.

A series of eight values of these ones and zeroes (say 00101101) is the same mathematically as saying that all the possible 1’s and 0’s in that 8-bit binary value can have 2 to the 8th power (or 256) combinations. And that is why these bits are called Base 2 or “Radix 2” in our numerical systems.

So these bits or binary values represent a Base 2 math system in terms of the combinations it can represent and how they can be translated back into other number systems, which by the way is called Decimal or Base 10. More on that later.

As mentioned above there are essentially 256 combinations of 1’s and 0’s in a series of 8 bits (ranging from 00000000 – 11111111). (8 bits, it turns out, is the standard grouping used in most computer memory systems.) Note that in this set of eight bits some are 1’s and some are 0’s. These bits are off when 0 or on when 1. Many people use the term “bit” to represent the on state or 1 value. So don’t get confused if you hear “bits” used to mean binary digits in general, but also the “on” state or 1 of a binary value. Just understand: bits, binary, Base 2, series of 1’s and 0’s, and the “on” or 1 state in a series of binary digits all get called the same thing. They are all bits.
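The doubling pattern above is easy to verify in a few lines of code. Here is a quick sketch in Python (chosen just for brevity; nothing here is specific to any one language):

```python
# Each added bit doubles the number of possible patterns: 2 ** n.
for n in (1, 2, 3, 8):
    print(f"{n} bits -> {2 ** n} combinations")
```

Running it shows 1 bit giving 2 combinations, 2 bits giving 4, 3 bits giving 8, and 8 bits giving the 256 combinations of a byte.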

I’ve found that information extremely confusing over the years but now I understand.

Any series of 8 bits is called a “byte”, by the way. So, 10110010 is a Byte or 8-bit value in the computer world. It represents the fundamental storage unit talked about in computer jargon. Maybe you’ve heard about Gigabytes in your computer. That’s around a billion bytes! So now you know how bits and bytes are measured and how everything in computers is described in bytes. They are just large groups of binary values.


Decimal notation is basically the “normal numbers” we use as human beings. It’s the standard number system we use for counting, for money, for units, and for translating everything we do into various counting units world-wide. And it is the counting system we were taught as children.

Like the binary system, the decimal system has a base, too: Base 10, or Radix 10. Decimal means we use a series of ten digits, 0–9, to represent numbers. At 10, we move the digit over one place and continue. Notice how that looks like Binary. In fact it acts the same, but using 10 rather than 2 symbols.

At 100, we move the number over 2 places and continue counting, etc. At 999 we move to 1000. So the number grows in digits based on groups of ten.

Binary or Base 2 itself moves over to a new digit after the first two values are represented, so after moving from 0 to 1 we add another digit to its left to represent the next value, which then becomes 10.

In Decimal we count the same way, but using 0 to 9 (one digit); we then add a digit and move to 10, continue until we get to 99 (two digits), then go to 100, etc.

So you can see how Base 2 and Base 10 work the same. One advances after two values, the other after 10. This shows how binary and decimal are related as they grow in value, as follows:

0 binary = 0 in decimal

1 binary = 1 in decimal

10 binary = 2 in decimal

11 binary = 3 in decimal

100 binary = 4 in decimal

101 binary = 5 in decimal

If we note that 1100 in binary = 12 in decimal, we can see how, as bit series in binary grow, decimal increases as well using its own increment when 9 rolls over to 10. So they are very much brothers. They just look alien to each other visually because they use different bases.
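A quick way to check the table above is with Python's built-in base converters; this small sketch prints the first few numbers in both notations:

```python
# format(n, 'b') renders n in binary; int(s, 2) parses a binary string.
for n in range(6):
    print(format(n, 'b'), 'binary =', n, 'in decimal')

print(int('1100', 2))  # 12, matching the example in the text
```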

Let’s look at this translation between the two number systems again:

A 4-bit value’s maximum, four 1’s, is 15 in decimal. This will become important in understanding Hexadecimal below.

1111 binary = 15 decimal

Now let’s go back to our original Byte or 8-bit value mentioned above at the beginning of the article:

01111000 = 1 Byte, or 8-bit number in binary notation

But binary can be represented in decimal, and translated into numbers we understand, by going back to Binary as simply a math system of combinations, with each bit multiplied by 2 raised to the power of its position, as follows:

0 in binary = same as 0 x (2 to the 0th power)

This then is translated to:

0 in binary = 0 x 1 = 0 in decimal

Let’s try a 2-bit value:

11 in binary = 1 x (2 to the 1st power) + 1 x (2 to the 0th power) = 2+1 = 3 in decimal

You can see how Base 2 bits are now translated into a series of Base 10 values depending on whether they are 0 or 1 in the binary series. Using the same math translation as above we can now translate any Binary computer values back into real Decimal numbers as follows:

1011 binary = 1 x (2 to the 3rd power) + 0 x (2 to the 2nd power) + 1 x (2 to the 1st power) + 1 x (2 to the 0th power)

Which is:

1 x (8) + 0 x (4) + 1 x (2) + 1 x (1) = 8 + 0 + 2 + 1 = 11 in decimal

So, 1011 binary is 11 in decimal

That is how binary to decimal can be translated. Easy huh?
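The positional arithmetic above can be written out as a tiny function. This is a sketch in Python rather than the C# mentioned later in the article, but the math is identical:

```python
def binary_to_decimal(bits: str) -> int:
    """Add up bit x 2**position, counting positions from the right."""
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * 2 ** position
    return total

print(binary_to_decimal('1011'))  # 8 + 0 + 2 + 1 = 11
```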

Now your typical computer binary code, as I mentioned, comes in 8-bit series. As our computer systems and on-board memory have grown, we have begun to store larger and larger numbers in these series of bits and bytes. As mentioned, a Byte is 8 bits of 1’s and 0’s. Larger systems use 32-bit and even 64-bit values, the latter being 8 Bytes or 8 sets of 8 bits. So you can see how large numbers in decimal notation are stored in larger and larger binary memory systems. Encryption keys of 256 bits, and now 1024 bits and larger, grow bigger still, holding even larger values.


These bit sizes are tied closely to programming languages – like C#.NET by Microsoft – where numbers are stored in variables assigned to memory, each matching the maximum bits the computer reserves for that “number type” in its RAM or random access memory.

For example, a 2-byte value is called a “short” in the C# language. A sample decimal value would be:

short x = 2567;

A 2-Byte binary value is the same as 16 bits. So it can hold 65,536 combinations of 1’s and 0’s in the 16 bits it stores. But shorts are “signed”, meaning they must support both negative and positive numbers. So a signed “short” as above can actually hold -32,768 to +32,767. In terms of 16 bits, that’s a minimum and maximum binary of “0000000000000000” to “1111111111111111”, with one exception: the far left binary digit represents the sign, - or +. That’s why we have the “unsigned” short or “ushort” in C#, which holds only positives and the full range of 0 to 65,535 in decimal.
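You can watch the sign interpretation in action by reading the same 16 bits two different ways. This sketch uses Python's standard struct module rather than C#; note that C#, like most hardware, actually stores negatives in two's-complement form, which is why all sixteen bits "on" reads back as -1 when signed rather than as a large negative number:

```python
import struct

# The same sixteen 1-bits, read as unsigned ('H') and signed ('h').
raw = bytes([0xFF, 0xFF])
(unsigned,) = struct.unpack('>H', raw)
(signed,) = struct.unpack('>h', raw)
print(unsigned, signed)  # 65535 -1

# The ranges the two interpretations cover:
print(-2 ** 15, 2 ** 15 - 1, 2 ** 16 - 1)  # -32768 32767 65535
```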

But let’s go back to the Byte….

A simpler 8-bit Byte (like 01111000) has 256 combinations, the values 0 through 255. That means 2 Bytes as above can store 65,536 numbers (in an unsigned variable or in memory), and 4 Bytes around 4.3 billion decimal numbers (or about 2.15 billion positive ones if signed). So you can see how powerful bits are in representing very large decimal numbers, simply by looking at all the binary combinations in the math of counting 1’s and 0’s. But it’s the Base 2 math that allows us to count and translate that to decimal.

That’s also one of the logical reasons bits are used in computers to store decimal numbers in binary. Besides the fact that computers use transistors and electronics limited to binary on/off values, it really is simpler for computers to use long series of binary values than to store ten different number symbols in memory. Numbers in computers are bits and human numbers are Decimal. But the two are easily translated, as I describe above.


Now that you know binary and decimal, let’s look at hexadecimal. Hexadecimal numbers are also a valuable computer-oriented number system. Hexadecimal takes the simplicity of binary and the compactness of Decimal and creates a new system that uses the best of both, speeding up computing and opening up greater storage capacity as numbers get gigantic.

“Hexa” means six as “dec” means ten. So Hexadecimal is basically 10 plus 6, or Base 16.

Base 16 numbers are a little odd. They work exactly like Base 2 and Base 10 but are tricky to translate. But they work on the same principles.

Let’s go back to our Byte example above:

“01111000” in binary as mentioned can be represented as a “power of 2” series to get the decimal value. Let’s do that for this full Byte series again:

01111000 in binary = 0 x (2 to 7th power) + 1 x (2 to 6th power) + 1 x (2 to 5th power) + 1 x (2 to 4th power) + 1 x (2 to 3rd power) + 0 x (2 to 2nd power) + 0 x (2 to 1st power) + 0 x (2 to 0th power) = 0 + 64 + 32 + 16 + 8 + 0 + 0 + 0 = 120 (the binary number represented in decimal)

Now there is one other way math people represent these two values:

01111000 = 120₁₀ = 120

What that means is the decimal value 120 can, like Base 2, be represented as a series of powers of 10:

120 = 1 x (10 to the 2nd power) + 2 x (10 to the 1st power) + 0 x (10 to the 0th power) = 1(x100) + 2(x10) + 0(x1) = 120

Basically, 120₁₀ is just itself, or 120. You will see why this is helpful below in terms of using Hexadecimals.

In Hexadecimal, we have the “power of 16” or Base 16. But it uses its own digit system: the digits 0-9, then the letters A-F for 10-15. Including zero (0), we have 16 possible digits, 0-F. Example:

8 = 8 in hexadecimal, but

F = 15

52FB is 4 separate Hex values, translated as 5, 2, 15, 11 in Decimal which then get translated into Binary. So as you can see 4 simple hex numbers becomes a bigger number in Decimal which becomes an even larger series of Binary numbers.
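That digit-by-digit reading of 52FB can be checked directly in a short Python sketch; int() accepts base 16 for single digits or for the whole string at once:

```python
# Each hex digit maps to a decimal value 0-15.
digit_values = [int(ch, 16) for ch in '52FB']
print(digit_values)     # [5, 2, 15, 11]

# The whole string parsed at once gives the full decimal number.
print(int('52FB', 16))  # 21243
```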

As with a decimal number, each Hexadecimal value also has its own set of Binary values. As we mentioned above, 120 must be stored as 01111000. It’s the same for hex values. But because each hex digit spans 16 values, it needs a block of 4 bits to represent each hex value, which is 0-15, shown as 0-9 or A-F.

When Hexadecimals are thus translated into binary numbers, each Hex is represented by a half of a byte or 4-bits as follows:

1001 in binary would be one Hex value. In this case it’s the value “9”, same as the decimal.

Two 4-bit series or blocks, as in 1100 1100 = CC in hex (or 12 12)

Remember, numbers over 9 get translated into the letters A-F in hex. So 1111 is “F” in Hexadecimal, or 15 in Decimal. Notice that the Decimal value 15 needs two digits whereas hex uses one. So space is saved. Base 16 can thus represent up to 16 values per digit, saving room in the hex-to-binary translation.

You can now see how much more compact hex is than decimal in representing binary. Let’s look again more closely at how binary maps to hex:

1111 1111 = FF in Hexadecimal. Here you can see two 4-bit blocks, or 1 Byte, represent two hexadecimal digits. Looking at our decimal system, that same binary Byte value would be “255”. So it uses 3 digits in Decimal.

1111 1111 in binary = 255 in decimal = just two digits, or FF, in hexadecimal

FF in hex also works like Base 2 when translating itself back into decimal as follows:

1111 1111 in binary = FF in hex = 15 x (16 to the 1st power) + 15 x (16 to the 0th power) = 240 + 15 = 255 in decimal
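The same power-of-16 expansion works for any hex string, not just FF. Here is a sketch mirroring the arithmetic above, using an explicit digit table rather than a built-in parser so each step is visible:

```python
HEX_DIGITS = '0123456789ABCDEF'

def hex_to_decimal(hex_string: str) -> int:
    """Add up digit_value x 16**position, counting from the right."""
    total = 0
    for position, ch in enumerate(reversed(hex_string.upper())):
        total += HEX_DIGITS.index(ch) * 16 ** position
    return total

print(hex_to_decimal('FF'))  # 15 x 16 + 15 = 255
```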

Now you can see how all three Base systems connect. So let’s now use our original Byte sequence to translate again:

01111000 in binary = 0 x (2 to 7th power) + 1 x (2 to 6th power) + 1 x (2 to 5th power) + 1 x (2 to 4th power) + 1 x (2 to 3rd power) + 0 x (2 to 2nd power) + 0 x (2 to 1st power) + 0 x (2 to 0th power) = 0 + 64 + 32 + 16 + 8 + 0 + 0 + 0 = 120 in decimal

What’s the Hexadecimal?

Let’s see how Hex is understood in the Binary world. Again, each Hexadecimal digit is 0-15 (0-9, with A-F representing 10-15). A 4-bit block has a max value of “1111”, or 15, right? So it makes sense that each 4-bit binary set would represent one Hex digit, as follows:

Our original binary value of “01111000” can be broken into two hex blocks, “0111” and “1000” (two blocks to help us translate to hex). In hex notation, we don’t count in bytes or larger groups of binary digits as decimal does. Each block of 4 bits is its own hex digit. So our 8-bit series, broken into 4-bit blocks, becomes the following in hex:

0111 in binary = 7 in hex

1000 in binary = 8 in hex

01111000 in binary = 78 in hexadecimal = 120 in decimal

If we want to translate “78” hex back into decimal we use the same math as above and break it apart into its two number values:

78 in hex = 7 8 = 7 x (16 to the 1st power) + 8 x (16 to the 0th power) = 112 + 8 = 120 in decimal

Cool huh?

Thus 78 in Hex is the same as writing 78₁₆ or 120₁₀, or 01111000 in binary.
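The whole nibble-splitting round trip can be sketched in a few lines; this Python version breaks the byte into its two 4-bit blocks exactly as described above:

```python
byte_string = '01111000'

# Split into two 4-bit blocks and map each to one hex digit.
nibbles = [byte_string[0:4], byte_string[4:8]]
hex_value = ''.join(format(int(n, 2), 'X') for n in nibbles)

print(hex_value, '=', int(byte_string, 2))  # 78 = 120
```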


One last explanation of hex. This will show you how hex code has really been used to compress both binary and decimal numbers in computing and save memory and space.

Web developers have often used Hexadecimal values to represent colors in Cascading Style Sheets. You often will see “color:#ffffff” which is white, or “color:#000000” which is black, as in text color on a white page. Each hex value in that series can be translated into both a decimal and a binary series as follows:

The hex white color value ffffff is really:

1111 1111 1111 1111 1111 1111

… in binary, right?

That’s a big value. But in Decimal it’s 16,777,215, almost 17 million, a really large, complicated number to display! Besides, its actual Decimal number is just too crazy a value for me to remember when thinking of the color “white”…

These colors are very big 24-bit or 3-byte numbers, it turns out. And that is why Hexadecimal is so much easier to use for color control: its Base 16 values are so much simpler.

In the old True Color VGA monitors of the 1990’s early web, 3-byte color combos were chosen because the numbers behind them could represent over 16 million possible decimal numbers or color combinations. It opened up richer color display on monitors, even if that is way more color than we can visibly decipher with our limited color vision. But the style sheet color codes allowed developers to use simple combinations of Hexadecimal letters and numbers to represent huge ranges of color, more than ever imagined or needed, in a simple 6-digit hex system (8D3FA5 for example).

In this 6-digit hex color system, the first two hex digits represent Red, the next two Green, and the last two Blue. Thus we have the RGB hexadecimal numbers used today, with each byte having 256 color possibilities. It’s 256 x 256 x 256 that gives the 16 million color possibilities.
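The RGB split described above is just byte extraction. This sketch pulls the three color bytes out of the article's sample color 8D3FA5 with bit shifts and masks:

```python
color = 0x8D3FA5  # the sample 6-digit hex color from the text

red = (color >> 16) & 0xFF   # top byte: 8D = 141
green = (color >> 8) & 0xFF  # middle byte: 3F = 63
blue = color & 0xFF          # bottom byte: A5 = 165
print(red, green, blue)      # 141 63 165

print(256 ** 3)  # 16777216, the full 24-bit color count
```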

You can see how Hexadecimal is with us today and why Base 16 number systems have saved us time and storage in our quest to simplify the growing digital systems that grow more complex each year.

Hope that makes sense now! I may post some other interesting technological facts in future blogs. So stay tuned!

– the Author

