Hex to Decimal Converter
Convert hexadecimal numbers to decimal, binary, and octal instantly. Essential tool for programmers and developers.
How to Use This Calculator
- Enter your hexadecimal value in the input field (e.g. FF, 1A3F).
- The decimal, binary, and octal equivalents are shown instantly.
- Valid hex characters are 0–9 and A–F (case-insensitive).
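The steps above can be sketched with Python's built-in conversions; `hex_input` is just an illustrative variable name, and the value is the example from the instructions:

```python
# Mirror the converter's three outputs using Python built-ins.
hex_input = "1A3F"          # example value from the steps above

value = int(hex_input, 16)  # parse as base 16 (case-insensitive)
print(value)                # decimal: 6719
print(bin(value))           # binary:  0b1101000111111
print(oct(value))           # octal:   0o15077
```

`int(s, 16)` accepts upper- or lowercase digits, matching the case-insensitive input rule above.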
Formula
Hex to Decimal: sum of (digit × 16^position) for each digit from right (position 0).
FF = (15 × 16¹) + (15 × 16⁰) = 240 + 15 = 255
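The formula can be sketched directly in Python; `hex_to_dec` is a hypothetical helper name for this page, not a library function:

```python
# Sum digit × 16^position, scanning digits from the right (position 0).
def hex_to_dec(s: str) -> int:
    digits = "0123456789ABCDEF"
    total = 0
    for position, ch in enumerate(reversed(s.upper())):
        total += digits.index(ch) * 16 ** position
    return total

print(hex_to_dec("FF"))  # 255 = 240 + 15
```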
Example
Convert 1A to decimal:
(1 × 16) + (10 × 1) = 16 + 10 = 26
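The worked example above can be printed term by term; the loop below is a minimal sketch using only built-ins:

```python
# Print each term of the positional sum for 1A, right to left.
for position, ch in enumerate(reversed("1A")):
    value = int(ch, 16)                      # digit value: A → 10
    print(f"{ch} × 16^{position} = {value * 16 ** position}")
print(int("1A", 16))  # 26, confirming the sum 16 + 10
```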
Frequently Asked Questions
- What is hexadecimal? Hexadecimal is a base-16 positional number system that uses sixteen symbols: the digits 0 through 9 represent values 0–9, and the letters A through F represent values 10–15. So A=10, B=11, C=12, D=13, E=14, F=15. Each position in a hex number represents a power of 16: the rightmost digit is the ones place (16⁰ = 1), the next is the sixteens place (16¹ = 16), then 256s (16² = 256), then 4096s (16³ = 4096), and so on. Hexadecimal is widely used in computing, programming, and electronics because it provides a compact, human-readable representation of binary data. HTML/CSS color codes (#FF5733), memory addresses, machine code, and error codes are all commonly expressed in hex. One hex digit represents exactly 4 binary bits (a nibble), so two hex digits represent one byte (8 bits).
- How do you convert hexadecimal to decimal? Multiply each digit by 16 raised to its positional power (starting from 0 on the right), then sum the results. For example, to convert 2A3: the digits are 2, A (=10), and 3, at positions 2, 1, and 0 from the right. Calculation: (2 × 16²) + (10 × 16¹) + (3 × 16⁰) = (2 × 256) + (10 × 16) + (3 × 1) = 512 + 160 + 3 = 675. For a two-digit hex number like FF: (15 × 16) + (15 × 1) = 240 + 15 = 255. For 1A: (1 × 16) + (10 × 1) = 26. A handy shortcut: memorize the powers of 16 — 16¹=16, 16²=256, 16³=4096, 16⁴=65536. Each additional hex digit multiplies the range by 16.
- What is FF in decimal? FF in hexadecimal equals 255 in decimal. This is the maximum value that can be stored in 1 byte (8 bits): 11111111 in binary. The calculation: F=15, so FF = (15 × 16) + (15 × 1) = 240 + 15 = 255. This value appears everywhere in computing: RGB colors, where each channel runs from #00 to #FF (0 to 255); subnet masks like 255.255.255.0 (FF.FF.FF.00 in hex); and opcodes and machine instructions. 0xFF is also a common bitmask in programming — a bitwise AND with 0xFF isolates the lowest 8 bits of any integer. The next hex value, 0x100, equals 256 in decimal, marking the start of the two-byte range.
- Why do programmers use hexadecimal? Hexadecimal is a compact, human-readable shorthand for binary data. The key insight: one hex digit maps exactly to 4 binary bits (a nibble). So 8-bit bytes need exactly 2 hex digits, 16-bit words need 4, and 32-bit integers need 8. This exact correspondence makes hex easy to convert to and from binary mentally. For example, the binary number 11001010 can be split into 1100 (=C) and 1010 (=A), giving hex CA — much easier than reading 8 binary digits. Hexadecimal also appears naturally in memory addresses, color codes, error codes, checksums, and network data. Octal (base 8) was historically used as a binary shorthand (1 octal digit = 3 bits) and is still seen in Unix file permissions (chmod 755), but hex has largely replaced it.
- What does 0xFF mean? 0xFF is the standard C/C++/JavaScript notation for the hexadecimal value FF. The "0x" prefix signals that the following characters are in base 16. 0xFF = 255 in decimal = 11111111 in binary. This prefix convention originated in the C programming language and is now used in virtually all modern programming languages, including Python, Java, JavaScript, C#, and Go. Other examples: 0x10 = 16, 0x1A = 26, 0x100 = 256, 0xFFFF = 65535 (maximum 16-bit value), 0xFFFFFFFF = 4,294,967,295 (maximum 32-bit unsigned integer). Some languages use alternative notations: # for colors in CSS (#FF0000), $ in some assembly dialects, or &H in Visual Basic. When you see 0x in a hex value, simply ignore the prefix and convert the remaining hex digits to decimal.
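Several points from the answers above — nibble splitting, the 0xFF bitmask, and the 0x prefix — can be illustrated in one short Python sketch. Variable names here (`bits`, `nibbles`, `x`) are illustrative, not from any library:

```python
# One hex digit maps to one 4-bit nibble, so 11001010 splits cleanly:
bits = "11001010"
nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]    # ['1100', '1010']
hex_digits = "".join(format(int(n, 2), "X") for n in nibbles)
print(hex_digits)  # CA

# 0xFF as a bitmask: AND keeps the lowest 8 bits; a shift exposes the next byte.
x = 0x1234                   # 4660 decimal
print(hex(x & 0xFF))         # 0x34 (low byte)
print(hex((x >> 8) & 0xFF))  # 0x12 (high byte)

# Python understands the 0x prefix in literals and in int() with base 16:
print(0xFF, 0xFFFF, 0xFFFFFFFF)        # 255 65535 4294967295
print(int("0xFF", 16), int("FF", 16))  # 255 255
```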