# 4.8 — Floating point numbers

Integers are great for counting whole numbers, but sometimes we need to store very large numbers, or numbers with a fractional component. A floating point type variable is a variable that can hold a real number, such as 4320.0, -3.33, or 0.01226. The floating part of the name floating point refers to the fact that the decimal point can “float”; that is, it can support a variable number of digits before and after the decimal point.

There are three different floating point data types: float, double, and long double. As with integers, C++ does not define the actual size of these types (but it does guarantee minimum sizes). On modern architectures, floating point representation almost always follows IEEE 754 binary format. In this format, a float is 4 bytes, a double is 8, and a long double can be equivalent to a double (8 bytes), an 80-bit type (often padded to 12 bytes), or 16 bytes.

Floating point data types are always signed (can hold positive and negative values).

| Category | Type | Minimum Size | Typical Size |
| --- | --- | --- | --- |
| floating point | float | 4 bytes | 4 bytes |
| floating point | double | 8 bytes | 8 bytes |
| floating point | long double | 8 bytes | 8, 12, or 16 bytes |

Here are some definitions of floating point numbers:

When using floating point literals, always include at least one decimal place (even if the decimal is 0). This helps the compiler understand that the number is a floating point number and not an integer.

Note that by default, floating point literals default to type double. An f suffix is used to denote a literal of type float.

Best practice

Always make sure the type of your literals match the type of the variables they’re being assigned to or used to initialize. Otherwise an unnecessary conversion will result, possibly with a loss of precision.

Printing floating point numbers

Now consider this simple program:

The results of this seemingly simple program may surprise you:

```
5
6.7
9.87654e+06
```

In the first case, std::cout printed 5, even though we typed in 5.0. By default, std::cout will not print the fractional part of a number if the fractional part is 0.

In the second case, the number prints as we expect.

In the third case, it printed the number in scientific notation (if you need a refresher on scientific notation, see lesson 4.7 -- Introduction to scientific notation).

Floating point range

Assuming IEEE 754 representation:

| Size | Range | Precision |
| --- | --- | --- |
| 4 bytes | ±1.18 x 10^-38 to ±3.4 x 10^38 | 6-9 significant digits, typically 7 |
| 8 bytes | ±2.23 x 10^-308 to ±1.80 x 10^308 | 15-18 significant digits, typically 16 |
| 80-bits (12 bytes) | ±3.36 x 10^-4932 to ±1.18 x 10^4932 | 18-21 significant digits |
| 16 bytes | ±3.36 x 10^-4932 to ±1.18 x 10^4932 | 33-36 significant digits |

It may seem a little odd that the 12-byte floating point number has the same range as the 16-byte floating point number. This is because they have the same number of bits dedicated to the exponent -- however, the 16-byte number can store more significant digits.

Floating point precision

Consider the fraction 1/3. The decimal representation of this number is 0.33333333333333… with 3’s going out to infinity. If you were writing this number on a piece of paper, your arm would get tired at some point, and you’d eventually stop writing. And the number you were left with would be close to 0.3333333333…. (with 3’s going out to infinity) but not exactly.

On a computer, an infinite length number would require infinite memory to store, and typically we only have 4 or 8 bytes. This limited memory means floating point numbers can only store a certain number of significant digits -- and that any additional significant digits are lost. The number that is actually stored will be close to the desired number, but not exact.

The precision of a floating point number defines how many significant digits it can represent without information loss.

When outputting floating point numbers, std::cout has a default precision of 6 -- that is, it assumes all floating point variables are only significant to 6 digits (the minimum precision of a float), and hence it will truncate anything after that.

The following program shows std::cout truncating to 6 digits:

This program outputs:

```
9.87654
987.654
987654
9.87654e+006
9.87654e-005
```

Note that each of these has only 6 significant digits.

Also note that std::cout will switch to outputting numbers in scientific notation in some cases. Depending on the compiler, the exponent will typically be padded to a minimum number of digits. Fear not, 9.87654e+006 is the same as 9.87654e6, just with some padding 0’s. The minimum number of exponent digits displayed is compiler-specific (Visual Studio uses 3, some others use 2 as per the C99 standard).

The number of digits of precision a floating point variable has depends on both the size (floats have less precision than doubles) and the particular value being stored (some values have more precision than others). Float values have between 6 and 9 digits of precision, with most float values having at least 7 significant digits. Double values have between 15 and 18 digits of precision, with most double values having at least 16 significant digits. Long double has a minimum precision of 15, 18, or 33 significant digits depending on how many bytes it occupies.

We can override the default precision that std::cout shows by using the std::setprecision() function that is defined in the iomanip header.

Outputs:

```
3.333333253860474
3.333333333333334
```

Because we set the precision to 16 digits, each of the above numbers is printed with 16 digits. But, as you can see, the numbers certainly aren’t precise to 16 digits! And because floats are less precise than doubles, the float exhibits more error.

Precision issues don’t just impact fractional numbers, they impact any number with too many significant digits. Let’s consider a big number:

Output:

```
123456792
```

123456792 is greater than 123456789. The value 123456789.0 has 10 significant digits, but float values typically have 7 digits of precision (and the result of 123456792 is precise only to 7 significant digits). We lost some precision! When precision is lost because a number can’t be stored precisely, this is called a rounding error.

Consequently, one has to be careful when using floating point numbers that require more precision than the variables can hold.

Best practice

Favor double over float unless space is at a premium, as the lack of precision in a float will often lead to inaccuracies.

Rounding errors make floating point comparisons tricky

Floating point numbers are tricky to work with due to non-obvious differences between binary (how data is stored) and decimal (how we think) numbers. Consider the fraction 1/10. In decimal, this is easily represented as 0.1, and we are used to thinking of 0.1 as an easily representable number with 1 significant digit. However, in binary, 0.1 is represented by the infinite sequence: 0.00011001100110011… Because of this, when we assign 0.1 to a floating point number, we’ll run into precision problems.

You can see the effects of this in the following program:

This outputs:

```
0.1
0.10000000000000001
```

On the top line, std::cout prints 0.1, as we expect.

On the bottom line, where we have std::cout show us 17 digits of precision, we see that d is actually not quite 0.1! This is because the double had to truncate the approximation due to its limited memory. The result is a number that is precise to 16 significant digits (which type double guarantees), but the number is not exactly 0.1. Rounding errors may make a number either slightly smaller or slightly larger, depending on where the truncation happens.

Rounding errors can have unexpected consequences:

```
1
0.99999999999999989
```

Although we might expect that d1 and d2 should be equal, we see that they are not. If we were to compare d1 and d2 in a program, the program would probably not perform as expected. Because floating point numbers tend to be inexact, comparing floating point numbers is generally problematic -- we discuss the subject more (and solutions) in lesson 5.6 -- Relational operators and floating point comparisons.

One last note on rounding errors: mathematical operations (such as addition and multiplication) tend to make rounding errors grow. So even though 0.1 has a rounding error in the 17th significant digit, when we add 0.1 ten times, the rounding error has crept into the 16th significant digit. Continued operations would cause this error to become increasingly significant.

Key insight

Rounding errors occur when a number can’t be stored precisely. This can happen even with simple numbers, like 0.1. Therefore, rounding errors can, and do, happen all the time. Rounding errors aren’t the exception -- they’re the rule. Never assume your floating point numbers are exact.

A corollary of this rule is: never use floating point numbers for financial or currency data.

NaN and Inf

There are two special categories of floating point numbers. The first is Inf, which represents infinity. Inf can be positive or negative. The second is NaN, which stands for “Not a Number”. There are several different kinds of NaN (which we won’t discuss here).

Here’s a program showing all three:

And the results using Visual Studio 2008 on Windows:

```
1.#INF
-1.#INF
1.#IND
```

INF stands for infinity, and IND stands for indeterminate. Note that the results of printing Inf and NaN are platform specific, so your results may vary.

Conclusion

To summarize, the two things you should remember about floating point numbers:

1) Floating point numbers are useful for storing very large or very small numbers, including those with fractional components.

2) Floating point numbers often have small rounding errors, even when the number has fewer significant digits than the precision. Many times these go unnoticed because they are so small, and because the numbers are truncated for output. However, comparisons of floating point numbers may not give the expected results. Performing mathematical operations on these values will cause the rounding errors to grow larger.


### 121 comments to 4.8 — Floating point numbers

• Marina

Hello!
I get an error while trying to run setprecision function.
Mine looks just like in the lesson above.
```
#include <iostream>
#include <iomanip> // for std::setprecision()

int main()
{
    float f(123456789.0f); // f has 9 significant digits
    std::cout << std::setprecision(9); // because we want to show all 9 significant digits in f
    std::cout << f << std::endl;
    return 0;
}
```

It compiles, but when I try to run it I get `sh: 1: Syntax error: "(" unexpected` in the answer box.

What is wrong and how can I fix it?
Regards,
Marina

• Alex

How are you compiling your program? sh:1: Syntax error looks like a script error, not a compiled program error...

• BROKEN WINDOW

Hi Alex,
if we want to store a 60 digits number in a variable or print it in output without scientific notation, what can we do ?
for example in python we can calculate 9999 to the exponent 9999 and it will print the result.
is there any way to do it ?

Thanks.

• Alex

Since C++ doesn't have support for arbitrarily large integers, probably the best solution here would be to install a library that implements large integers (e.g. https://mattmccutchen.net/bigint/).

• takise

Hello, I can't get one thing - is there any difference with rounding errors depending on which math operations we use.

```
double d = 0.1;
cout << d << endl;  // shows 0.10000000000000001
double d2 = (d + d + d + d + d + d + d + d + d + d);
cout << d2 << endl; // shows 0.99999999999999989
double d3 = 10 * d;
cout << d3 << endl; // shows 1  ?? why ??
return 0;
```

Why after multiplying the result is without rounding error ?

Thanks upfront for answer and great tutorial!

• Alex

Yes, the amount of rounding error can depend both on which mathematical operations you use and how many times you use them. The more times you use them, the more errors tend to grow. The plus case has more error because we used plus 10 times.

• AK

A good example which made me understand the precision error is the "Patriot Missile Failure" during the Gulf war. The internal clock was multiplied by 1/10 to get real time.
This calculation used a fixed point 24 bit multiplication, and over 100 hours this multiplication yielded a drift of 0.34 sec, which was enough for the missile to go undetected by the radar.

Details at :

• Pranav

float f is printed out as 8e-010. I am using visual studio. Is there a mistake in initializing the variable? ignore the random long double.

• Alex

No. 8e-10 and 8e-010 are the same number.

• Pranav

But rather than giving out 0.0000000008 it gives out 8e-10.

• Alex

Yup. std::cout will print some numbers in scientific notation, particularly if they are large or small.

• After mixing my stupid experiments with your great lessons on floating point numbers, I ended up with this conclusion. Please correct me if I'm wrong somewhere:

Floating numbers are different in a machine from what we expect. For example: 1.11 is not stored as 1.11 in a machine. Floating point numbers can't take more than 4, 8, 12 or 16 bytes in memory. That is why numbers going out to infinity have to be rounded up or down to an approximate value. Setting precision shows how numbers are stored in the machine but can't produce the exact number, because it has to truncate the number according to the given limit or due to limited memory. After truncating, it returns an approximate value (approximate according to the machine). However, we can see an exact 1.11 printed to the console, because cout displays expected results when precision is set within the precision range of the type given to the object (e.g. double x; // x is okay with precision 15, 16 or 17 when cout prints its value. If precision is set to 18 for x, cout will display unexpected digits after 15, 16 or 17 significant digits), and fails only when floating numbers are compared to each other using relational operators, or when evaluated as the result of an expression using arithmetic operators. 50 is the largest parameter setprecision can hold (setprecision(50)). I'm a bit confused here. Why 50? Does a number with 51 digits go beyond 16 bytes (the maximum a floating point number can reserve)? Or does setprecision() have a rule that this object can't display more than 50 significant digits?

• Alex

You're pretty much correct. Floating point numbers are not stored in decimal format internally (1.11 is not stored as "1.11"); they are stored as some magic combination of bits that gets reconstituted into 1.11. Just like decimal numbers, some floating point numbers can be represented precisely, and some can not. For example 1/3rd can't be represented precisely in decimal format (0.333333... you have to truncate somewhere), but 1/10 can (0.1).

It's unclear to me whether setprecision() having a max value 50 is a limitation of your compiler or something else.

• cout prints a maximum of 50 significant digits for all setprecision values greater than 50 on my machine. I'm using Code::Blocks.

• Sarah Gunn

I wanted to ask about the division by zero.  While I am sure you understand the mathematics behind why it is undefined, here in your code it indicates that it will return infinity.  Is it the double having a decimal that causes this to happen rather than just giving an error instead?  I ask because I want to understand what the code is doing under the hood so that I never mistakenly fall into this trap.  Thanks and great set of lessons!

• Alex

The short answer is that it works that way because IEEE 754 (the standard to which floating point number implementations adhere) defines it that way. See http://grouper.ieee.org/groups/754/faq.html#exceptions.

• Sarah Gunn

Thanks!  I will bear this in mind when I wish to reprogram my coffee pot!  ;)

It does make sense though from that perspective, hardware can be stupid and this has to be adaptable to that.