Why is it that when I print/display the result of

```
eval("11.05") + eval("-11")
```

it comes out as 0.05000000000000071 instead of the expected 0.05? Is there something I am missing?

This has nothing to do with `eval` (which you should avoid anyway). You get the same problem with plain `11.05 - 11`. This is just the usual floating-point precision problem.
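A quick check in Node.js or a browser console (an illustrative snippet, not part of the original answer) confirms that the plain subtraction produces the exact same value as the `eval` version:

```javascript
// The subtraction alone exhibits the same rounding artifact as the eval version.
const viaEval = eval("11.05") + eval("-11");
const direct = 11.05 - 11;

console.log(viaEval);            // 0.05000000000000071
console.log(direct);             // 0.05000000000000071
console.log(viaEval === direct); // true
```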

This has nothing to do with `eval`. In fact, typing `11.05 - 11` into a console produces the same result.

This is a consequence of how programming languages store floating-point numbers: most decimal fractions cannot be represented exactly in binary, so a small error is introduced. If you want to read more about this, check this out.
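You can see the stored approximation directly by printing more digits than the default display shows (a sketch; this assumes standard IEEE 754 doubles, which JavaScript always uses):

```javascript
// Neither 11.05 nor 0.05 is exactly representable in binary64; printing
// extra digits reveals the nearest representable values actually stored.
console.log((11.05).toFixed(20)); // "11.05000000000000071054"
console.log((0.05).toFixed(20));  // "0.05000000000000000278"
```

The two literals are rounded to *different* nearest doubles, which is why `11.05 - 11` does not land exactly on `0.05`.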

The function `eval` is absolutely innocent here; the culprit is floating-point arithmetic. If you do not expect many digits after the decimal point, you can limit how many are displayed, but you cannot avoid the underlying imprecision.
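Limiting the displayed digits, as suggested above, can be sketched with `toFixed` (a hypothetical usage example):

```javascript
const result = 11.05 - 11;
console.log(result);            // 0.05000000000000071

// toFixed limits the number of digits shown after the decimal point.
console.log(result.toFixed(2)); // "0.05"

// Note: toFixed returns a string; convert back with Number() if you
// need to keep computing with the rounded value.
console.log(Number(result.toFixed(2))); // 0.05
```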

As others have indicated, this is a floating-point problem and has nothing to do with `eval`. As for `eval` itself: you can easily avoid it here by using:

```
Number("11.05") + Number("-11");
```

To avoid the faulty outcome you could use `toPrecision`:

```
(Number("11.05") + Number("-11")).toPrecision(12);
// or if you want 0.05 to be the outcome
(Number("11.05") + Number("-11")).toPrecision(1);
```
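Since `toPrecision` returns a string, one way to get a clean number back (an illustrative sketch, not part of the original answer) is to wrap the result in `Number`, which also drops the trailing zeros:

```javascript
const sum = Number("11.05") + Number("-11"); // 0.05000000000000071

// toPrecision(12) trims the rounding noise at the 12th significant digit;
// Number() parses the string back, discarding trailing zeros.
const cleaned = Number(sum.toPrecision(12));
console.log(cleaned); // 0.05
```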
