This page is a snapshot from the LWG issues list; see the Library Active Issues List for more information and the meaning of CD1 status.

231. Precision in iostream?

Section: 28.3.4.3.3.3 [facet.num.put.virtuals] Status: CD1 Submitter: James Kanze, Stephen Clamage Opened: 2000-04-25 Last modified: 2016-01-28

Priority: Not Prioritized

Discussion:

What is the following program supposed to output?

    #include <iostream>

    int main()
    {
        std::cout.setf(std::ios::scientific, std::ios::floatfield);
        std::cout.precision(0);
        std::cout << 1.00 << '\n';
        return 0;
    }

From my C experience, I would expect "1e+00"; this is what printf("%.0e", 1.00) does. G++ outputs "1.000000e+00".

The only indication I can find in the standard is 22.2.2.2.2/11, where it says "For conversion from a floating-point type, if (flags & fixed) != 0 or if str.precision() > 0, then str.precision() is specified in the conversion specification." This is an obvious error: fixed is not a mask for a field but one value that a multi-bit field may take, so and'ing fmtflags with ios::fixed is not meaningful, at least not when ios::scientific has been set. G++'s behavior corresponds to what happens if you do evaluate (flags & fixed) != 0 with a typical implementation (floatfield == 3 << something, fixed == 1 << something, and scientific == 2 << something).
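
Concretely, here is a minimal sketch of the test under the hypothetical bit layout just described; the constants are illustrative, not the values of any real implementation, and the two candidate tests are the ones the next paragraph considers:

    #include <iostream>

    int main()
    {
        const unsigned fixed      = 1u << 4;   // value 01 of a two-bit field
        const unsigned scientific = 2u << 4;   // value 10 of the same field
        const unsigned floatfield = 3u << 4;   // mask covering both bits

        unsigned flags = scientific;           // what the program above sets

        std::cout << std::boolalpha
                  << ((flags & fixed) != 0) << '\n'           // false: the test as worded
                  << ((flags & floatfield) != 0) << '\n'      // true:  candidate fix (1), see below
                  << ((flags & floatfield) == fixed) << '\n'; // false: candidate fix (2), see below
        return 0;
    }

With precision 0 and (flags & fixed) == 0, the test as worded never passes the precision along, so the conversion falls back to the default of 6, which matches G++'s "1.000000e+00".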

Presumably, the intent is either (flags & floatfield) != 0 or (flags & floatfield) == fixed; the first gives something more or less like the effect of precision in a printf floating-point conversion. Only more or less, of course: to implement printf formatting correctly, you must know whether the precision was explicitly set or not, say by initializing it to -1 instead of 6 and stating that a precision < 0 means 6 for floating-point conversions, 1 for fixed-point (integer) conversions, and so on. Plus, of course, if precision == 0 and (flags & floatfield) == 0, 1 should be used. But it probably isn't necessary to emulate all of the anomalies of printf :-).
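
A minimal sketch of that sentinel scheme, assuming a hypothetical helper (effective_precision is illustrative, not part of any library):

    #include <ios>

    // Hypothetical helper: map a stored precision (-1 meaning "never
    // explicitly set") to the precision a printf-style floating-point
    // conversion would actually use.
    std::streamsize effective_precision(std::streamsize prec,
                                        std::ios_base::fmtflags flags)
    {
        if (prec < 0)
            return 6;   // printf default for floating-point conversions
        if (prec == 0 && (flags & std::ios_base::floatfield) == 0)
            return 1;   // %g treats a precision of 0 as 1
        return prec;
    }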

Proposed resolution:

Replace 28.3.4.3.3.3 [facet.num.put.virtuals], paragraph 11, with the following sentence:

For conversion from a floating-point type, str.precision() is specified in the conversion specification.

Rationale:

The floatfield determines whether numbers are formatted as if with %f, %e, or %g: if the fixed bit is set, it's %f; if scientific, it's %e; and if both bits are set, or neither, it's %g.
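
A sketch of that mapping (the helper function is illustrative, not something the standard defines):

    #include <ios>

    // Translate the float field of a stream's format flags into the
    // equivalent printf conversion specifier.
    const char* conversion(std::ios_base::fmtflags f)
    {
        const std::ios_base::fmtflags mode = f & std::ios_base::floatfield;
        if (mode == std::ios_base::fixed)      return "%f";
        if (mode == std::ios_base::scientific) return "%e";
        return "%g";   // both bits set, or neither
    }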

Turning to the C standard, a precision of 0 is meaningful for %f and %e. For %g, precision 0 is taken to be the same as precision 1.
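
This is easy to verify directly with printf; the expected output is shown in the comments:

    #include <cstdio>

    int main()
    {
        std::printf("%.0f\n", 1234.0);   // "1234"  - precision 0 honoured
        std::printf("%.0e\n", 1234.0);   // "1e+03" - precision 0 honoured
        std::printf("%.0g\n", 1234.0);   // "1e+03" - %g treats 0 as 1
        std::printf("%.1g\n", 1234.0);   // "1e+03" - same as precision 0
        return 0;
    }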

The proposed resolution has the effect that if neither fixed nor scientific is set we'll be specifying a precision of 0, which will be internally turned into 1. There's no need to call it out as a special case.

The output of the above program will be "1e+00".

[Post-Curaçao: Howard provided improved wording covering the case where precision is 0 and mode is %g.]