    Reduce number of bytes examined by convert_one_string_to_scalar(). · aad663a0
    Tom Lane authored
    Previously, convert_one_string_to_scalar() would examine up to 20 bytes of
    the input string, producing a scalar conversion with theoretical precision
    far greater than is of any possible use considering the other limitations
    on the accuracy of the resulting selectivity estimate.  (I think this
    choice might pre-date the caller-level logic that strips any common prefix
    of the strings; before that, there could have been value in scanning the
    strings far enough to use all the precision available in a double.)
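
    For context, the conversion in question maps a string's leading
    bytes onto a fraction in roughly [0, 1), treating each byte as a
    digit in base (rangehi - rangelo + 1).  A minimal sketch of that
    logic with the new 12-byte cap (the signature and names here are
    assumptions for illustration, not the exact selfuncs.c code):

        #include <string.h>

        /* Sketch: map a string to a scalar by reading its leading
         * bytes as digits in base (rangehi - rangelo + 1). */
        static double
        string_to_scalar_sketch(const char *value, int rangelo, int rangehi)
        {
            double      base = (double) (rangehi - rangelo + 1);
            double      num = 0.0;
            double      denom = base;
            int         slen = (int) strlen(value);

            /* 12 bytes already give more resolution than the
             * selectivity estimate can use, and they cap denom at
             * base^13 <= 256^13 ~= 2.03e31. */
            if (slen > 12)
                slen = 12;

            while (slen-- > 0)
            {
                int         ch = (unsigned char) *value++;

                /* Clamp out-of-range bytes to just outside the range */
                if (ch < rangelo)
                    ch = rangelo - 1;
                else if (ch > rangehi)
                    ch = rangehi + 1;
                num += ((double) (ch - rangelo)) / denom;
                denom *= base;
            }
            return num;
        }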
    
    Aside from wasting cycles to little purpose, this choice meant that the
    "denom" variable could grow to as much as 256^21 = 3.74e50, which could
    overflow in some non-IEEE float arithmetics.  While we don't really support
    any machines with non-IEEE arithmetic anymore, this still seems like quite
    an unnecessary platform dependency.  Limit the scan to 12 bytes instead,
    thus limiting "denom" to 256^13 = 2.03e31, a value more likely to be
    computable everywhere.
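
    The bound follows because "denom" starts at the base and is
    multiplied by it once per byte examined, so scanning n bytes over
    a 256-value byte range yields denom = 256^(n+1).  A quick check of
    the two limits (a standalone snippet, not from the commit):

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            /* denom after scanning n bytes is base^(n+1), base <= 256 */
            double denom_old = pow(256.0, 21.0);    /* 20-byte scan */
            double denom_new = pow(256.0, 13.0);    /* 12-byte scan */

            printf("256^21 = %.3g\n", denom_old);   /* ~3.74e50 */
            printf("256^13 = %.3g\n", denom_new);   /* ~2.03e31 */
            return 0;
        }

    For what it's worth, VAX D_floating shares F_floating's 8-bit
    exponent and so tops out around 1.7e38; 3.74e50 overflows there
    while 2.03e31 fits easily, which lines up with the failures noted
    below.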
    
    Per testing by Greg Stark, which showed overflow failures in our standard
    regression tests on VAX.