***nalaginrut_ is now known as nalaginrut
***karswell` is now known as karswell
<mark_weaver>ijp: sneek just delivered your message to me on #guix: ijp says: is (equal? (/ 0.0 0.0) (/ 0.0 0.0)) => #t, (= (/ 0.0 0.0) (/ 0.0 0.0)) => #f kosher?
<ijp>I can think of reasons, but I'd prefer to hear yours
<ijp>I mean, the obvious point is that, with the exception of mixed exactness, equal? and eqv? agree with =
<mark_weaver>well, as for why 'equal?' returns #true: that's because equal? on numbers means eqv?, which is historically an approximation of operational equivalence.
<mark_weaver>whereas = tests mathematical equality, which is a different concept.
<mark_weaver>mixed exactness is not the only case where eqv? and = differ. they also differ for the IEEE signed zeroes, and if multiple precisions of inexacts are available and affect the precision of the results, then they differ there also.
<mark_weaver>for example, if (sqrt 2.0) differs from (sqrt 2.0s0), then (eqv? 2.0 2.0s0) must return false because they are not operationally equivalent, but of course they are mathematically equivalent.
<Fulax>R5RS states that eqv? returns #f if both arguments are numbers of the same exactness and for which = returns #f
<ijp>this is not disputed
<ijp>I'm trying to find the notes about precision, since it isn't under eqv? in the r6rs as far as I can see
<mark_weaver>Fulax: R3RS was very explicit that 'eqv?' on numbers was about operational equivalence. they simplified that language in R4RS and R5RS, and then R6RS again returns to an operational equivalence def'n for 'eqv?'.
<ijp>although there is the obvious note about the unreliability of = with inexacts under the definitions for = & co
<mark_weaver>ijp: they don't explicitly mention precision as affecting 'eqv?', but if you read what 'eqv?' means on inexact numbers, it can be deduced.
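[The exchange above can be checked directly at a Guile REPL; the results below assume Guile's IEEE double-precision flonums:]

```scheme
;; NaN: `equal?' (which defers to `eqv?' for numbers) treats two NaNs
;; as operationally equivalent, while `=' follows IEEE comparison,
;; under which NaN is not numerically equal to anything.
(define nan (/ 0.0 0.0))
(equal? nan nan)   ; => #t
(= nan nan)        ; => #f

;; The IEEE signed zeroes split the other way: mathematically equal,
;; but operationally distinguishable (e.g. by taking a reciprocal).
(= 0.0 -0.0)       ; => #t
(eqv? 0.0 -0.0)    ; => #f
(/ 1.0 0.0)        ; => +inf.0
(/ 1.0 -0.0)       ; => -inf.0
```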
<mark_weaver>ijp: suppose low-precision-two is 2.0, and high-precision-two is also 2.0 but represented as a bigfloat with 10000 bits of precision. now suppose that (sqrt low-precision-two) returns the usual amount of precision, and (sqrt high-precision-two) returns 10000 bits of precision.
<mark_weaver>from this, and the definition of 'eqv?' on numbers in R6RS, you can deduce that (eqv? low-precision-two high-precision-two) must return #f.
<ijp>hmm, so this follows from the "same results from a finite composition blah blah" part?
<ijp>because although (= low-precision-two high-precision-two), sqrt can distinguish them
<ijp>okay, that satisfies the lawyer devil on my right shoulder
<ijp>I need to get around to reading that "What Every Computer Scientist Should Know About Floating-Point Arithmetic" at some point
<ijp>mark_weaver: out of curiosity, are there any other common inexact representations besides IEEE floats?
<mark_weaver>ijp: another good representation (superior in many ways, really) is truncated continued fractions.
<ijp>ah, of course. how is that implementation coming, btw?
<mark_weaver>but of course, you can't beat the performance of IEEE, simply because it's so widely implemented in hardware.
<mark_weaver>ijp: well, I was trying for something more tricky: exact representations of irrational numbers as continued fraction streams.
<mark_weaver>but I learned that you can't really swap an exact real number representation into an existing system, because code everywhere assumes that =, <, and > will terminate.
<mark_weaver>but in general, comparisons of exact reals do not necessarily terminate.
<mark_weaver>but two nice properties of continued fractions are (1) any rational number has a finite representation, and (2) any solution to a quadratic equation has a continued fraction representation that repeats, in the same way that decimal representations of rational numbers repeat (if not finite).
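[Property (1) is easy to demonstrate with Guile's exact rationals; `cf-terms` below is a hypothetical helper, not code from the discussion:]

```scheme
;; Continued-fraction terms of an exact rational: repeatedly split off
;; the integer part and take the reciprocal of the remainder.  For any
;; exact rational input this always terminates -- property (1) above.
(define (cf-terms x)
  (let ((a (floor x)))        ; floor of an exact rational is exact
    (if (= a x)
        (list a)
        (cons a (cf-terms (/ (- x a)))))))

(cf-terms 355/113)  ; => (3 7 16), the terms of the classic pi approximation
```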
<mark_weaver>also, fundamental irrational constants like pi and e have continued fraction representations that make their simplicity evident, whereas they seem quite mysterious when written in decimal (or any other base).
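[For e, the continued fraction is [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...], a plainly visible pattern, unlike e's decimal digits. A sketch assuming Guile with SRFI-1; `e-cf-terms` and `cf->exact` are hypothetical names, not from the log:]

```scheme
(use-modules (srfi srfi-1))  ; for iota

(define (e-cf-terms n)
  ;; First n terms of e's continued fraction, generated from the
  ;; pattern [2; 1, 2, 1, 1, 4, 1, 1, 6, ...].
  (map (lambda (i)
         (cond ((zero? i) 2)
               ((= (modulo i 3) 2) (* 2 (+ 1 (quotient i 3))))
               (else 1)))
       (iota n)))

(define (cf->exact terms)
  ;; Fold a finite list of terms back into an exact rational convergent.
  (if (null? (cdr terms))
      (car terms)
      (+ (car terms) (/ (cf->exact (cdr terms))))))

(exact->inexact (cf->exact (e-cf-terms 10))) ; ≈ 2.718283..., close to e
```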