IRC channel logs

2022-01-07.log


<rlb>wingo: haven't had time to keep up lately, and likely won't for a while yet, but as far as a release goes, I'm not sure what the current plan is for inlining, but (as of a month or two ago) the 3.0 behavior broke lokke and I'd love to avoid that if possible. Of course if I can fix it in lokke (if I can figure out what's wrong), that'd be even better.
<rlb>And if there's something I should read wrt current plan, by all means, just point me that way.
<sneek>Welcome back dsmith!!
<dsmith>!uptime
<sneek>I've been running for 5 days
<sneek>This system has been up 24 weeks, 1 day, 10 hours, 47 minutes
<dsmith>sneek: botsnack
<sneek>:)
<dsmith>goodbot
<chrislck>quantum computing would make (amb) so much faster
<wingo>civodul: lloda: pushed wip-inline-digits
<wingo>not merged yet
<wingo>but mergeable i think
<wingo>wdyt?
<wingo> 16 files changed, 5061 insertions(+), 5068 deletions(-)
<civodul>wingo: woow, i'd have expected (hoped for) 2 files :-)
<civodul>i'll take a look!
<wingo>haha in-place mutable bignums leaked everywhere :P
<wingo>civodul: if you could do performance testing that would be really great. i don't have numbers currently
<civodul>yes, that's a good idea
<wingo>so we should be enabling lto
<wingo>in configure, if possible. how do we do that
*wingo glares at the useless libtool
<wingo>a -g -O2 -flto link for libguile here just took around 5-10 seconds
<wingo>... and knocked a megabyte off the size of the libguile.so
<civodul>this much?
<wingo>hum, maybe not. i was running with just -flto but i needed to also configure with AR=gcc-ar NM=gcc-nm RANLIB=gcc-ranlib to get it working
<wingo>hm, seems to work tho
<wingo>neat
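
A sketch of one way to wire this up by hand, going by the exchange above: pass -flto in the compile flags and point configure at GCC's LTO-aware wrapper tools so that libtool's static archives keep the intermediate representation (exact flag set assumed from the discussion, not a tested recipe):

    ./configure CFLAGS="-g -O2 -flto" LDFLAGS="-flto" \
                AR=gcc-ar NM=gcc-nm RANLIB=gcc-ranlib
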
<wingo>gosh, so much elf superposition
<lloda>*triggered*
<wingo>humm, callgrind tells me i slowed things down significantly. weird
<wingo>maybe i recompile without lto, to do apples to apples...
<civodul>we should add a --with-lto package transformation option in Guix
<wingo>lol this is funny. with stock guile, no GUILE_INSTALL_GMP_MEMORY_FUNCTIONS, the chudnovsky benchmark runs in 0.74s. with GUILE_INSTALL_GMP_MEMORY_FUNCTIONS=1, it's 0.24s. on my branch, it's always 0.33s
<wingo>i wonder if i will be able to find the optimization to beat GUILE_INSTALL_GMP_MEMORY_FUNCTIONS
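
For context, GUILE_INSTALL_GMP_MEMORY_FUNCTIONS is read at Guile startup and, when set, has Guile install GC-aware allocators for GMP. A minimal way to reproduce the comparison above (the benchmark script name here is a stand-in, not a file from the discussion):

    # chudnovsky.scm stands in for whatever pi benchmark is being timed
    guile chudnovsky.scm
    GUILE_INSTALL_GMP_MEMORY_FUNCTIONS=1 guile chudnovsky.scm
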
<civodul>heh
<civodul>i'm building Guix with Guile from that branch right now
<civodul>not doing any measurement though, maybe i should
<civodul>the compiler can be bignum-intensive so it's a good benchmark
<civodul>oh, it crashed, something about autoloads
<civodul>looks like language/tree-il/resolve-free-vars.scm:54 eagerly tries to resolve autoloads
<dsmith-work>Happy Friday, Guilers!!
<civodul>happy Friday, dsmith-work!
*dsmith-work celebrates with a fresh cup of coffee
<wingo>civodul: hum that certainly sounds like a bug!
<wingo>disable that pass, perhaps? -Ono-resolve-free-vars
<civodul>wingo: should 'imported-resolver' have a special case for interfaces marked as "autoload"?
<wingo>civodul: i guess so
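
For anyone hitting the same crash: Guile's compiler lets individual optimization passes be toggled with -O options, so the workaround wingo suggests would look roughly like this (output path hypothetical):

    guild compile -Ono-resolve-free-vars -o module.go module.scm
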
<lloda>how do i get the exact bits of a nan?
<lloda>bytevector i guess
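
A minimal sketch of that bytevector trick, assuming (rnrs bytevectors); double->bits is a hypothetical helper name, not anything from Guile's API:

    (use-modules (rnrs bytevectors))

    ;; Return the 8 raw bytes of a double, so the sign and payload
    ;; bits of a NaN can be inspected exactly.
    (define (double->bits x)
      (let ((bv (make-bytevector 8)))
        (bytevector-ieee-double-set! bv 0 x (endianness little))
        bv))

    (double->bits +nan.0)  ; => #vu8(0 0 0 0 0 0 248 127) on little-endian
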
<lloda>so this is kind of an old bug: log10 of a real nan could return a complex nan (nan+i*finite) for certain values of nan
<lloda>turned out i could work around this by using log10(magnitude(nan)), because log10 was wrongly looking at the sign of its arg even when that arg was nan
<lloda>but in the wip-inline-digits branch, magnitude(nan) doesn't do anything, so the workaround doesn't work anymore
<lloda>can't say it's wrong for magnitude(nan) to do nothing, so log10 should be fixed instead to always return a real nan for a real nan argument
<lloda>the specific change is that (magnitude #vu8(0 0 0 0 0 0 248 255)) used to give #vu8(0 0 0 0 0 0 248 127) and now gives #vu8(0 0 0 0 0 0 248 255). (log10 #vu8(0 0 0 0 0 0 248 255)) is the bug in either branch
<lloda>i don't see a fix without an explicit nan check
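
The real fix would land in the C implementation of log10 in numbers.c, but a hypothetical Scheme wrapper sketches the semantics lloda is asking for:

    ;; Hypothetical wrapper, not the actual patch: a real NaN argument
    ;; must yield a real NaN, never a complex nan+i*finite, regardless
    ;; of the NaN's sign bit.
    (define (log10* z)
      (if (and (real? z) (nan? z))
          +nan.0
          (log10 z)))
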
<wingo>lloda: should we change something?
<lloda>wingo: I'll send a patch to the list
<lloda>i remember something similar wrt sqrt back when
<wingo>i fixed the perf regression, whee
<wingo>or close enough anyway
***jackhill is now known as KM4MBG
***KM4MBG is now known as jackhill