IRC channel logs
2025-09-02.log
<rlb>old: if you meant 10, then yes, I'd say anyone who wants to use 3.0.10 can apply the relevant patches as we did for debian, but most people should just adopt 11 once it's available.
<rlb>i.e. 3.0.10 has been fixed, in main, which will become 3.0.11.
<dpk>and yes, in both R6RS and R7RS small it’s forbidden to mutate imported and exported bindings; R6RS offers a get-out from this in the form of identifier-syntax
<dpk>as is typical, the effects of mutation are that in R6RS a violation is required to be raised, and in R7RS small anything could happen (from silent failure to memory corruption, including the notorious demons flying out of your nose)
<rlb>I was thinking about performance and regressions, and not sure this is a good idea, but randomly wondered if we have anywhere we could host some random amd64 machine, were one available -- solely for running benchmarks via a codeberg runner. Clearly all the benchmarking caveats would apply, but it'd make it much easier to spot obvious differences.
<rlb>(I was also trying to remember how expensive ecraven's benchmarks are -- I've run them, but don't recall.)
<rlb>As a smaller step, I suppose we could make it easier to at least manually test two commits locally, so people could do that if they liked during a release.
<rlb>e.g. right now say "meta/compare-perf v3.0.10 main"
<rlb>(I'd probably also want to be able to just compare two built guiles, so I can avoid bootstrap costs when I already have a build.)
<ArneBab>rlb: I can run ecraven’s benchmarks for two guile versions within less than an hour.
<apteryx>is there something like for-each-par (parallel) I can use in Guile?
<lloda>rlb: should i send a patch to the list or a pr on codeberg? i'm guessing i shouldn't push to savannah any longer?
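[Editor's note on apteryx's question above: Guile does ship parallel iteration forms in the `(ice-9 threads)` module, `par-for-each` and `par-map`. A minimal sketch:]

```scheme
;; Parallel iteration in Guile via (ice-9 threads).
(use-modules (ice-9 threads))

;; Apply a procedure to each element, in parallel; the order in which
;; the side effects happen is unspecified.
(par-for-each (lambda (x) (display x) (newline))
              '(1 2 3 4))

;; par-map runs the procedure in parallel but returns results in
;; list order, like map.
(par-map (lambda (x) (* x x)) '(1 2 3 4))  ; => (1 4 9 16)
```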
<identity>lloda: R7RS: “If any argument is inexact, then the result will also be inexact (unless the procedure can prove that the inaccuracy is not large enough to affect the result, which is possible only in unusual implementations).”
<identity>most, if not all, procedures follow this rule
<lloda>i see the same in r5 & r6, but tbh i don't understand it
<lloda>where the inaccuracy comes from. If it's from the previous chain of computation, that's a ridiculously high bar; if it's from float precision, the proof is trivial. I also think it's no good that (max a b) might return something that is neither a nor b and force the user to guard against that
<lloda>iow i don't think max is like *
<identity>see, floats even confuse me, what about computers?
<identity>the point i tried to make, poorly, is that you cannot guarantee that any computation involving floating-point numbers is exact for all floating-point implementations, as there are floats that are not IEEE 754 around (“This report recommends, but does not require, that implementations that use floating-point representations follow the IEEE 754 standard”), and the proof that (max 1 #i1) is #e1 for all of them is non-trivial, and the
<identity>proof that (max x (inexact y)) is exact is non-trivial even for IEEE 754 floats
<identity>“Nothing brings fear to my heart more than a floating point number.” —Gerald Jay Sussman
<lloda>ok, i agree that the proof isn't trivial even without context, bc you can have an exact number that is between two consecutive floating point values
<lloda>i'm still not sure that what we have is more useful or makes more sense than guaranteeing that you get one of the two arguments :-\
<identity>an inexact number represents an approximation or a number that has been obtained out of an approximation (garbage in, garbage out; floats in, floats out), so i think it makes sense to return a float here: it is “probably” the maximum number, not “definitely” the maximum number
<linas>while I'm here, I'd like to express an old desire for arbitrary-precision floats.
<linas>I ended up coding a library in C/C++ but guile was my original hope for this.
<lloda>i imagine you'd want integration in the scheme numeric stack, not having to call fp+ fp* fp unfp all the time
<linas>hi dsmith yes, of course, I made a misleading comment. The lib I wrote had a pile of functions gnu mpfr doesn't have.
<linas>I would have written them in scheme; it would have been easier. :-)
<linas>lloda why yes, I imagine I would :-) There are some rather subtle issues of how much accuracy is needed for intermediate calculations (often one needs many more bits for the terms in the middle, to get an accurate final result) and so there'd be some klunkiness in having to set that.
<linas>some new srfi would probably be the way; I have not thought about this in years.
<ArneBab>linas: do you mean arbitrary precision as with Fortran (where you can define the precision of each number)?
<linas>I have not used Fortran in .. decades. But say you want 300 bits of accuracy in your final result, but the final result is the difference between two numbers that are very close to each other. So you may need to compute those to 600 bits, to get a result that's good for 300.
<linas>That example feels artificial, but often one has to sum alternating series where terms alternate being positive and negative, often very large, but the total sum is small.
<linas>mpfr has this automatically built in for those functions it supports (sine, cosine, etc.) but to create new functions, one has to deal with this.
<old>don't we already have unlimited-precision arithmetic on rationals in Guile?
<old>or are you working on irrational numbers as well?
<linas>complex floats, actually. Number theory. Past tense, "was". I work on other things now.
<lloda>afaik fortran doesn't have arbitrary precision.
<lloda>You can say *8 or *16 but that's the same as saying float/double/quad in C
<lloda>i'm sure there are libraries like anywhere else
<linas>Fortran changed since I last used it. It got weirdly modern :-)
<ArneBab>linas: that example is what you actually find in fortran codebases of weather models :-)
<lloda>modern fortran is totally different from f77. It's barely the same language
<ArneBab>but it can still use all that old f77 code.
<identity>yeah, but i still like my apples more :)
<ArneBab>ACTION decided in 2013 between moving to Fortran or to Guile -- stuck with Guile by gut feeling after using both for a while.
<ArneBab>Fortran was the pragmatic choice, because that’s what I would have used in my PhD and PostDoc, and it’s really fast. Guile Scheme allowed me to push my boundaries of programming.
<lloda>guile is really easy to interface with fortran now with the 'iso c binding', and that's pretty recent, like gcc 12 or 13
<ArneBab>I didn’t know that -- do you happen to have a guide?
<lloda>you could say it's even easier than c itself, bc c doesn't have a proper numeric array type
<identity>also try APL, it is also a really interesting family of languages
<lloda>apl is for sure worth knowing
<linas>When I read SICP, my brain exploded. I thought I already understood computing pretty well, but SICP made me realize how little I'd seen.
<ArneBab>linas: The Little Schemer did that for me :-)
<linas>all right then ... irritation of the day: are there scheme APIs into CUDA or OpenCL? Just ... curious
<identity>there should be opengl bindings, at least
<linas>OpenGL and OpenCL are wildly different beasts.
<linas>But it's an idle question; I don't see that anyone will start writing transformers in guile. It's just ... whatever. Wondering
<rlb>nb. codeberg puts pull requests at refs/pull/N/head
<mwette>How is codeberg different from gitlab, notabug, etc?
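[Editor's note on the earlier max/exactness exchange: the contagion rule identity quoted is easy to observe in Guile; if any argument to max is inexact, the result is inexact, even when an exact argument is numerically the largest:]

```scheme
;; Exactness contagion in max, as Guile implements it.
(max 1 2)            ; => 2    (all arguments exact, result exact)
(max 1 2.0)          ; => 2.0  (one inexact argument, inexact result)
(max 3 2.0)          ; => 3.0  (the exact 3 "wins" but is returned inexact,
                     ;          which is lloda's complaint: neither a nor b)
(exact? (max 3 2.0)) ; => #f
```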
<identity>compared to gitlab, it is actually free software, not sure about notabug or others
<FuncProgLinux>notabug is based on gogs, which is also free software and the original source before gitea/forgejo :)
<rlb>I'd also say my impression is that gitlab has more features (perhaps for better and worse - I've assumed it's much heavier weight), and the core of it is free software, but it's another project where I believe a company drives most of the development and has (I think) proprietary add-ons. One notable, large instance of the free core is https://salsa.debian.org.
<FuncProgLinux>I've self-hosted GitLab in the past. It does need quite a buffed server; last time I tried, 4GB RAM at minimum. Gogs/Forgejo can run happily on a raspberry pi :D
<ieure>In my experience, Gitlab is horrendously buggy.
<ArneBab>We use the free software version of gitlab at work, and it’s pretty full-featured with a lot of integrations to other tools.
<ieure>My last job used it; multiple times a week, very basic stuff would just break. Like the ability to submit code review.
<ArneBab>sounds like our IT folks do a pretty good job keeping it just working. I’ll see that I tell them.
<rlb>That doesn't seem great: https://debbugs.gnu.org/79100 -- though if the code is correct, and it's supposed to work, and we don't have any related tests, I'm not surprised it doesn't now.
<FuncProgLinux>Does flymake support guile? Or is there a good way to see warnings/errors in Emacs for this language? :)
<identity>FuncProgLinux: not out of the box, it seems; i just do M-x compile to check that the stuff runs properly from the command line
<rlb>...looks like that code crashes when null is passed to is_dynamic_state(x).
<rlb>via guilify_self_2()
<FuncProgLinux>identity: I never thought of that approach :o I'll have to read a bit more on the compile feature. Thank you.
<dsmith>rlb, Doesn't crash for me if I put a 100ms sleep in between creating the threads
<dsmith>rlb, Does segfault (for me) with a 10ms sleep. Methinks there is a race somewhere...
<rlb>it looks like the problem may just be that we capture the default_dynamic_state value (nil) long before anyone calls scm_i_init_guile(), which eventually calls scm_init_threads to initialize the default dynamic state to #f.
<rlb>If so, not sure how that was ever supposed to work, but guessing it used to.
<dsmith>rlb, Yeah, the first time scm_i_with_guile is called, dynamic_state is 0x0. The second time, it has some other value. Unless it's called too quickly, then the second time it's also 0x0.
<rlb>it's initialized indirectly via scm_i_init_guile() I think, which is guarded by the scm_i_init_mutex (at first glance), but clearly "something is wrong (TM)".
<dsmith>Sounds like someone hand-rolled what pthread_once should do?
<rlb>oh, wait -- I can see the "second" thread blocking on the lock, but by that point, I bet it already has (cached) the bad null state (i.e. in the argument there in scm_i_init_thread_for_guile)
<rlb>when it gets the lock, everything is all set up by the other thread, but it still has the null via the arg.
<rlb>which it then passes to guilify_self_2, and boom.
<rlb>i.e. we either need to not pass that as an arg, or refresh it after getting the lock, or...
<ArneBab>rlb: I just ran the r7rs benchmarks against 3.0.8, 3.0.9, 3.0.10, and main. I see no slowdown in geometric mean of the tests compared to 3.0.10. This is only a single run, so it’s not very robust, but there’s at least no indication of catastrophic slowdowns.
<ArneBab>rlb: there was a slowdown in 3.0.10, though.
<rlb>Interesting - was this with all versions newly compiled, or were you comparing older built versions?
<ArneBab>That’s with all versions newly compiled
<ArneBab>(which takes more than half the time ☺)
<ArneBab>the exact command I run (though it won’t run for you without adaptation): export VERSIONS="main v3.0.10 v3.0.9 v3.0.8"; cd ~/eigenes/Programme/r7rs-benchmarks; for i in $VERSIONS; do (cd ~/eigenes/Programme/guile; git checkout $i; guix shell -D guile gperf sed guile -- bash -x -c 'make clean; autoreconf -i; ./configure CFLAGS="$CFLAGS -march=native"; make -j6'); GUILE=~/eigenes/Programme/guile/meta/guile ./bench guile all; mv results.
<ArneBab>Guile results.Guile--$i; done; rm all.csv; for i in $VERSIONS; do grep -a -h '+!CSVLINE' results.Guile--$i | sed s/guile/guile--$i/g | sed 's/+!CSVLINE!+//' >> all.csv; done; for i in $VERSIONS; do ~/Schreibtisch/wisp/examples/evaluate-r7rs-benchmark.w ~/eigenes/Programme/r7rs-benchmarks/all.csv guile--$i 2>/dev/null; done | grep -A2 "Geometric Mean slowdown"
<rlb>I suspect it might be ok to just incrementally build, at least for "forward" movement on the branch -- which should be much faster, if that's not what you were already doing.
<rlb>If we can also run just that test, it might make it reasonably easy to track down the relevant changes at some point.
<ArneBab>I’m actually doing a clean build -- usually before taking an hour break :-)
<ArneBab>you can just run ./bench guile array1, I think.
<rlb>(I've often successfully rebuilt incrementally even after jumping around randomly in the past, which I'd imagined might be more trouble -- though there are certain commits you can't jump across.)
<rlb>i.e. in the utf-8 branch there's one that changes the .go string layout -- crossing that boundary requires a bootstrap.
<rlb>(was painful sometimes when tracking down some issues and/or rebasing to clean up)
<rlb>So if we have a way to just "make -jN && run-that-test", we could just collect a bunch of results maybe fairly quickly via "git rebase -i -x '...'".
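[Editor's note: rlb's `git rebase -x` idea might look roughly like the sketch below. All paths are illustrative, and it assumes an ecraven r7rs-benchmarks checkout next to the guile tree; crossing a bootstrap-requiring commit (like the .go layout change mentioned above) would still break the incremental build:]

```shell
# Hypothetical sketch: replay the commits since v3.0.10 on the current
# branch, rebuilding incrementally and running one benchmark after each.
cd ~/src/guile    # illustrative path to a guile checkout
git rebase -x 'make -j6 && \
  (G="$PWD/meta/guile"; cd ../r7rs-benchmarks && GUILE="$G" ./bench guile array1)' \
  v3.0.10
```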
<ArneBab>yes, just use GUILE=/path/to/guile/meta/guile ./bench guile array1
<ArneBab>But I just checked my local build with march=native vs. the guix build, and the guix build is (surprisingly to me) around 20% faster.
<ArneBab>So it seems that I’m missing an optimization that’s done for guix.
<rlb>or maybe (if you're running prebuilt binaries?) it's related to build env, i.e. compiler, etc.
<rlb>though that's a pretty big difference :)
<ArneBab>Or I’m setting CFLAGS as a fixed value and not adding -O2 '^_^
<rlb>Not sure, but it may "augment" as you'd expect if you do that during configure -- I forget which projects do what.
<rlb>i.e. ./configure CFLAGS=...
<rlb>I *think* that might not be an override, but rather an augmentation, but I may be confusing it with another project/arrangement.
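[Editor's note: for autoconf-generated configure scripts, user-supplied CFLAGS *replace* the default rather than augmenting it: AC_PROG_CC only sets CFLAGS to "-g -O2" (with GCC) when the user hasn't set it. That would explain the ~20% gap above, since ArneBab's build would then be compiled without -O2:]

```shell
# User CFLAGS override the autoconf default "-g -O2",
# so this build is effectively unoptimized:
./configure CFLAGS="-march=native"

# Include the optimization flags explicitly to get both:
./configure CFLAGS="-g -O2 -march=native"
```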