IRC channel logs

2025-09-08.log


<euouae>Hello how is load-extension to be used in a guile module?
<sneek>Welcome back euouae, you have 1 message!
<sneek>euouae, ArneBab says: wisp distcheck works, and it’s built on guile distcheck. So while autoconf or automake improvements could make this more convenient (it was a hassle) they are not required to make it work.
<euouae>I fixed distcheck based on an idea in the automake ML
<euouae>ArneBab_: ^
<euouae>Basically used GUILE_EFFECTIVE_VERSION instead. I am going to eventually post all the guile project skeleton examples on the guile mailing list
<euouae>But until then -- I'm curious about load-extension
<euouae>I see there's an extensions dir e.g. (assoc 'extensiondir %guile-build-info)
<euouae>but if I install an extension there, how does guile know what init function to load?
<euouae>e.g. with load-extension you do (load-extension "libguilefoo" "init_guilefoo")
<rlb>euouae: if you haven't, see "Putting Extensions into Modules" in the guile info pages -- you create a small scheme module that calls load-extension.
<euouae>No I understand that
<euouae>I am wondering how this works when "installing" the extension in the extension dir
<euouae>load-extension requires an entry point function to be called that loads all the other functions as guile definitions
<euouae>and does other administrative stuff
<rlb>Right, so you still have to have a call to (load-extension ... init) somewhere.
<rlb>Or maybe I still misunderstand.
<euouae>ah I see
<euouae>So where is it called? top-level at some installed module?
<euouae>I guess so
<rlb>See some of the load-extension calls in "git grep load-extension modules" in the guile source, for example srfi-4.scm.
<rlb>Though some things in the guile source are handled a bit differently.
<rlb>For an external case, you'd normally just create a module in a scm file in an appropriate place in the load path containing a define-module for the module, and then call load-extension in there.
<rlb>The define-module form can then export as appropriate, etc.
<euouae>yeah, what I'm saying is you call it at top level in that module
<euouae>I guess with no guard? (unless guard? (load-ext ...) (set! guard? #t))
<rlb>Generally unguarded, right.
<euouae>alright thank you
<rlb>And define-module, etc. make sure it's a one-shot, you don't need guard or anything.
<rlb>yw
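A minimal sketch of the pattern rlb describes, assuming a hypothetical extension library libguilefoo (installed under the extensions dir) whose init_guilefoo entry point defines a procedure called foo-frob:

    ;; modules/foo.scm, found on the load path as the module (foo)
    (define-module (foo)
      #:export (foo-frob))   ; foo-frob is created by the C init function

    ;; Called unguarded at top level; the module system loads this file
    ;; only once, so load-extension also runs only once.
    (load-extension "libguilefoo" "init_guilefoo")

Client code then just does (use-modules (foo)) and calls foo-frob.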
<ArneBab_>euouae: nice! Thank you for the info! Our discussion here got me to fix distcheck for my website (after I had let that bitrot too long). Thank you for that, too! It’s full of hacks, but it does a full org-publish with a custom Emacs and has been working reliably for years. And I should really start to add some TESTS files like checking external links or so: https://hg.sr.ht/~arnebab/draketo/browse/Makefile.am?rev=tip
<euouae>ArneBab_: hehe nice
<euouae>ArneBab_: custom emacs? anyway, I wrote my own ox-html5 mode for my blog :P
<euouae>unfortunately there's some tricky parts that prevent me from making that package useful for others
<euouae>I'm somewhat of the opinion that I need to rewrite it entirely from scratch, not based on ox-html. They made a very unfortunate decision in making ox-html a polyglot (xhtml, html, html5, etc.)
<ArneBab_>euouae: custom emacs setup: there’s a .emacs.d included in the repo and the Makefile uses that as setup. It’s not containerized, so there may be some dependencies on the local system left, but it’s stable even when I experiment with new packages :-)
<ArneBab_>euouae: maybe just rename it to make it distinguishable, and release it? May not be perfect, but it might already be inspiration? If I only released stuff when it’s ready, I wouldn’t release much (and get much less ready than I do today).
<euouae>no, I have released it in that sense <https://github.com/createyourpersonalaccount/ox-blorg> but I don't think it's easy for others to figure it out
<euouae>my blog repo using it is here <https://github.com/createyourpersonalaccount/blog> and blog itself is here <https://createyourpersonalaccount.github.io/blog/>
<ArneBab_>Nice! Maybe add a readme with the link to publish.el and a link to your blog on the github repo?
<euouae>are you the maintainer of publish.el?
<ArneBab_>No
<ArneBab_>I just use it.
<euouae>Do you mean may you add to your personal files? :P
<euouae>Either way go ahead
<euouae>I am tired and I might not be grasping what you're asking right now
<ArneBab_>I meant, add a readme to your repo and link your repos with the blog
<ArneBab_>in your repo
<euouae>oh you want to do a PR?
<euouae>yeah go ahead
<ArneBab_>euouae: https://github.com/createyourpersonalaccount/ox-blorg/pull/1
<euouae>Cool, thanks!
<euouae>why does guild have -L only while guile has -L and -C?
<euouae>-L is load path and -C is compiled load path
<euouae>(guild compile)
<ArneBab_>I think so, yes
<ArneBab_>I’d need to speculate: since guild compile is for compiling, it may not need access to compiled files, only to source files.
<euouae>so the (use-modules (...)) stuff is resolved at runtime only? not checked during compile time?
<euouae>or ... have to think about this
<ArneBab_>it’s checked during compile time, but IIRC it uses the source files. But I’m not really sure right now, because I can bind together files using different languages for compilation, and I think for that it needs the *.go files.
<ArneBab_>So the reason might also be historical. In that case it would be something to change.
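For reference, a small sketch of what those flags correspond to, using ./modules and ./compiled as made-up example directories: guile's -L prepends to %load-path (source files) and -C prepends to %load-compiled-path (*.go files), while guild compile only accepts -L.

    ;; roughly what guile -L ./modules -C ./compiled sets up
    (add-to-load-path "./modules")                     ; searched for *.scm sources
    (set! %load-compiled-path
          (cons "./compiled" %load-compiled-path))     ; searched for compiled *.go files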
<ArneBab_>I added the guile fibers webserver to https://web-frameworks-benchmark.netlify.app/compare but the performance is much worse than I expected. Did I do something wrong? Do you have an idea why that could happen? Source: https://github.com/the-benchmarker/web-frameworks/tree/master/guile/fibers benchmark run: https://github.com/the-benchmarker/web-frameworks/actions/runs/17541430737/job/49813762027?pr=8646
<ArneBab_>(the guile-web run is mismatched in the graphical comparison and shows go-web instead, so don’t use it for comparison ☺)
<ArneBab_>ACTION is *not* the maintainer of those benchmarks, just wrote the guile part.
<euouae>ArneBab_: what are you comparing fibers against?
<ArneBab_>euouae: python flask. locally I saw much better performance with fibers than with flask, but on the test system that doesn’t seem to work.
<euouae>same test suite on both your and their system?
<ArneBab_>Should be, but I’m not sure. I’m especially surprised that (when you check the table¹) guile-fibers and guile-web have almost exactly the same result, but in my local testing fibers is significantly faster ¹: https://web-frameworks-benchmark.netlify.app/result?l=guile
<euouae>If you don't know what their setup is then the test results are difficult to interpret
<ArneBab_>their setup is the github runner with Ubuntu 24
<ArneBab_>but yes :-)
<euouae>is github runner even appropriate for benchmarks?
<ArneBab_>it’s at least defined :-)
<dsmith>sneek, later tell euouae For a simple example, see: https://gitlab.com/dalepsmith/guile-sqlite/
<sneek>Got it.
<rlb>I poked at the hashing issues a bit (we have no hashing for bytevectors, srfi-4 vectors, etc.), but fixing that, assuming we want to, raises some questions. One I wondered about while toying with bytevectors is the fact that our current general vector hashing "randomly" (using hash as index mod length) samples the elements, which I'd expect to be cache unfriendly for (large) homogeneous vectors, as compared to some more ordered access.
<rlb>fwiw
<dthompson>rlb: I was really surprised to see that we don't have bytevector hashing
<rlb>yes, nor (as the bugs mention) f32vector, etc.
<dthompson>well those are just views of a bytevector so makes sense
<dthompson>the same bytevector hashing would apply to them all
<dthompson>the entire hashing algorithm needs replacement. I was surprised to discover that hash combination is commutative.
<rlb>I started hacking up a bytevector version mirroring the existing vector code when I started wondering about the behavior -- in this case I was just making it a little bit optimized to work in long-sized chunks when not "at the end", which then made me wonder about cache-hostility.
<dthompson>I think the entire bytevector should be hashed
<rlb>Sure, there's also the "sampling question", but I was ignoring that for now :)
<dthompson>a good hashing algorithm should be able to hash even large bytevectors quite quickly
<rlb>I could imagine people might argue against the whole bytevector for similar reasons to the existing sampling.
<rlb>i.e. what about 300m bytevectors.
<rlb>I don't have a strong opinion myself.
<rlb>atm
<dthompson>in hoot we just compute a 32-bit murmur3 hash for the entirety of a bytevector/bitvector.
<dthompson>though I regret choosing murmur3 and want to switch away from it to a seeded hash that resists hash flooding
<rlb>I could see that being very expensive for juggling a lot of large bytevectors in some set-ish operations, etc. (deduplication?).
<rlb>i.e. I could understand the concerns either way.
<rlb>You'd easily keep the caches "clobbered".
<dthompson>iirc chez scheme hashes the entire thing
<dthompson>would need to double check, don't trust my memory
<rlb>...and you're right that it should be one of the fastest things most hardware can do :)
<rlb>(More the working set size question, I'd assume.)
<rlb>But if you sample too densely, and randomly, then you just get the worst of both worlds.
<dthompson>yeah I guess I lean towards doing the simpler, more obviously correct thing until a use case emerges that needs special consideration
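As an illustration of the "hash the whole thing" approach (a sketch only, not Guile's actual implementation, which would live in C, and not murmur3): a simple FNV-1a-style walk over every byte of the bytevector.

    (use-modules (rnrs bytevectors))

    (define (bytevector-hash bv size)
      "Hash every byte of BV (FNV-1a style) and reduce modulo SIZE."
      (let loop ((i 0) (h 2166136261))
        (if (< i (bytevector-length bv))
            (loop (+ i 1)
                  (logand #xffffffff
                          (* 16777619
                             (logxor h (bytevector-u8-ref bv i)))))
            (modulo h size))))

e.g. (bytevector-hash #vu8(1 2 3) 4096).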
<rlb>I wondered (if you are going to keep sampling) about some linear approach, but hadn't gotten any further :)
<rlb>(i.e. somehow cheaply determine the indexes in order)
<rlb>I could also see some "larger data" oriented inclination favoring cached hashes for at least larger strings (since they're immutable), etc.
<rlb>Probably not what you want until the string is larger than say some number of typical cache lines.
<rlb>(or more)
<rlb>Some string related operations would be *far* faster then.
<rlb>(Second question would be whether you compute the hash lazily/atomically or at construction.)
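A rough sketch of the cached-hash idea for immutable strings, using an external weak-key table keyed by object identity (a real change would presumably store the hash in the string object itself; cached-string-hash and the table name here are made up for illustration):

    (define %string-hash-cache (make-weak-key-hash-table))

    (define (cached-string-hash s size)
      "Return S's hash modulo SIZE, computing the content hash only once per string object."
      (let ((h (or (hashq-ref %string-hash-cache s)
                   (let ((h (string-hash s)))
                     (hashq-set! %string-hash-cache s h)
                     h))))
        (modulo h size)))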
<rlb>For anyone familiar with the C side, I'd love a second opinion about https://codeberg.org/guile/guile/pulls/10 for 3.0.11.
<old>rlb: strings hashing is not performant right now in Guile?
<rlb>And I may just merge #11 if I don't hear anything in a while.
<rlb>old: oh, I don't recall it well atm, but I wasn't saying it had any big issues, we were just musing about "improvements".
<rlb>And of course taking advantage of the fact that strings are immutable (well will be most of the time at least with utf-8), and only computing the hash once could be a big improvement for say using a set to deduplicate strings, etc.
<rlb>(But I wouldn't/won't seriously consider that in any detail until we settle the utf-8 question.)
<rlb>If it were to become relevant (I imagine we'll just fix main), I think we can also work around the GCC 15 incompatibility via -std=c17 (gcc 15 bumped the default from c17 to c23).
<rlb>Actually perhaps s/c17/gnu17/
<rlb>dthompson: not sure if you saw (also didn't know if it was in the right place), but here's a trivial doc patch someone submitted to guile for guile-opengl: https://debbugs.gnu.org/78908
<dthompson>I did see that but haven't had time to act on it
<dthompson>I have access to guile-opengl but I haven't actually hacked on it ever...
<dsmith>Wow gnu.org is s. l. o. w..
<rlb>indeed
<rlb>And no worries, just saw your name on the project page with wingo's :)
<dthompson>thanks for the reminder :)
<shawnw>gnu.org has been unusably slow for a while now.
<shawnw>(Is the best way to submit patches to guile these days using codeberg or still going through the mailing list?)
<rlb>I suspect either is fine.