IRC channel logs

2025-11-13.log

<cdegroot>Pretty much all you need is there, not? You'd need to come up with a scheme to organize the code. Different config.scm for teachers vs TAs vs students, say, and then reading data from a database of sorts to set up the initial username, ssh key, and password. But yeah, I'd push for such a setup if it were me :)
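A minimal sketch of the kind of config.scm cdegroot describes: one operating-system declaration whose user accounts are generated from a plain list that could just as well be exported from a course database. The host name, devices, and student list are placeholders, not a tested configuration; ssh keys would additionally go into the openssh service's authorized-keys.

    (use-modules (gnu))

    ;; Usernames as they might be exported from the course database.
    (define students '("alice" "bob" "carol"))

    (operating-system
      (host-name "lab-server")
      (bootloader (bootloader-configuration
                   (bootloader grub-bootloader)
                   (targets '("/dev/sda"))))
      (file-systems (cons (file-system
                            (mount-point "/")
                            (device "/dev/sda2")
                            (type "ext4"))
                          %base-file-systems))
      ;; One user-account per student, on top of the usual system accounts.
      (users (append (map (lambda (name)
                            (user-account
                             (name name)
                             (group "users")
                             (supplementary-groups '("audio" "video"))))
                          students)
                     %base-user-accounts)))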
<meaty>what would be cool would be if the "shell" functionality could be constrained per-user, so that everyone could still try out new software without admin approval (a huge drag on traditional setups, esp. for work) but only people with the right security briefings are able to enable network access in the shells, or even have host filesystem access
<meaty>this way people could flesh out new extensions to the institution's software suite without needing constant checking for security risks, etc.
<mange>I've recently installed Guix on a new machine, and I just tried to use "guix system container" recently. Running the resulting container script fails on (close-port #<output: /proc/1/uid_map 6>) with "operation not permitted", which then fails to bring up the container. Has anyone seen this before?
<mange>I haven't really investigated anything yet, but I haven't had this issue on other Guix systems, so I figured I'd ask before I sink too much time into it.
<meaty>having just one source of software (as opposed to having elpa, pip, etc.) makes security easier, too, i imagine
<rekado>apteryx: in commit c23b9a10174ce304cfc7a87aa70759364fcb76e6 did you accidentally replace define-deprecated-package with the more verbose (define-public python-libxml2 (deprecated-package ...))?
<rekado>looks like a mistake as the commit is about removing libxml2-xpath0.
<apteryx>rekado: ah yes, there was a conflict, and I didn't see the difference
<apteryx>rekado: fixed with 429e41739ce, thank you
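For readers following along, the two spellings being compared look roughly like this; the package names are placeholders rather than the contents of the actual commit, and both helpers come from Guix's package-module infrastructure.

    ;; Verbose form: define the old variable as a deprecated alias by hand.
    (define-public old-name
      (deprecated-package "old-name" new-package))

    ;; Equivalent shorthand commonly used in Guix package modules.
    (define-deprecated-package old-name new-package)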
<apteryx>can someone check on their system if icecat(-minimal) is still unable to infer the correct time zone when TZ is not set?
<apteryx>It seems to be resolved using current icecat, but I'd like a 2nd confirmation
<apteryx>so, 'TZ= icecat' and then Ctrl+Shift+I -> Date() should return your correct time zone.
<apteryx>oh, you need to disable 'resist fingerprinting' in Settings -> IceCat Settings
<civodul>Hello Guix!
<SirNeon>Hello.
<efraim>o/
<kestrelwx>o/
<user_oreloznog>\o
<gabber>what's the guix-package build-time equivalent of nproc? aka how can i figure out how many cpus are available to build a package during package build?
<fanquake>parallel-job-count
<gabber>fanquake: thanks!
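For the record, a small sketch of how that can look in a package's arguments field; gnu-build-system already passes the job count to make when #:parallel-build? is enabled, so the explicit call is mainly useful in custom phases. The make invocation here is illustrative.

    ;; parallel-job-count comes from (guix build utils), which is already
    ;; in scope on the build side; it reflects the daemon's --cores setting.
    (arguments
     (list #:phases
           #~(modify-phases %standard-phases
               (replace 'build
                 (lambda _
                   (invoke "make" "-j"
                           (number->string (parallel-job-count))))))))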
<efraim>apteryx: I'm taking a look at the zstd compression for debug output patch and I'm getting an mmap failure on riscv64 (function not implemented). I'm trying to track it through guix/build/gremlin ATM
<efraim>I didn't see those issues on aarch64/armhf/ppc64le. ppc32 isn't plugged in currently
<ArneBab>civodul: are the substitutes provided by bordeaux and ci served by raw guix publish (publish.scm) or is there some other infrastructure in place for serving the files?
<ArneBab>civodul: on https://bordeaux.guix.gnu.org/ I get nginx as Server -- is it only proxying or does it do something extra?
<attila_lendvai>should it work to install gnome extensions through firefox on extensions.gnome.org? any special package needed? i do have gnome-shell-extensions installed, and some extensions that work.
<cdegroot>IIRC it's just bits of JavaScript that get dropped into some XDG dir under your home directory, so yes, that should work.
<untrusem>yt-dlp now has yt-dlp-ejs and other runtime dependencies needed for YouTube to work
<attila_lendvai>yeah, it's all in ~/.local/share/gnome-shell/extensions, but it's not as simple as git checkout-ing an extension. plus i don't want to deal with updates manually.
<ArneBab>civodul: or is the nar-herder used to actually provide the data for the substitute servers (that’s then provied by nginx)? https://codeberg.org/guix/nar-herder/src/branch/trunk/nar-herder/server.scm
<ArneBab>cbaines: is nar-herder from you? (referring to the previous question)
<civodul>ArneBab: hi! ci.guix runs ‘guix publish’ while bordeaux.guix runs nar-herder
<civodul>both are behind nginx
<civodul>but nginx doesn’t do any caching
<civodul>supporting code is in (guix substitutes) and (guix narinfo)
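As a rough illustration of what those servers speak (not Guix's own client code, which lives in (guix substitutes) and (guix narinfo)): a substitute server answers GET /<store-hash>.narinfo with a short plain-text description pointing at the nar to download. A hedged Guile sketch, with the hash below being a placeholder:

    (use-modules (web client) (web uri) (web response)
                 (rnrs bytevectors) (srfi srfi-11))

    (define (fetch-narinfo server store-hash)
      ;; Fetch SERVER/STORE-HASH.narinfo and return its text, or #f on a
      ;; non-200 response.
      (let-values (((response body)
                    (http-get (string->uri
                               (string-append server "/" store-hash
                                              ".narinfo")))))
        (and (= 200 (response-code response))
             (if (bytevector? body) (utf8->string body) body))))

    ;; Example (placeholder hash, not a real store item):
    ;; (fetch-narinfo "https://ci.guix.gnu.org"
    ;;                "0123456789abcdefghijklmnopqrstuv")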
<attila_lendvai>FTR, when i gave up debugging i reported it as: https://codeberg.org/guix/guix/issues/4216
<ieure>attila_lendvai, I can reproduce your issue. I think what's happening here is that extensions.gnome.org is detecting the browser as Chrome, then trying to use the Chrome extension instead of the Firefox one.
<attila_lendvai>ieure, i'm no expert in this... but i have a suspicion that the two should behave the same when it comes to this subsystem. what makes you think it even tries to differentiate the browser?
<ieure>attila_lendvai, The fact that "chrome" is in the name of the thing it's trying to run.
<ieure>In the error message.
<attila_lendvai>ieure, i believe (with weak conviction) that it should work fine for both, contrary to the name
<JodiJodington>ieure: looking online, gnome_chrome_shell is just the internal name of gnome-browser-connector (https://wiki.gnome.org/Projects/GnomeShellIntegration/Installation). It's not just for chrome
<JodiJodington>it does appear like gnome-browser-connector is a separate program though so maybe guix just doesn't ship it
<attila_lendvai>ieure, FYI, Rutherther quickly diagnosed the issue (guix is missing a package for this)
<Rutherther>right, it's not in guix channel and even if it was, just installation of a package with native application is not going to do anything. Librewolf looks for native applications only at specific paths (~/.librewolf/native-messaging-hosts or /usr/lib/librewolf, not completely sure about the second working with librewolf version from guix)
<attila_lendvai>is fishinthecalculator here? the author of the small-guix channel?
<ArneBab>civodul: thank you!
<cbaines>ArneBab, yep, the nar-herder is something I've been putting together https://codeberg.org/guix/nar-herder
<cbaines>whereas guix publish works off the store, the nar-herder works off a SQLite database, and that's what allows for moving nars between machines, and the mirroring
<ArneBab>cbaines: Two students I supervise will be hacking on guix to add swarming downloads with the simplest approach (range requests + a list of chunk hashes), so I (or they) may have questions. If they stumble over stuff, how do you prefer contributions?
<cbaines>ArneBab, that's cool, the nar-herder is hopefully pretty good for hacking on, although this hasn't been tested yet. Via Codeberg Pull Requests is the way to contribute.
<cbaines>ArneBab, can you say more about swarming downloads?
<cbaines>I've thought about trying to index compressed nars to allow for range requests for individual files, but I've never looked in to it further
<attila_lendvai>ArneBab, your team would probably benefit from reading through the distributed substitutes threads: https://lists.gnu.org/archive/cgi-bin/namazu.cgi?query=distributed+substitutes&submit=Search&idxname=guix-devel
<ArneBab>attila_lendvai: from what I see in the thread, it assumes a much more complex solution (having a separate storage for chunks). The much simpler solution uses plain RANGE http requests and keeps a list of ranges it already has (stored directly in the download file).
<attila_lendvai>ArneBab, yeah. but keeping in mind the big picture can help make decisions that later fit much better into the big picture
<ArneBab>cbaines: for swarming downloads they’ll build on the Gnutella Download Mesh that requires only range requests and 4 HTTP headers to inform downloaders of other downloaders.
<cbaines>ok, cool :)
<ArneBab>The advantage of that is that swarms form around the files that get downloaded, so no management of shared files, long-term peers, search or similar is needed. You also cannot upload to such a swarm: it always starts at a server and ends when the transfer is finished.
<ArneBab>So the big picture is: you can offload the bandwidth cost of file downloads from the server to users. It does only one thing and introduces no new concepts.
<attila_lendvai>ArneBab, (not sure i follow) but a large portion of the substitute downloads i'm seeing last a fraction of a second. won't the meta-communication be too much wasted noise compared to the actual useful data sharing?
<ArneBab>the base of that is https://rfc-gnutella.sourceforge.net/src/Partial_File_Sharing_Protocol_1.0.txt but with a simplified list of chunks and hashes (a bencoded list of chunks as with BitTorrent is a better choice nowadays)
<ArneBab>attila_lendvai: yes, for small files the overhead would be too high. For them the server can just skip starting swarms.
<ArneBab>there already is a draft, but it does much more than needed (it’s more of a playground for me): https://hg.sr.ht/~arnebab/wispserve/browse/wispserve/serve.scm?rev=tip#L116 (at the point where the four needed non-standard headers are declared)
<ArneBab>cbaines: aside: there’s already some range request support in https://hg.sr.ht/~arnebab/wispserve/browse/wispserve/serve.scm?rev=tip#L510 -- I needed that to support streaming video.
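A tiny, generic sketch of the bookkeeping described above: a list of byte ranges the downloader already has, from which the next missing range to request is derived. It does not implement the download-mesh headers, and the names are illustrative.

    (use-modules (ice-9 match) (srfi srfi-1))

    ;; A range is an inclusive pair (START . END) of byte offsets.

    (define (merge-ranges ranges)
      "Sort RANGES and merge overlapping or adjacent ones; the result is
    in ascending order of START."
      (reverse
       (fold (lambda (range merged)
               (match merged
                 (((start . end) . rest)
                  (if (<= (car range) (+ end 1))
                      (cons (cons start (max end (cdr range))) rest)
                      (cons range merged)))
                 (() (list range))))
             '()
             (sort ranges (lambda (a b) (< (car a) (car b)))))))

    (define (next-missing-range have total-size)
      "Return the first (START . END) range not yet covered by HAVE, or
    #f when a file of TOTAL-SIZE bytes is complete."
      (let loop ((pos 0)
                 (have (merge-ranges have)))
        (cond ((>= pos total-size) #f)
              ((null? have) (cons pos (- total-size 1)))
              ((< pos (caar have)) (cons pos (- (caar have) 1)))
              (else (loop (max pos (+ (cdar have) 1)) (cdr have))))))

    ;; (next-missing-range '((0 . 1048575) (4194304 . 5242879)) 8388608)
    ;; => (1048576 . 4194303)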
<yelninei>civodul: Hi, did you verify that the childhurds start now?