IRC channel logs

2025-11-17.log


<cluelessguixer>Excuse my whining. Can't reconfigure my system since zfs 2.3.4 can't build on kernel 6.17, and 2.4.0 is yet to be released. Unsure if I should continue waiting or try to pin kernel 6.16 instead. I've waited a while already... sunken cost lol.
<ColdSideOfPillow>cluelessguixer: Sunk Cost Fallacy for the win!
<ColdSideOfPillow>Though, pinning should probably be the better option
<cluelessguixer>ColdSideOfPillow: Probably useful to get the hang of for any such future snags I might encounter too. Maybe I'll try doing it tomorrow.
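[Editor's note: the pinning being discussed is a one-field change in the operating-system declaration. A minimal sketch; `linux-libre-6.16` is an assumption, so verify the exported variable name in (gnu packages linux) first:]

```scheme
;; Sketch: pin an older kernel instead of tracking the default linux-libre.
;; `linux-libre-6.16' is an assumed variable name; check (gnu packages linux).
(use-modules (gnu) (gnu packages linux))

(operating-system
  (kernel linux-libre-6.16)   ; instead of the default `linux-libre'
  ;; ... rest of the declaration unchanged ...
  )
```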
<apteryx>weird, C-c doesn't stop `make check`
<apteryx>cluelessguixer: the fun of out-of-tree kernel modules! You can thank the ZFS license for it, or the apparent lack of interest from the OpenZFS developers to change that situation: https://github.com/openzfs/zfs/issues/17047
<ArneBab>apteryx: I expect that C-c isn’t properly forwarded to child processes in multithreaded make. Does that also happen with make check -j1?
<apteryx>It should be; I've never seen this broken behavior, and I've used parallel make a lot. It also doesn't seem to happen all the time; perhaps some test changes the signal action for SIGINT?
<apteryx>never seen this broken behavior elsewhere when using parallel make*
<ArneBab>Maybe I just got used to it from Guile …
<ArneBab>apteryx: regarding zfs: I remember the work done to get Mercurial from GPLv2 to GPLv2 or later: that was a tiny change, but the effort is huge, because you have to get the OK from every copyright holder.
<apteryx>ArneBab: it wouldn't be easy, but given the payback (proper integration in the kernel, meaning it'd work well with each released kernel), it seems it would be worth it
<avalenn>I hit the GIO_EXTRA_MODULES today again on foreign distro. https://issues.guix.gnu.org/77057 Is there any work in progress for this ?
<ArneBab>I agree. I’d also like to run zfs (I co-maintained a solaris cluster with zfs for a while; the bitrot protection alone would already be worth it)
<efraim>IIRC, isn't the other option that Oracle could release a newer version of the CDDL that is compatible with the GPLv2?
<identity>and another option is having a good non-zfs filesystem in the kernel, which, looking at what happened recently, is unlikely
<Rutherther>avalenn: I doubt that's going to be resolved soon; it touches the very core of Guix. You might want to just "unset GIO_EXTRA_MODULES" in your shell's profile file if the problem for you is that the env var is set by your ~/.guix-profile. In case you're starting an app that is wrapped with GIO_EXTRA_MODULES and it starts other programs, that's much harder to overcome, yeah :(
<apteryx>identity: btrfs works well enough for me
<avalenn>Rutherther: I will do the unset. I wonder if there is any reason to export it in the first place.
<identity>apteryx: it might work well enough for you, but it does not for many other people, and whether it does is highly hardware-dependent. it also does not have some performance features that zfs has, while those it does have are stuff like NOCOW; they do not matter as much for desktop use, i guess
<identity>if you love finding hardware bugs, btrfs is for you
<Rutherther>avalenn: well probably not, but it's not easy to get rid of the export due to other stuff that has a reason. But I would have to know the exact package to check it in particular to say for sure
<avalenn>I think it is propagated-inputs of glib from zathura in my case. But I did not investigate further.
<Rutherther>packages should generally be wrapped with GIO_EXTRA_MODULES when they need it. But that also is not so good as such programs that start other apps will 'pollute' those apps with that env var as well
<Rutherther>zathura propagates cairo, which propagates glib, yeah :( and the reason for the propagation is the .pc file. This mostly concerns packages using zathura as a library, not end users. It kinda sucks tbh :( There was a GCD that was supposed to take care of this at least partially, but it was turned down (or rather, not accepted by enough people)
<efraim>I think it would be interesting to adjust the pc files to use full paths if that's the only reason it needs to propagate other packages
<efraim>we can't propagate per output, otherwise we could also just propagate those packages on a separate pkgconfig output
<Rutherther>efraim: it's not full paths. The pc files are already full paths
<Rutherther>it's that if a pc file has a Requires on another package, that package's pc file also has to be present; that's the reason for propagation. I.e., library X requires library A: when compiling something that uses X, you also need the pc file of A
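[Editor's note: to illustrate the chain being described, a .pc file shaped like this (names and store paths are invented) is why the package installing A.pc must be visible wherever X is used as a build input:]

```
# X.pc (illustrative; names and store paths are invented)
Name: X
Description: Example library that links against A
Version: 1.0
Requires: A
Libs: -L/gnu/store/…-x-1.0/lib -lX
Cflags: -I/gnu/store/…-x-1.0/include
```

pkg-config resolves the `Requires: A` line by looking up A.pc on its search path, which is why Guix propagates the whole package providing it.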
<efraim>I'm looking at mpv.pc: it lists libraries in Requires.private, and Libs.private lists only libc libraries
<Rutherther>the best solution I know of is symlinking the pc files from required libraries rather than propagating the whole packages
<Rutherther>in case you meant full paths to the libraries in Requires etc., then I am afraid that is not possible as pkg config doesn't support full paths to pc files like that
<efraim>I meant changing Libs.private to add -L/path/to/dependency -L/path/to/other/dep ...
<Rutherther>oh, like removing Requires.private and putting what pkg-config would output into Libs.private? I don't think that alone would be sufficient; you'd also need include paths and possibly other flags the libraries might want - Cflags, probably. But yeah, it should be possible. Still looks more complicated to me than just symlinking the pc files and letting pkg-config take care of it
<efraim>hmm, that's a good point
<Rutherther>but hey, I am happy with any solution that gets rid of propagated inputs that are propagated just due to Requires :) any ideas on how this should be approached? does it need a GCD?
<efraim>I'd go with proof-of-concept first
<efraim>Rutherther: for mpv specifically I think the output of `guix shell pkg-config mpv libdisplay-info -- pkg-config --libs --static mpv` could be substituted for the Libs section of mpv.pc, and something similar for Cflags
<efraim>that also assumes that it's not propagated for any other reason
<efraim>Rutherther: https://codeberg.org/guix/guix/pulls/4268
<Rutherther>efraim: btw could you merge / review the manifests for a release? It would definitely be good to have them merged and I think you are the only one on release team with commit access
<Rutherther>thanks for this! I will test it out, but I believe this should work, yeah. Wondering if there can be some edge cases. I will try to go over pkg-config docs if I can spot anything
<flurando>Why is there no "inherit" for operating-system in Guix, while services have it?
<flurando>But daviwil's github repo clearly shows the use of inherit in an operating-system definition; I wonder why mine failed with an "ambiguous target blablabla" error
<identity>flurando: because there is an issue with your operating-system definition?
<flurando>identity: maybe, because I just saw daviwil used "operating-system-user-services" while I used "operating-system-services"... I would try a switch
<flurando>Well, using "operating-system-user-services" worked! It is interesting that for the services field of an os definition, the developers specifically put <user> in between.
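[Editor's note: for anyone hitting the same snag: `inherit` does work for operating-system; the catch is the accessor. `operating-system-user-services` returns the services as written in the inherited declaration, while `operating-system-services` returns the fully instantiated list with essential services folded in, hence the "ambiguous target" style conflicts. A sketch, with `base-os` standing in for an existing declaration:]

```scheme
;; Sketch: extend an inherited OS declaration. `base-os' is assumed to
;; be an <operating-system> defined elsewhere.
(operating-system
  (inherit base-os)
  (host-name "variant")
  (services (cons (service openssh-service-type)
                  ;; -user-services, not -services: the latter includes
                  ;; the instantiated essential services and conflicts.
                  (operating-system-user-services base-os))))
```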
<apteryx>it takes 5h43m to bootstrap to hello
<df0>heh
<flurando>does guix deploy require a constant ssh connection? Seemingly yes! If the connection is interrupted, the deploy fails and you have to deploy from the start again. This is exactly the same behaviour as guix system reconfigure, but for a remote deploy over the network, I am afraid this should be avoided, if possible.
<flurando> https://paste.debian.net/plain/1409071
<flurando>nginx service could not start, and I failed to get any debug point from the error message
<flurando>I see, by running the command manually, it says no ssl defined.
<Rutherther>the actual part that reconfigure has to repeat happens in a few seconds unless you are using some custom services that take a lot of time to activate. But activation should be fairly quick. Other parts of reconfigure aren't repeated, ie. if you have built some packages, they aren't built again. The built packages are used
<flurando>Rutherther: Got it. While it's true that what is built stays there, I still think a guix deploy that acts like a trigger for an "unattended upgrade" sort of thing would be better... especially when build-locally? is #f.
<flurando>For now, guix deploy just acts like a shortcut of "scp ... ssh ... sudo guix system reconfigure ..." for me. Haven't tested multiple remotes at a time, though.
<cdegroot>I would not be surprised if, looking at the source code, that is pretty much what it does. It's a handy shortcut. But for unattended use you want something more robust (we use Nix at work, but it's pretty much the same: config upgrades are sent through S3, and a special little script checks the bucket, pulls the new config, then runs everything locally. It's the sort of thing that's a no-brainer to add if you must have this functionality
<cdegroot>and can hardcode everything to your liking, and probably quite tricky to add if you're Guix and would have to cover all cases).
<Youplaboom>Hello! Does anyone have a working `home-startx-command-service-type`? It seems like even with the default configuration, `herd` fails to start it.
<Rutherther>it doesn't add any shepherd service... so what do you mean that herd fails to start it?
<Youplaboom>Hello Rutherther! I mean that: https://paste.debian.net/1409089/
<Rutherther>I don't think that xmodmap has something to do with home-startx-command-service
<Rutherther>anyway did you look in the log for an error, then?
<apteryx>hm, https://ci.guix.gnu.org/nar/lzip/p20dx71smbb94d8mfxxqi38jmxmnfq7b-guix-1.4.0-47.21ce6b3 is downloading at a blazing 8KiB/s
<flurando>apteryx: try a proxy and see if the problem persists
<Youplaboom>Rutherther: Oulala. Sorry. I meant the `home-xmodmap-service-type`. There doesn't seem to be a log associated with the service
<apteryx>flurando: it's odd, using wget it appears fast enough (> 1 MiB/s)
<apteryx>it's the guix substituter that is slow, or at least reports being slow
<apteryx>yeah, the gnome monitor confirms it's crawling at 10-12 KiB/s
<flurando>apteryx: guix uses guix-daemon to download, it is different from wget. You might try to herd set-http-proxy guix-daemon http://<your http proxy> and see if the problem persists.
<Rutherther>Youplaboom: so you get no entries in the shepherd log, are you sure?
<Rutherther>It's very strange a package would just not log anything and exit
<Rutherther>to be more exact I am expecting you are looking into ~/.local/state/shepherd/shepherd.log for lines with xmodmap content in them
<flurando>apteryx: if not, view the page on libre planet for Guix Mirror. use --substitute-urls="https://... https://..." instead of official.
<Rutherther>flurando: why would using a proxy speed up a connection?
<flurando>Rutherther: When your ISP limits speed on direct access to ci.guix.gnu.org, but has not gotten around to limiting your proxy.
<Rutherther>huh, why would isp limit ci.guix.gnu.org, but not the proxy? Also if that was the case, why would wget work normally whereas guix substituter not?
<Youplaboom>Rutherther: Ah… I was looking for an xmodmap file in the shepherd directory. There's indeed a log line relating to xmodmap in the shepherd.log file: "2025-11-17 15:09:04 [bash] /gnu/store/r3n1y4bvy5qh9gdxa46y49ijpkcfzxpz-xmodmap-1.0.11/bin/xmodmap:  unable to open display ''"
<flurando>Rutherther: Good question! Seemingly you got a good ISP that delivers what you paid :)
<Rutherther>Youplaboom: yeah, I was kind of expecting that. The service doesn't seem properly written to me, it doesn't really have a way to obtain an env var like DISPLAY. So the only way I can see it working well is if you use a display manager that starts shepherd from the xserver, that way shepherd gets the DISPLAY variable
<postroutine>Hello. On Guix System, is it possible to configure Shepherd to send an email when certain timers are triggered? I'm thinking about the Unattended Upgrades, Restic and Certbot timers. I would like to receive an e-mail containing the result code of the command (success/error) and a copy of the stdout and stderr of the triggered command.
<flurando>Rutherther: It is like asking, why do someone's ISP block Tor, but not the bridges. I know ci.guix.gnu.org is by no means similar to Tor, but a speed limit is also by no means similar to a lockdown, it comes at no price for ISPs.
<postroutine>Or maybe it needs to be done through log management?
<flurando>postroutine: you have to extend them.
<Youplaboom>Rutherther: I'm indeed using startx without a display manager. OK, thank you! I will do without this service then
<flurando>postroutine: No, check Shepherd service, you can extend it with a job, basically telling it how to start and how to stop your job. Then you can run some daemon or bash to do the work.
<Rutherther>Youplaboom: there are other services that do support that, they have to depend on the home-x11-service-type that can give you a display env var. So you definitely do not have to give up on X11 shepherd services completely
<flurando>postroutine: I am afraid you are actually looking for a real example, as in practice you can do this even with mcron-service-type... But I am afraid few people here have done it with sending email.
<flurando>For timers, mcron-service-type is much easier than a shepherd timer, while the latter is more flexible.
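[Editor's note: as a concrete example of the mcron route, a scheduled job that mails its outcome can be declared in the system services. The schedule, command, and address below are placeholders, and `sendmail` being on PATH is an assumption:]

```scheme
;; Sketch: nightly job that pipes its output to a mail command.
;; Time, command, and address are all illustrative placeholders.
(service mcron-service-type
         (mcron-configuration
          (jobs
           (list #~(job "5 3 * * *"   ; every day at 03:05
                        "restic backup /home 2>&1 | sendmail admin@example.org")))))
```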
<postroutine>> postroutine: No, check Shepherd service, you can extend it with a job, basically telling it how to start and how to stop your job. Then you can run some daemon or bash to do the work.
<postroutine>You mean, modifying the Shepherd services for Restic, Certbot, etc., and setting the `start` to run its software command (restic, certbot, etc.) and then run a script that sends an e-mail?
<postroutine>> For timer, mcron-service-type is much easier than a shepherd timer, while the latter is more flexible.
<postroutine>Ok. But if I remember correctly, the `restic-backup-service-type`, `certbot-service-type` and `unattended-upgrade-service-type` create Shepherd timers.
<flurando>postroutine: What?! Oh, I got it. You want restic, certbot, ... to send you a notification (as email) when their job is done. This requires extending their shepherd service, which I have no experience with. So you might have to check the source code to see if such functionality is provided.
<postroutine>Ah, ok.
<flurando>postroutine: Don't worry, I use unattended-upgrade, so I'll have a search too.
<postroutine>On my old home server, which runs Fedora for now, I have added a cron entry to do these tasks (auto update, backup and certificate renewal). And I have configured Cron to send me a notification email. But as I plan to switch to Guix System, I wonder how it can be done the "Guix way" while I use the system services I quoted above.
<Rutherther>it can't be done with those services, you would need your own services
<flurando>Rutherther: Agree; at least from the documentation, no such extension point is available
<postroutine>Ok, I will check for that.
<flurando>postroutine: however, the certbot service has hooks provided for you to run commands on specific actions
<Rutherther>with unattended service you would be able to hack it through the shepherd services that it restarts
<flurando>postroutine: So you can add your send mail command there (namely authenticate-hook, cleanup-hook and deploy-hook)
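[Editor's note: a sketch of what that could look like, going by the `deploy-hook` field of `certificate-configuration` mentioned above; the domain, address, and hook body are placeholders:]

```scheme
;; Sketch: mail a notification after each certificate deploy.
;; Domain, address, and message are illustrative placeholders.
(service certbot-service-type
         (certbot-configuration
          (email "admin@example.org")
          (certificates
           (list (certificate-configuration
                  (domains '("example.org"))
                  (deploy-hook
                   (program-file
                    "certbot-notify"
                    #~(system "echo 'certificate deployed' | sendmail admin@example.org"))))))))
```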
<flurando>postroutine: sadly, the restic service does not provide such a hook right now; you'd have to write a custom service to watch its log file, which is not complex but very time consuming if you haven't written such a shepherd service before.
<postroutine>I wonder if I can write a new `service-type` that can set notification for a selection of shepherd timers. But I don't know enough the internal of shepherd to know if it's possible.
<Rutherther>I don't think this can be time consuming in any way, you just copy the original service and modify it slightly to add the lines to send you an e-mail
<flurando>postroutine: Good if you accept a hard fork...
<flurando>Rutherther: I welcome such change, as a patch, though
<flurando>postroutine: Not possible in Guix without modifying shepherd itself, unless the timers "export" themselves somehow. It's like asking to access a local variable from a separate local environment.
<flurando>postroutine: Go the default-hook way in Certbot, have a daemon script watch logs and send email, then write a shepherd service as a wrapper to call it.
<postroutine>On Guix System, how the Shepherd services logs are managed ?
<flurando>postroutine: same as how Shepherd does it; Guix just wraps them with Guile as the configuration language. So exactly how? I don't know... the Shepherd documentation reads quite hard for me.
<flurando>But if you are asking where to watch the log, the restic service has a default log location, and you can check where the log is with a herd command called ... I forgot, wait for a search.
<df0>shepherd (system) logs to /var/log/messages
<postroutine>But when Shepherd runs a process defined in a service's `start`, does it collect the process output and redirect it to a file in `/var/log/service-name`? Or does Shepherd let the process manage its own log file itself?
<flurando>the process of course can have its own file; you can view the current output in /var/log/.. or with herd status <service name> -n 1000000
<Rutherther>shepherd redirects the stdout and stderr to /var/log/messages by default. It can be configured by the service to redirect to a standalone file. The process can of course also manage its own log file; there is nothing shepherd can do about that. All programs are different, and it's not possible to tell them whether or not to log to a file
<postroutine>Is it the `log-file` parameter of the procedure `make-forkexec-constructor`?
<Rutherther>yes, that decides where to redirect to
<postroutine>Ah, ok, thank you.
<postroutine>I read the Shepherd manual again and I maybe found something that could do the notification I try to do.
<postroutine>In the section `4.1 Defining Services`, I see that the procedure `service` have a parameter named `#:termination-handler`.
<postroutine>If I redefine the Shepherd timer service, for example for restic, I could use it to provide a handler that sends an email?
<postroutine>The handler receives a copy of the service, so I could get the log file from that.
<postroutine>And the handler also receives the exit status.
<postroutine>I could write a `notification-timer-termination-handler` that I can re-use on multiple timers.
<postroutine>I would need to find a way to pass it its configuration, like the destination address and the smtp login, password and server.
<postroutine>Or it could use the command sendmail.
<postroutine>That I could configure with a `sendmail-service-type` or something like that.
<postroutine>Do you think it's a good idea ?
<flurando>just use "sendmail ..." in Certbot hook, mmm... Just use "sendmail ...". The easier, the simpler, the better.
<flurando>Good idea, though, if your service can be easily extended by new services.
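[Editor's note: a sketch of the handler idea, assuming, as described above, that `#:termination-handler` receives the service and the exit status; the handler's exact arity, the accessor, and the `sendmail` invocation are assumptions to verify against the Shepherd manual:]

```scheme
;; Sketch of a reusable termination handler that mails the exit status.
;; Arity and behavior are assumptions; check the Shepherd manual's
;; "Defining Services" section before using.
(use-modules (ice-9 popen))

(define (notification-timer-termination-handler service status)
  ;; Pipe a minimal message to sendmail; address is a placeholder.
  (let ((port (open-pipe* OPEN_WRITE "sendmail" "admin@example.org")))
    (format port "Subject: timer exited with status ~a~%~%See the service log for details.~%"
            status)
    (close-pipe port)))
```

The smtp credentials question raised above could then be handled by letting the local `sendmail` (or an msmtp wrapper) carry the configuration, rather than passing it to the handler.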
<mgd>Hello. I'm getting an error when starting Cider. The error is "error in process sentinel: Could not start nREPL server: Execution error (ClassNotFoundException) at java.net.URLClassLoader/findClass (REPL:-1)." I have OpenJDK 24. In the terminal, clj and the clojure command seem to work correctly
<simendsjo>I tried url-fetch/executable for the first time, and it gives me a different hash each time even though I get the same hash when I download manually. The code looks straight forward though. But I only see it used for bootstrap-executable. Am I just using it wrong?
<ieure>simendsjo, Possible that the location you're downloading from notices that it's from an automated source and serves a CAPTCHA or other anti-bot measure?
<simendsjo>ieure: Ah, of course. It returns a 302 with the download link in Location. Doesn't seem possible to instruct url-fetch to follow redirects etc. If I try to download the file using the path in Location, it fails because it requires a JWT token. I guess my binary download from github is a dead end :/
<ieure>simendsjo, What are you downloading?
<simendsjo>ieure: https://github.com/docker/compose/releases/download/v2.40.3/docker-compose-linux-x86_64
<ieure>I see.
<ieure>simendsjo, Definitely strange IMO that the download code doesn't follow redirects.
<ieure>simendsjo, Maybe you can solve this with a computed origin?
<simendsjo>ieure: Computed origin? I guess I can download the file and host it somewhere else.
<nckx>ieure: It does, or it should.
<nckx>Is it really returning 302?
<nckx> https://codeberg.org/guix/guix/src/branch/master/guix/build/download.scm#L600
<ieure>nckx, Seems to be.
<simendsjo>nckx: I tested with curl -i
<nckx>They might not serve the same content.
<nckx>What does ‘guix download’?
<nckx>* do?
<nckx>If it's the exact URL above, it does follow a redirect.
<simendsjo>guix download follows the redirect and stores the binary in the store
<ieure>nckx, The contents of the redirected location are identical, the 302 with the redirect differs every time.
<ieure>simendsjo, Yes, you can write a procedure which encapsulates arbitrary computation and returns an origin, which you can use as an input to something else. It's undocumented, but make-librewolf-source is a pretty decent example.
<simendsjo>nckx: This is my definition for now: https://paste.sr.ht/~simendsjo/d8fb57cc94b1875897fb721b1e9a5ae4e354d77d
<ieure>simendsjo, Why are you using this url-fetch/executable thing? Are you making a package from this binary?
<ieure>(I understand the reasons why you might want to do that.)
<ieure>simendsjo, You should use vanilla url-fetch and add a build phase to chmod it.
<ieure>Also if using copy-build-system, you should use #:install-plan, you don't have to write a phase to do that.
<simendsjo>ieure: Yes, it's to avoid creating a from source package.
<nckx><the 302 with the redirect differs every time> That doesn't matter though.
<nckx>url-fetch follows redirects.
<ieure>nckx, copy-build-system example: https://codeberg.org/ieure/atomized-guix/src/branch/main/atomized/packages/shell.scm#L41
<nckx>Oh, I missed the /executable.
<ieure>Sorry, meant simendsjo.
<JodiJodington>if I'm packaging something whose tests require networking, would it be better to disable tests altogether or patch out the tests that can't be run without networking?
<simendsjo>ieure: computed-source sounds powerful, so I'll look into that. Right now, I'll just add the binary to my path without going through guix just to get things going.
<ieure>simendsjo, You don't need it at all if you use copy-build-system and a normal origin, I think.
<nckx>I can't reproduce this.
<nckx>It doesn't matter whether I use url-fetch or /executable or gnu- or copy-build-system.
<nckx>Redirects just fine.
<nckx>I'm using the URL you gave earlier, I didn't try the /tag/ URL you mentioned later.
<nckx>What's the expected difference?
<simendsjo>ieure: What do you mean by "normal origin"? Just url-fetch will also produce a different hash each time.
<simendsjo>nckx: You get the same response each time? I.e. the same hash?
<nckx>Yes, when downloading https://github.com/docker/compose/releases/download/v2.40.3/docker-compose-linux-x86_64. Not when downloading https://github.com/docker/compose/releases/tag/v2.40.3/docker-compose-linux-x86_64 but then CURL just returns a 200 and some HTML too.
<nckx>(Context: your first link used /download/. Your second link, https://paste.sr.ht/~simendsjo/d8fb57cc94b1875897fb721b1e9a5ae4e354d77d, switched to /tag/, I don't know why.)
<simendsjo>Wait, what? I've been downloading the tag?! I never meant to do that :/ Thanks both nckx and ieure!
<nckx>Further context: I have no idea what different GitHub URLs do.
<nckx>Oh.
<nckx>OK.
<nckx>‘Solved.’ ☑
<nckx>JodiJodington: If at all feasible the latter.
<simendsjo>My first link was just a copy/paste from the github page I had opened. I guess I just copied the source from another package where I used a source build without thinking.
<nckx>And I took it as a deliberate choice by someone more familiar with GitHub.
<ieure>simendsjo, You should still IMO use a normal url-fetch, copy-build-system with #:install-plan, and a phase to chmod.
<ieure>JodiJodington, In order of preference: 1. Fix the tests so they don't need the network. ex. if they need some data, provide it as an origin. 2. Disable the specific tests needing the network. 3. Disable all tests. Use only in extreme cases IMO.
<nckx>simendsjo: Just one last thing: you also *weren't* using the same URL between curl and Guix, then, right? No bug in {url,http}-fetch?
<simendsjo>nckx: I used tag/ in my Guix setup, while download/ in my curl test. The tag/ just opens the release page, so I guess I somehow copied the url and added the filename manually or something. So no bug, just me doing strange things.
<nckx>Standard human behaviour confirmed.
<simendsjo>ieure: Good points. With the changes: https://paste.sr.ht/~simendsjo/4f08eff0300daf325b70a11082e7f7e8d4d3de10
<ieure>simendsjo, Nice, that definitely feels cleaner.
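[Editor's note: for reference, the shape ieure is suggesting looks roughly like this. It is a sketch, not a tested package: the sha256 is a placeholder (get the real one with `guix download`), and the file name in the install plan assumes copy-build-system sees the source under its URL basename:]

```scheme
;; Sketch: binary package via plain url-fetch + copy-build-system
;; with #:install-plan and a chmod phase. Hash is a placeholder.
(package
  (name "docker-compose-bin")
  (version "2.40.3")
  (source (origin
            (method url-fetch)
            (uri (string-append
                  "https://github.com/docker/compose/releases/download/v"
                  version "/docker-compose-linux-x86_64"))
            (sha256
             (base32
              "0000000000000000000000000000000000000000000000000000"))))
  (build-system copy-build-system)
  (arguments
   (list #:install-plan
         #~'(("docker-compose-linux-x86_64" "bin/docker-compose"))
         #:phases
         #~(modify-phases %standard-phases
             (add-after 'install 'make-executable
               (lambda _
                 (chmod (string-append #$output "/bin/docker-compose")
                        #o555))))))
  (home-page "https://github.com/docker/compose")
  (synopsis "Docker Compose (upstream binary)")
  (description "Pre-built docker-compose binary from upstream releases.")
  (license license:asl2.0))
```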
<simendsjo>ieure: I probably need to create a service to place it where docker will find it though. But I'll have to look into that some other day. Thanks again for the help.
<ieure>simendsjo, Sure thing. I think you can point it to the helper with the config file. Otherwise, I'd hope it works like git, and docker-compose on $PATH is sufficient. Not sure, though; I switched to podman specifically because the Guix Docker packages are ancient and compose did not work at all last I tried.
<ieure>I am not really satisfied with podman-compose.
<JodiJodington>ieure, nckx: thanks!