IRC channel logs

2026-03-02.log


<Rutherther>gabber: no, not U-Boot. The setup of its firmware, if it has something like that
<Rutherther>gabber: because desktop services contains it, and it is what starts the login
<Rutherther>gabber: you didn't give details, so it is just a guess. If you are using a login manager then it is not possible, because none of the login managers in Guix support this. You would have to make your own one
<ieure>Rutherther, PineBook Pro is an arm64 machine, I don't think it has a BIOS.
<ieure>gabber, I would write a udev rule for this.
<RavenJoad>I figured out why texdoc is not working for me. The texlive-(scheme|full)-* packages do not install the "doc" output of their constituent packages. Is there a way to make that happen? Do I need a transformation on the inputs transitive closure to select the doc output? Is there a way to filter out non-texlive packages from that closure?
<mange>I don't understand what you mean, RavenJoad. Can you link a paste showing me what you've got at the moment?
<ieure>Anyone else having trouble building master, commit e22a088a239de57832544d78827f579f1440ee95? The docs aren't building for me.
<mange>CI hasn't built it yet: http://ci.guix.gnu.org/eval/2140092
<mange>I think it might take another 15 minutes, based on how long the previous evaluation took.
<mange>Whelp, the evaluation is still going, so that's probably not a good sign.
<RavenJoad>sneek: later tell mange I don't have anything yet. I just figured out this problem on Friday and don't know if there is already a solution. I have texlive-scheme-full installed in my home-environment. For example, "texdoc hyperref" does not work because texlive-hyperref's doc output is not installed because texlive-scheme-full only uses the out output.
<sneek>Okay.
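One workaround for the situation RavenJoad describes is to install the "doc" output of the relevant texlive packages explicitly, since manifest specifications support output selection. A minimal sketch; texlive-hyperref is just an example package, and this does not solve the general scheme-full case:

```scheme
;; Sketch of a manifest that adds the "doc" output of a specific texlive
;; package on top of the full scheme, so that e.g. `texdoc hyperref'
;; can find the documentation.  Package names here are examples.
(specifications->manifest
 (list "texlive-scheme-full"
       "texlive-hyperref:doc"))
```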
<efraim>gabber: if you're using sddm there's a config option called numlock that defaults to on
<fanquake>Running guix pull, is the build of guile-ssh-0.18.0.drv currently broken for everyone else?
<cbaines>yes, I'm seeing this too
<fanquake>Couldn't find an existing issue, so dumped the details here: https://codeberg.org/guix/guix/issues/6816
<cbaines>fanquake, very fed up with this stuff, but I've managed to build it locally (with --cores=1)
<Rutherther> https://ci.guix.gnu.org/jobset/guix you can also see cuirass is struggling with evaluation
<cbaines>if only we were checking things built before pushing to master :/
<Rutherther>so I guess we should revert the libssh bump for now?
<Rutherther>csantosb: cc
<cbaines>Rutherther, that's presumably not the problem
<Rutherther>cbaines: oh? so why did it break on that commit then?
<gabber>a pull on my Pinebook Pro starts a build for libssh-0.12.0 ?? why would that be
<cbaines>Rutherther, nothing is my theory
<gabber>(it's an aarch64 device)
<cbaines>you can see a failed build for the previous guile-ssh output https://data.guix.gnu.org/repository/1/branch/master/package/guile-ssh/output-history
<Rutherther>gabber: because it has been bumped and it's a dependency of guix, specifically of guile-ssh
<cbaines>the problems that people are experiencing are existing issues with the guile-ssh package, because there's not yet a substitute for it
<cbaines>these issues existed before, it's just that there was a substitute
<Rutherther>cbaines: but CI is going to be stuck indefinitely now. If this already happened before, I don't get why CI hasn't been stuck since that time
<cbaines>Rutherther, since what time?
<Rutherther>cbaines: since the last time this happened. There must've been a time when there was no substitute of libssh
<gabber>Rutherther: wouldn't we want to wait for at least the basic prerequisites to be available as substitutes?
<Rutherther>gabber: wait how, though? There is not really a mechanism for that
<csantosb>As for #6816, guix 4d0fe69
<csantosb>So guile-ssh was broken well before
<gabber>Rutherther: we wait until the cuirass-build-bot builds the PR (also for aarch64) before we push to master..? or am i missing something?
<cbaines>Rutherther, assuming it fails with a random chance, ci.guix.gnu.org especially builds things repeatedly, so it probably built eventually
<Rutherther>gabber: the cuirass build bot builds only x86_64. Also, its results are not substitutable, it builds in a sandbox.
<Rutherther>cbaines: okay, so with more pushes to master it should be fine eventually you mean?
<cbaines>Rutherther, maybe
<Rutherther>Cuirass is not going to schedule that build since it didn't evaluate, though. So it tries to build this only on new commits I believe
<cbaines>I'm trying to build it manually on berlin, assuming that works, we can just go back to ignoring the problem
<Rutherther>yes, that would work
<cbaines>if we actually want to avoid this problem in the future, we need to go back to doing more testing of changes, plus more rigorous testing that checks whether things build reliably (for some definition), instead of just whether they can be built
<Rutherther>gabber: to be more precise, it is possible to substitute from the cuirass build bot. But on a different domain with a different key. Users do not typically have that authorized (and they shouldn't, the results of the bot are not to be trusted as the evaluation is not shielded in any way)
<cbaines>this is now built on berlin /gnu/store/qzm6gswjqkicaakidds3nb4nzxy4v57n-guile-ssh-0.18.0
<csantosb>Regarding the update of `libssh` to 0.12.0 in #6776, `guile-ssh-0.18.0` builds
<Rutherther>okay, good, so maybe the next evaluation of cuirass will be fine
<Rutherther>csantosb: it builds... sometimes
<Rutherther>...sometimes it doesn't
<gabber>huh. i see. thanks Rutherther
<csantosb>Ci should be doing rounds ? Bad for the bill ...
<Rutherther>csantosb: there is really no mechanism for rounds when the evaluation itself fails, it cannot schedule builds at that point
<Rutherther>the builds necessary for guix pull to finish are sort of invisible for cuirass
<csantosb>I meant, CI tests pass provided they pass ten times in a row; otherwise, this is to be considered a failure
<nmeum>does somebody have the time to review https://codeberg.org/guix/guix/pulls/5260 ? it adds a new webdav server service and includes both system tests and documentation (so it should hopefully be straightforward to review).
<efraim>trying to reference rust-crates is really difficult
<Rutherther>efraim: why are you referencing them?
<efraim>I'm trying to arbitrarily create references to them so I can replace them ad-hoc in qemu (and for mesa)
<efraim>but I figured I should try a different method after banging my head on it most of the day
<futurile>they're changing around so rapidly as well, the new importer seems to pull in the minor version source crates
<efraim>I've decided to use 'unpack-rust-crates to drop them in the build and then I can reference them from there
<Rutherther>maybe there should be a 'this-package-input-rust-crate' or something that finds the input
<efraim>this-package-native-input works great, but not when I try something like #$(this-package-native-input (string-append "rust-" crate-name "-" crate-version ".tar.gz"))
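Rutherther's hypothetical helper might look roughly like the sketch below, operating on an input alist inside a build phase rather than through the this-package-* macros. The name and behaviour are assumptions based on the discussion; nothing like this exists in Guix today:

```scheme
;; Hypothetical helper (does not exist in Guix): look up a rust-crate
;; tarball among an input alist such as %build-inputs by crate name,
;; instead of spelling out the exact versioned file name.
;; Uses `find' from (srfi srfi-1).
(define (input-rust-crate inputs crate-name)
  (let ((entry (find (lambda (input)
                       (string-prefix? (string-append "rust-" crate-name "-")
                                       (car input)))
                     inputs)))
    (and entry (cdr entry))))
```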
<evilsetg>hi guix!
<untrusem>hello evilsetg
<futurile>o/ both
<theesm>o/
<seres_>o/
<Rutherther>cbaines: it doesn't seem that the guile-ssh substitute would help cuirass to evaluate
<Rutherther>hmm, although maybe it did. It could be just different architectures
<cbaines>this specification seems to be working https://ci.guix.gnu.org/jobset/master I assume you mean evaluate the guix one https://ci.guix.gnu.org/jobset/guix
<cbaines>which does look broken
<Rutherther>yes
<Rutherther>cbaines: could you build the guile-ssh for all architectures the guix jobset runs for?
<Rutherther>that's x86_64-linux, aarch64-linux, powerpc64le-linux, i686-linux. I expect aarch64 to be the problem here
<cbaines>Rutherther, do you see anything in the log suggesting that's the problem?
<cbaines>or rather there's a failing build for libssh/guile-ssh?
<cbaines>actually, the earlier failed evaluations have the build failures
<cbaines>the builds must not be being reattempted, but the failures are still present
<cbaines>what a mess
<Rutherther>cbaines: cuirass evaluates all the systems in parallel after it builds for its system. And here the evaluations need builds. Unfortunately these parallel evaluations are not logged at all.
<Rutherther>cbaines: so I am expecting it is still a build failure, just for a different architecture and we do not see the log just because of the evaluations not being logged at all. I think it has to do with the parallel evaluation, n-par-map probably somehow affects the stdout or something. Not sure precisely
<cbaines>so for eval 2140205 I see something in the logs to suggest the failures relate to powerpc64le-linux and i686-linux, but nothing more specific
<Rutherther> https://ci.guix.gnu.org/build/18960365/details seems that it's going to be i686, not aarch64 as I thought initially
<Rutherther>powerpc64le seems to be built already
<Rutherther>but here the culprit is libssh itself
<cbaines>I'm pretty sure I had guile-ssh failing locally, but I may have misread the output
<Rutherther>on i686?
<cbaines>maybe both packages don't build reliably, I think there was a discussion on libssh build failures on the Pull Request at least
<cbaines>I was looking again at https://codeberg.org/guix/guix/issues/4332 trying to better articulate the impact of the current situation
<cbaines>fixing the grafting issue would help here
<Rutherther>cbaines: hm, tbh I don't understand how that helps - you still need to build guile-ssh even without grafts, no?
<Rutherther>Also, Cuirass builds without grafts
<cbaines>not for the guix self/pull derivations
<cbaines>that's currently not possible
<Rutherther>oh, so you mean for other packages
<cbaines>in this case, it would mean Cuirass could compute the derivations without having to perform builds
<cbaines>it would still need to build guix for x86, but it would avoid this issue for other systems
<Rutherther>I mean... there is currently torbrowser that is not substitutable. And I don't understand why. It's substitutable with --no-grafts, but with grafts you need to build it. I am wondering if it is related to this issue. No clue at all, though
<cbaines>it shouldn't be related, but that still sounds odd
<cbaines>Rutherther, is there an open issue for the torbrowser thing?
<Rutherther>seems not, yesterday someone was saying they will make it iirc
<Rutherther> https://logs.guix.gnu.org/guix/2026-03-01.log#205625 see here for discussion with redacted who mentioned it
<untrusem>it was me
<untrusem>I said the "making an issue" bit
<cbaines>I think I see the issue locally, `guix build torbrowser` doesn't seem to produce a derivation matching any of the outputs here https://data.guix.gnu.org/repository/1/branch/master/package/torbrowser/output-history
<Rutherther>cbaines: yeah, I think that's it... but why that is, I do not understand
<cbaines>go-gitlab-torproject-org-tpo-anti-censorship-pluggable-transports-snowflake-v2 seems to exhibit the same behaviour
<cbaines>in other news, my guix pull on berlin has finally finished
<cbaines>so running guix build --system=i686-linux guile-ssh
<cbaines>I'll try with --cores=1 if that fails
<Rutherther>in the meantime I was trying to restart it on Cuirass, since that's the only thing I can do myself. And it was failing with different tests based on the build machine
<Rutherther>it's either these two tests: torture_ssh_channel_direct_tcpip_success, torture_ssh_channel_direct_tcpip_failure. Or this one by itself: torture_forwarded_tcpip_callback
<Rutherther>cbaines: did guile-ssh succeed always with --cores=1? Because if so, maybe we should disable parallel tests on it
<Rutherther>it's at least a more permanent solution than making sure guile-ssh builds each time the derivation changes
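Disabling parallel tests as Rutherther suggests could be sketched like this, assuming guile-ssh takes the usual gnu-build-system keyword arguments (an untested sketch, not a confirmed fix):

```scheme
;; Sketch: a guile-ssh variant whose test suite runs single-threaded,
;; as suggested above.  Assumes the usual gnu-build-system arguments.
(define guile-ssh/serial-tests
  (package
    (inherit guile-ssh)
    (arguments
     (substitute-keyword-arguments (package-arguments guile-ssh)
       ((#:parallel-tests? _ #t) #f)))))
```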
<cbaines>Rutherther, nope, I got a libssh test failure with --cores=1
<cbaines>so one wild hack for comparing derivations is to use wdiff | colordiff on the builder scripts
<cbaines>doing that with go-github-com-quic-go-qpack shows that the difference is specification-qifs
<Rutherther>searching for specification-qifs I found this "#$(this-package-native-input "specification-qifs")" which seems wrong. It should probably be "#+". Not saying this is the issue
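For context on the #$ vs #+ distinction Rutherther mentions: in a G-expression, #+ (ungexp-native) refers to the native build of an object, which is what build-time-only inputs should use under cross-compilation. A toy illustration, with coreutils chosen arbitrarily:

```scheme
;; #$obj (ungexp) inserts the object built for the *target* system;
;; #+obj (ungexp-native) inserts it built for the *build* system, which
;; is what you want for tools that run at build time when cross-compiling.
(define build-script
  #~(begin
      ;; Runs at build time, so the tool must be a native build:
      (system* (string-append #+coreutils "/bin/true"))))
```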
<cbaines>it seems that the grafting isn't being hoisted, which is actually a similar issue to the guix pull/self case
<cbaines>but this generally works with packages being inputs to other packages
<cbaines>maybe something is wrong with the copy-build-system
<cbaines>yeah, I'm not seeing the problem, I'll raise an issue about it
<cbaines> https://codeberg.org/guix/guix/issues/6820
<cbaines>...
<Rutherther>hmm, yeah also no clue
<cbaines>in other news, I've been able to build guile-ssh locally for i686-linux, but it's still failing to build on berlin
<Rutherther>but thanks for the investigation, it would take me more time to find out it's this derivation specifically
<cbaines>it's annoying from the perspective of working on substitute servers, as I'm sure people will see this and go "just those slow substitute servers again"
<Rutherther>cbaines: and is libssh built on berlin now?
<cbaines>not for i686-linux yet at least
<meaty>is there a way to find out the last version of guix that could build a package?
<gabber>meaty: i usually use the search function at ci.guix.gnu.org
<Rutherther>meaty: you can try through cuirass, searching for the package and it shows the build history
<ieure>meaty, Yeeeess, but it kind of sucks. You can clone the repo and use a `git bisect' script to use `guix time-machine' to try building the package on that commit.
<ieure>meaty, What package?
<meaty>ieure: both ungoogled-chromium and fastboot, i'm trying to get grapheneos installed
<meaty>huh, nvm, i guess i had internet gremlins. fastboot substitutes now.
<ieure>meaty, I was trying to use ungoogled-chromium yesterday for some WebUSB stuff, at that time, the build was broken in master.
<ieure>Maybe it's fixed, it looks like some of the failures were due to the box being out of disk.
<cbaines>civodul, does guile-sqlite3 have a home yet? just asking after your move of some other Guile things to Codeberg
<civodul>cbaines: nope! do you want to move it there?
<civodul>noe was asking me the same thing
<civodul>if you have the latest copy, please go ahead
<civodul>the last commit i have here is 8772658fd9acf076d58ce562720ac16b1c562508
<civodul>haven’t checked SWH
<civodul>ACTION heads back home
<civodul>ttyl!
<cbaines>I can try and set something up, I want to try fixing that sqlite-close bug
<amano>Can gnu guix orchestrate other machines on the premise and in the cloud?
<ieure>amano, Sure... Ansible is packaged. lol
<identity>amano: fsvo orchestrate. there is guix deploy, a system service for Ganeti, and probably other things i do not know about
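For the record, a minimal `guix deploy` deployment file looks roughly like this; the host name, user, and `my-os` are placeholders:

```scheme
;; Sketch of a `guix deploy' deployment file: a list of machines to
;; manage over SSH.  `my-os' stands for an <operating-system> definition.
(list (machine
       (operating-system my-os)
       (environment managed-host-environment-type)
       (configuration (machine-ssh-configuration
                       (host-name "node.example.org")
                       (system "x86_64-linux")
                       (user "deploy")))))
```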
<folaht>Hey there. I seem to have trouble displaying thumbnails of webp images.
<ieure>folaht, In what software?
<folaht>Thunar
<bjc>'static-networking' on the hurd seems to require 'udev'. i ran into the same issue in the installer. is this a known issue?
<bjc>to get the installer to work i used no networking and configured it after login
<Rutherther>bjc: only the default "no networking" option is supported on hurd iirc
<bjc>i don't think any hurd options are supported, but yeah =)
<bjc>might make a good first project since that's the first major hurdle
<Rutherther>although somehow it should work with qemu network. But I don't think it's possible to just use default config from the installer with anything other than "no networking"
<bjc>there's a special hurd+qemu-networking service that it uses
<bjc>well, i mean if i can fix the udev dependency somehow (stub maybe?), then the next installer build should pick that up
<bjc>at least for static networking. i know dhcp has options, as well, but i'm not familiar with them yet (and they also have the same udev issue)
<Rutherther>take a look at the hurd-barebones-os / hurd64-barebones-os definitions in guix. I expect them to work
<bjc>yeah, they're using the hurd's custom qemu networking
<bjc>%base-services+qemu-networking/hurd
<Rutherther>yes, they remove the udev requirement
<bjc>i'll dig through it and see if i can figure out how they're bypassing udev deps
<bjc>are they just deleting it from the services list?
<Rutherther>they just remove the requirement of static-networking by overriding the field
<bjc>ok
<Rutherther>no, it is never in the services list. That is what the error is saying: that it wants it, but it is not available
<bjc>i don't see where that's happening
<Rutherther>look up the %qemu-static-networking definition
<Rutherther>and also the <static-networking> record default values
<bjc>ahh, i see
<bjc>thank you
<bjc>so, two possible obvious approaches: add a <hurd-static-networking> which removes 'udev from requirement, or, add a udev shepherd-service for hurd which doesn't do anything
<bjc>i think the latter is cleaner, but maybe i'm not seeing something
<Rutherther>what are you trying to solve in the first place?
<bjc>get the normal 'static-networking' configuration working, which will also allow for that to be selected from the installer
<Rutherther>the installer is customizable, it can add the requirement field
<bjc>the installer just writes a static-networking-service-type, i think, so it's not hurd-specific. which is why it wants udev
<Rutherther>what I am saying is that it can be hurd specific. The installer can be changed
<bjc>yeah, that's true
<bjc>i think overall it'd be better to have it not be. as much as possible the hurd os config should be the same as linux
<bjc>stubbing udev allows that, but is also a kludge. i just prefer it to having a custom hurd-*-networking-service-type
<bjc>at some point, dhcp will have the same issue, for instance
<bjc>i am also unsure how feasible stubbing udev is =)
<bjc>doesn't look too hard: create a hurd-udev-shepherd-service that provisions udev, and add that to the service list
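The stub bjc describes might look roughly like the following; the names and the extension point are assumptions based on the discussion, not the contents of the actual paste:

```scheme
;; Sketch of a do-nothing Shepherd service that merely provisions 'udev,
;; so that static-networking's 'udev requirement is satisfied on GNU/Hurd.
(define hurd-udev-stub-service
  (simple-service 'hurd-udev-stub shepherd-root-service-type
                  (list (shepherd-service
                         (provision '(udev))
                         (requirement '())
                         (start #~(const #t))
                         (stop #~(const #f))
                         (documentation
                          "Stub satisfying the 'udev requirement on GNU/Hurd.")))))
```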
<bjc>should even be able to test it directly in the os configuration
<bjc>eyy it works. or at least doesn't error when building. we'll see what happens when i boot it =)
<bjc>seems to work. weirdly it looks like we're not using 'settrans' on /servers/socket/*
<bjc>i wonder how this is happening
<bjc>if anyone's interested: https://paste.debian.net/hidden/720d7ae5
<bjc>the only special sauce is the simple-service that creates udev, everything else is standard
<bjc>i won't have time to build a proper patch this week since i'll be away, but the above does work
<futurile>Q: what are the circumstances that justify making a package "hidden"?
<Rutherther>futurile: https://codeberg.org/guix/guix/pulls/5996#issuecomment-10828109
<futurile>Rutherther: thanks, that's great
<ammaratef45>Hello, new here, joined to ask if someone knows how to set up the guix package manager on nixos? (tried to install the guix standalone system but sadly my wifi chip is not supported)
<Rutherther>ammaratef45: it's going to be easiest using the nixos module for guix.
<Rutherther>ammaratef45: the regular guix install.sh will not work well on NixOS
<ammaratef45>Rutherther: I installed the module but it didn't install the daemon too, was wondering if there is more to it than just adding guix to the user packages
<Rutherther>ammaratef45: the nixos module definitely adds the daemon service as well, as you can see here: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/guix/default.nix#L285
<Rutherther>ammaratef45: yes there is more to it, you need to properly set up the users/permissions and so on. Just installing the package will not configure it properly for it to work well.
<ammaratef45>Rutherther: I was hoping there is a guide to help, but if not I'll go on and explore/read more
<ammaratef45>Thanks for the link, I'll use it as starting point to understand this (new to nix and guix in general)
<Rutherther>I am afraid there is no guide since installation on NixOS is not easy as the services are managed by NixOS, users are managed by NixOS etc.
<Rutherther>ammaratef45: but really the least configuration possible to make it work is just "services.guix.enable = true" option.
<Rutherther>and for other options, see https://search.nixos.org/options?channel=25.11&query=services.guix. But now we're getting quite offtopic
<ammaratef45>Rutherther: worked like a charm, tysm!
<graywolf>Hello :) Does a channel with historical versions of node package exist by any chance? I see we package just 22.
<bjc>guix pull still hangs at 1%, even with a git checkout as explained in the blog post
<bjc>(hurd guix pull, that is)
<bjc>i'll let it run for a while, but it doesn't seem to be doing anything
<ieure>bjc, I believe it came up in the last couple weeks that `guix pull' doesn't work on Hurd.
<bjc>the blog says it works with a local git checkout, but it fails in exactly the same way
<Rutherther>ieure: but the blog post published a few days ago mentions it works with a local checkout, which sort of contradicts that
<bjc>the blog has been wrong in a few places, unfortunately
<ieure>I see.
<Rutherther>it has been reviewed by yelninei who has tackled this guix pull issue quite a lot, so I would expect it to be accurate
<bjc>i've already sent in some corrections, but it seems like most didn't make it up
<bjc>so i know it's not particularly accurate
<bjc>the local git checkout was just the latest thing i've checked, and it also doesn't seem to work
<bjc>in fact, as it's currently written it won't even boot, since it'll still be looking for hd0 instead of wd0, as the command line is lacking the "noide" option
<Rutherther>🤦‍♂️
<Rutherther>that's quite an oopsie then
<bjc>there's still a lot of stuff needed after putting that in
<bjc>it's basically completely wrong, unfortunately
<futurile>It is beyond me why GitHub has a releases API that provides checksums for artefacts, except for the source artefact
<futurile>It means downstreams almost never benefit from having checksums
<futurile>I assume someone in GitHub just thinks everyone should just curl all their binaries directly from each projects release page
<graywolf>futurile: The source artifacts are generated on the fly and the checksums do change, so I do not see much value in providing them (the checksums)
<graywolf>The "correct" way is to package the source and explicitly attach it to the release. Then it stays static. Few projects bother.
<ieure>futurile, Are you positing that the company who bought and operates NPM might not take supply chain security seriously enough?????
<ieure>(Every extraneous ? doubles the sarcasm)
<futurile>Yeah, who'd have thunk they'd release a half-finished thing with a major use-case not covered and call it "done"
<ieure>futurile, My work uses GitHub, today is notable for being the literal first day since 2026-01-01 *not* to have a major incident that stopped or delayed our work.
<futurile>The only thing I've found so far is I can ask the upstream to create the 'xz' archive, this will get a checksum. Even though it's kind of silly as then there's a "Sources.gz" and a <whatever>.xz archive
<futurile>ieure: turns out that "running stuff" is really hard, and moving Clouds is a nightmare. I have no idea what happened to them over the last few years
<ieure>futurile, Yeah. Forced to move to Azure while also getting forced to ship llm slop to production. Seems like a bad time.