IRC channel logs
2026-02-09.log
<kestrelwx>FuncProgLinux: Thanks for running the build anyway, but I should've checked first thing what the instruction was. It's failing for me on an `rorx` instruction, which is BMI2. Starting late 2013 most processors seem to have it.
<sneek>Welcome back kestrelwx, you have 1 message!
<sneek>kestrelwx, FuncProgLinux says: that I was able to build with these outputs, even with 3 rounds: /gnu/store/zmf2sxdl3iwzdp3w4sw0vwda0bm8cihw-mesa-25.2.3-bin /gnu/store/sd1hghag49mzwm8phwlf7xa0yf7vjmkq-mesa-25.2.3
<Tadhgmister>question: I am looking at packaging a server software for guix and it wants a local state directory specified at build time. I figure setting that to /var is fine, but it tries to create subfolders there during the install step and that obviously fails inside the build daemon.
<Tadhgmister>Should I try to edit the install step to not make those folders (they should be created in an associated system service anyway), or is there another option I should be using?
<ieure>Tadhgmister, Patch it to not create them. Presumably it has some sort of CLI or config file option to use a different location, which you'll need to use in your service.
<Tadhgmister>yeah I found it, wasn't as obscure to track down as I was expecting :D
<Tadhgmister>there are definitely CLI options but I'd like to be able to run it locally on my system and play with it for a bit before nailing down exactly what the guix service should do
<meati>does anyone have a script/quick way to check for updates to a set of packages after upgrading
<meati>I'd like to know what new software updates I'd get after a reconfigure
<ieure>meati, I don't know of anything like that. You could `guix system build' your configuration and compare the printed path with your current system generation.
<ieure>but it'll do nearly all the work `guix system reconfigure' does, so you're going to be downloading or building everything. I suspect you don't want that.
<Tadhgmister>if you want to know before the reconfigure, `guix system build os.scm --dry-run` should tell you which packages would be rebuilt / substituted
<Tadhgmister>after doing the reconfigure your best bet would be something with `guix graph` comparing current and previous revisions, but that would not be straightforward
<meati>that helps. I guess I can pare '--dry-run' down to just the "user-facing" packages that get new version numbers
<Tadhgmister>right, because even then the version may not be bumped; it may be that a dependency was updated... yeah, between that and not having a clear notion of which packages are "user facing" (would enabling the ssh service mean the ssh package is user facing?), a concrete widely usable version of that feature is quite difficult
<Tadhgmister>ugh, FreeRADIUS wants to put an RSA certificate in the "read only single machine data" folder... so I guess that can't be in the guix store either :/
<Tadhgmister>I got so excited when it compiled inside a pure guix shell and had configure flags for reproducible builds and cross compiling that match what guix does... seemed like it would be *so easy* to package
<FuncProgLinux>Is it possible to invoke a bash script inside a Guix herd service?
<Tadhgmister>yes, if the bash script is part of a proper package it will have its shebang replaced with a valid absolute path into the /gnu/store; otherwise you may need to do that manually on the script
<Tadhgmister>actually the shebang only matters if you need it to be invokable by something out of your control, `(invoke (string-append #$bash "/bin/sh") #$my-script)` would work too
<FuncProgLinux>I use a script to run an AppImage (chromium based) but I still update it by hand and have to call it from the terminal as an installed command. But :P that's not ideal.
<Tadhgmister>"update it by hand" like to point to something in /gnu/store, or some other update?
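The dry-run check described above can be sketched as follows; `os.scm` is a placeholder for your actual system configuration file.

```shell
# Preview what a reconfigure would build or substitute, without doing it:
guix system build os.scm --dry-run

# Compare the would-be system path against the current generation:
readlink -f /run/current-system
```

If the dry run lists nothing to build or download, a reconfigure would be a no-op for packages.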
<Tadhgmister>If it depends on some guix package it should be easier to use `(mixed-text-file)` or similar from the (guix gexp) module
<FuncProgLinux>Nah, it's literally the most "duct-taped idea" I could figure out.
<Tadhgmister>so it sounds like getting a guile script or your home config to perform the necessary updates would be a better first step than getting herd to invoke it
<FuncProgLinux>Do a bash script that checks required directories in your home folder. And then run the appimage inside a guix container with FHS and some dependencies (gcc-toolchain), wait for the thing to boot up and use it.
<FuncProgLinux>Then install it as a home-service-home-something, I don't have access to the manual rn
<ieure>Tadhgmister, What's the cert for? Is this like something self-signed for local TLS?
<Tadhgmister>I assume? It is a RADIUS server but that doesn't mean very much to me yet
<ieure>You can uh... simply not do that, lol
<ieure>Let's Encrypt is happy to give you any number of real certs.
<ieure>I'm genuinely not sure how to install a Guix System-wide cert. Replace nss-certs in %base-packages with a variant that includes it, maybe?
<Tadhgmister>I'm so not there yet, right now I'm just running their bootstrap script in the certs folder as a build step. I know that is absolutely the wrong way to do it, but it can now run enough to load config files so I can actually experiment with what I can specify at runtime and not at compile time
<Tadhgmister>Fairly certain I wouldn't even want this to be a system wide cert, it feels more like the guix authorize private key or syncthing's device id cert or ssh private key
<rustyguix>Hi! Updated to the latest rust-team branch, and when trying to build rust-1.91, it re-builds all the way from 1.74. Am I doing something wrong? 1. baobit-factory hasn'
<rustyguix>I need some help from the rust-team people to better understand how to be able to support rust-1.93 until it somehow makes it to master.
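FuncProgLinux's "AppImage in an FHS container" idea can be sketched like this; the AppImage filename is hypothetical, and the exact package list needed will depend on the application.

```shell
# Run an AppImage inside an ephemeral container that emulates the
# Filesystem Hierarchy Standard (so /lib, /usr/bin etc. exist):
guix shell --container --emulate-fhs --network \
  gcc-toolchain bash coreutils grep sed \
  -- ./MyApp.AppImage --appimage-extract-and-run
```

`--appimage-extract-and-run` sidesteps FUSE inside the container; a Chromium-based app may additionally need X11/Wayland and audio libraries shared into the container.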
<rustyguix>So I am wondering, in the interim, what is the best practice to follow.
<rustyguix>So one thing I notice is that I had to do git reset --hard to resync with the rust-team branch, as they force-push. That is fine, but I just wish to make sure that is expected. Secondly, to build rust 1.91 now, it rebuilds all the way from rust 1.74, and I wonder if that is also expected.
<efraim>rustyguix: the python-team merge caused a rebuild of the rust compilers, so unfortunately it was expected that the whole rust bootstrap would need to be rebuilt
<efraim>applying the update to rust-1.93 is on my TODO for this week
<untrusem>rustyguix: yeah, I saw that, I force-pushed with changes from rust-team
<untrusem>meanwhile you can get substitutes of rust-1.91-93 from ci.guix.moe
<rustyguix>Generally speaking, how stable is the rust-team branch? Do people use it for production?
<rustyguix>Hmmm, so what do people do in general if they need to build something stable with a more recent rust version, not necessarily the latest, but more recent than the one in guix stable?
<efraim>we're not normally this far behind on rust. generally they'd use a "future rust" from rust.scm, which in this case is only 1.88
<efraim>the rust-team might get a rebuild again soon, I'm thinking of moving llvm-15 earlier so I can remove the riscv64 workaround
<efraim>I'm pretty far behind on the riscv64 build, so it's not likely to be soon
<Rutherther>what's the plan with rust-team, any estimates on when it will get merged and what needs to happen before the merge?
<rustyguix>efraim: ok I see. So 1.88 is from master, right? Is master considered "stable", compared to say v1.5?
<efraim>Rutherther: I wanted to get a newer compiler in, and 1.93 is the newest.
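The resync against a force-pushed team branch mentioned above looks like this; `origin` is assumed to be the remote hosting the branch, and this discards any local commits on it.

```shell
# Fetch the rewritten history, then move the local branch to match it.
# WARNING: --hard discards local commits and working-tree changes.
git fetch origin
git checkout rust-team
git reset --hard origin/rust-team
```

Keeping local work on a separate branch and rebasing it onto the refreshed `rust-team` avoids losing it when upstream force-pushes.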
<efraim>Then it's time to sit in the merge queue
<Rutherther>rustyguix: yes, master is what users use and developers try to make sure it's okay (of course sometimes there might be accidents)
<efraim>after that we'll do any actual stabilization; since almost nothing breaks with rust, it's more about removing extra crate versions and unbundling anything that was missed
<Rutherther>well, the queue is going to take quite a long time, I presume
<efraim>rustyguix: 1.5 is really a snapshot for installing; as a rolling release you really want the latest master
<efraim>with how few things break on rust-team it's probably ok to use a newer compiler from there, but rebuilds due to rebasing or changes in the bootstrap aren't uncommon
<efraim>mrustc isn't quite ready for the next jump so I don't see bumping it before the next rust-team merge
<rustyguix>efraim: would you say there's a big difference in "stability" or reliability between using rust 1.88 from master versus 1.90 from the rust-team branch?
<rustyguix>for our development workflows, we'll work with the rust-team branch, but for production, we now need to decide whether to use rust 1.88 from master or 1.90 from the rust-team branch
<rustyguix>efraim: as a side note, so far our builds have worked using rust 1.93, on top of the rust-team branch
<efraim>probably not, each would've had to build successfully to be added to rust.scm, but neither was really tested, especially with the rust test suite
<efraim>rustyguix: that's good to know, and a sign things are working :)
<rustyguix>has rust 1.90 on the rust-team branch gone through the test suite? speaking of which, how can I learn how the test suite is run, and even run it myself?
<efraim>it's just the rust test suite as part of building the package normally.
<efraim>for the bootstrap rust packages we skip the tests since we only need them for the next version, but for the "rust" package itself we run all the tests
<efraim>so after test-building the bootstrap rust-1.93 I'll update the rust package itself to 1.93 and go through building the full package
<efraim>untrusem: can you tell me about your machine? I haven't had to set RUST_MIN_STACK before
<rustyguix>Ok, I see. Is there benefit for community members to run these tests as well? I'd be happy to run them, even if that is just for our development workflows, as we'll need to add newer rust versions very frequently.
<efraim>there is the user-reviewed tag on codeberg that says "I've built this (and tested it) and it works for me"
<rustyguix>efraim: I ran into this issue that required RUST_MIN_STACK on a fedora 43 operating system.
<rustyguix>efraim: sorry to ask again, but have 1.89 and 1.90 gone through the test suite, or does this just happen when it gets merged into master?
<efraim>on rust-team only 1.90 runs the test suite, since it's the default rust package
<rustyguix>ok, got it. So in terms of risk delta, using rust 1.90 versus using 1.88 on master, is there a big difference? I am new to guix, sorry for all these questions.
<efraim>I would think 1.90 with the tests would be better than 1.88 without the tests, but both are probably fine
<rustyguix>Ok, thanks! Happy to help with rust-1.91+! Please let me know if you have any questions!
<untrusem>efraim: I have a refurbished ThinkPad T480, the RUST_MIN_STACK phase was because rustyguix got that error while building
<efraim>I was able to build all the rusts without any issues
<untrusem>efraim: yay, although I knew that it would build 😛
<andreas-e>Hello Efraim! I am a bit confused about rust bootstrapping. mrustc claims to compile 1.74, and we have a 1.74 bootstrap package. But 1.74 itself depends on 1.73.
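Running the test suite yourself, per efraim's description, just means building the default `rust` package; a sketch, assuming a configured and compiled checkout of the rust-team branch in the current directory.

```shell
# From a compiled Guix checkout (after ./bootstrap, ./configure, make),
# build the default rust package; its check phase runs the upstream
# Rust test suite as part of the normal build:
./pre-inst-env guix build rust

# The rust-bootstrap-* packages skip tests, since they only exist to
# build the next compiler in the chain.
```

Forcing a local rebuild of an already-substituted package can be done with `guix build rust --check`.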
<andreas-e>I am wondering because the 1.54 bootstrap fails to build on the bordeaux build farm, but apparently with very limited damage.
<efraim>rust-bootstrap-1.74 builds with mrustc and rust-1.74 builds with rust-1.73
<efraim>in the native-inputs for rust-1.75 you can see it chooses rust-bootstrap-1.74 over the base-rust input of rust-1.74
<andreas-e>Ah, thanks! So I needed to look one package further. And I see the short bootstrap only works on x86_64. Is this still valid?
<efraim>I haven't tested recently to see if the other architectures can use the 1.74 bootstrap path
<efraim>or if ppc64le magically started working
<Alavi_me>Hi everybody. I have a problem: when I try to install something with guix, `guix install tokei` hangs on `The following will be installed [] substitutes:`. Why does this happen?
<Alavi_me>>`guix install tokei` it hangs on `The following will be installed [] substitutes:`. And it is fixed after a restart. Any ideas?
<kestrelwx>Alavi_me: could be your connection to the substitute servers.
<kestrelwx>I've had a request hang for an indefinite amount of time when one of the servers used by me was having trouble.
<venoflux>I've been using guix as my secondary package manager for emacs and treesitter languages and I am quite satisfied so far. However, I am stuck when it comes to non-MELPA packages. It was easy to use a github link via the elpaca package, but I am not sure how to define an emacs package from github and use it without creating a custom package for guix. Any resources or document sections I've been missing?
<untrusem>creating emacs packages in guix is quite easy to be honest
<venoflux>Oh, another channel, alright I will take a look, thanks.
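When substitution hangs the way Alavi_me describes, one way to check whether the substitute servers are the problem is `guix weather`; a sketch, using the package name from the conversation.

```shell
# Report which substitute servers have a substitute for the package,
# plus their download speed and queue state:
guix weather tokei

# Query a single server in isolation:
guix weather tokei --substitute-urls=https://ci.guix.gnu.org
```

If one server is unresponsive, passing `--substitute-urls` with only the healthy servers to the failing `guix install` can work around the hang.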
<venoflux>I am certain that it is easy and what I am having is a skill issue
<untrusem>venoflux: I don't mind helping you with that, though we don't have a github discussion or discourse where one could make a thread
<untrusem>there are templates provided in the guix repo in `etc/snippets`, but tempel has switched to using a single file, so I will make a pr for that
<orahcio>Hi, a question about python-build-system. Will the pyproject-build-system be merged into python-build-system? Maybe the issue https://codeberg.org/guix/guix/issues/5401 wouldn't have happened if the gpodder package had used the preferred python-build-system instead of the experimental one.
<identity>«Eventually this build system will be deprecated and merged back into PYTHON-BUILD-SYSTEM, probably some time in 2024.» (info "(guix) Build Systems")
<orahcio>I don't know why to use pyproject-build-system if that package can be built with the python-build-system
<identity>orahcio: as the name suggests, it supports pyproject.toml, but also PEP 517, and has a different API to accommodate them
<identity>it is supposed to become the new python-build-system down the line
<futurile>orahcio: there's a load of them from Nicolas Graves switching on the first page
<cdegroot>Am I missing something? Isn't there a way to do a sparse checkout of channels? I'm setting up a new box, and downloading hundreds of megabytes of Git history... no fun with a rural internet connection :)
<efraim>oh nice, got the tests passing for rust-1.93. now to mess them up and make the code presentable
<untrusem>cdegroot, no idea but I would want that too
<orahcio>futurile: If pyproject-build-system is not the preferred one, why switch to it?
<futurile>orahcio: I'm saying it *is* the preferred one; if you look at the actions of the python-team, pyproject-build-system is the *newer* one and reflects what python projects do these days
<futurile>orahcio: I'm not on python-team - I'm just telling you what I see from their commits - and I've committed a few changes there
<efraim>looks like a typo, "is not" vs "is now"
<futurile>sorry orahcio, I confused you with my typo
<venoflux>untrusem, sorry for the late reply and thank you so much, the package I was trying to build was in the channel repo you linked
<untrusem>venoflux: if it's on melpa, it will be on that channel
<Alavi_me>kestrelwx: >could be your connection to the substitute servers... shouldn't that be handled? go to the next substitute server? print an error message? something?
<futurile>FuncProgLinux: not too much, ran into a package that's using mercurial so trying to update that
<futurile>for some reason our updater isn't playing nicely with it - maybe I can get url-fetch to just grab it
<FuncProgLinux>I've never worked with mercurial :( I think I cannot help with that
<FuncProgLinux>Danish translations breaking the build phase is the most random error I've ever seen in a while
<efraim>you did say in a while, so I've been silently commiserating instead of trying to one-up
<efraim>/gnu/store/f2l6dlvlcgvb8fx0sf0zdf7bhw1qq79g-zoxide-0.9.8/bin/zoxide: ELF 32-bit MSB pie executable, PowerPC or cisco 4500, version 1 (SYSV), dynamically linked, interpreter /gnu/store/c0gkmhh4gnncbgm5p547wjvvz2s7dnjp-glibc-cross-powerpc-linux-gnu-2.41/lib/ld.so.1, for GNU/Linux 3.2.0, stripped, built with rust-1.93
<Guest21>Someone know the current status of ZFS development in GNU Guix?
<rrobin>is there a cli argument to build a profile in the store - like guix shell PKG but just prints the profile path (possibly offloading)
<efraim>`guix shell hello -n` shows the derivation that would be built, and you can extract the path from there
<FuncProgLinux>efraim: Guix on PowerPC? That's a new one for me. Is it on a Macbook?
<efraim>FuncProgLinux: I misspoke, it's a cross-compiled binary that I use for testing the cross compiler in rust
<efraim>but yes, I have an iBook G4 with guix on top of debian
<rrobin>efraim: thanks, that looks closer - basically trying to work on one machine but can only test on another - the back and forth is driving me mad
<rrobin>weird, it did work for hello, but not when using -L.
<ieure>FuncProgLinux, I know some folks run Guix on Talos Secure Workstation hardware, which is a modern PPC machine.
<rrobin>you know what, nvm, it is probably easier to pipe a socket to the guix daemon on the remote system :D
<kestrelwx>efraim: Was the crash for your GPU fixed eventually?
<efraim>yeah, but that's powerpc64le, not 32-bit powerpc :)
<FuncProgLinux>I mean I've heard of it because the Gamecube and the Wii had PPC processors, if I'm not mistaken
<ieure>Many different systems have shipped with various PPC CPUs for decades.
<ieure>FuncProgLinux, Probably easiest to look at the Raptor Computing Systems marketing materials.
<bjc>isn't power9 still a thing?
<ieure>As I stated, I believe there are folks running Guix on it.
<bjc>yeah, it just disappeared from desktops. but, tbf, it wasn't really on desktops either
<rustyguix>I'm running guix-daemon on a VM with a small root disk (38GB total, ~6GB free) and a 100GB volume mounted at /gnu/store. A git-fetch with (recursive? #t) fails with "No space left on device" when cloning submodules. I've tried:
<ieure>Speaking of Guix on unusual hardware, I got it running on a 2011 MacBook Air yesterday.
<kestrelwx>Is that how you got to see a build timeout?
<ieure>rustyguix, How much RAM, and do you have swap?
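One way to get just the profile path rrobin asks about: inside a `guix shell`, the variable `GUIX_ENVIRONMENT` points at the profile in the store, so a one-shot command can print it.

```shell
# Build (or substitute) the environment and print only its profile path:
guix shell hello -- sh -c 'echo "$GUIX_ENVIRONMENT"'

# Extra package paths (-L) and manifests work the same way:
guix shell -m manifest.scm -- sh -c 'echo "$GUIX_ENVIRONMENT"'
```

The printed `/gnu/store/...-profile` path can then be copied to another machine with `guix copy`, which may be what the offloading-style workflow needs.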
<ieure>I believe builds land in /tmp; I've seen similar issues due to low RAM / no swap situations.
<rustyguix>... the sandbox tmpfs fills up. I found --chroot-directory=DIR. Could I use this to expose my volume's tmp directory inside build chroots, then have builds use it via TMPDIR? Or is there a better way to configure the sandbox to use more disk space for temp files?
<rustyguix>ieure: 3.7GB RAM, 4GB swap. /tmp is bind-mounted to my 100GB volume (ext4 on /dev/sdb), so host /tmp has ~82GB free. But git-fetch still fails with "no space left" - I think the guix-daemon sandbox creates its own tmpfs that ignores the host mount?
<ieure>rustyguix, I'm not 100% sure. Should be easy to tell by adding more swap and seeing if you get the same failure. How big is the repo you're cloning?
<rustyguix>I need to check now. I think I'll just change the logic to not clone the submodules. But let me check!
<rustyguix>the issue may be with using (recursive? #t) with git-fetch, which pulls in llvm-project as a submodule - that's ~2GB+
<Guest21>rustyguix: The guix daemon uses tmp as the build dir. On some linux distributions, they just use RAM for this. So you can run out of space with terabytes of disk space, because it uses RAM. You need to change the flag in the guix daemon to use a dir that is actually on the disk.
<rustyguix>Guest21: Which flag? I checked guix-daemon --help but didn't see --build-dir. Is it --chroot-directory or something else?
<ieure>Guest21, ZFS is stalled, someone was interested in working on it, but has other stuff going on.
<untrusem>do I need to get into a shell container to run python scripts?
<ieure>Guest21, It's barely usable for simple stuff as-is. No support for the root FS on ZFS.
<untrusem>I wanted to test a program, I made a manifest.scm with packages, but I still get a module not found error
<ieure>untrusem, Is Python in the manifest? You have to have Python and the libraries.
<untrusem>ModuleNotFoundError: No module named 'lxml'
<untrusem>lxml is also in the manifest; actually I have made a package for this, I can run that, but wanted to try it out manually 😛
<ieure>untrusem, `guix shell python python-lxml -- python3', then entering `import lxml' works for me.
<ieure>untrusem, Can you share the manifest and the command you're using to use it?
<ieure>untrusem, `guix shell -m manifest.scm -- python3' can still import lxml for me. What's the script? Are you using a virtualenv or anything like that?
<Guest21>rustyguix: I am using Fedora, which uses RAM for tmp. Therefore I installed Guix through the binary installation (the script they provide in the manual). It installs a systemd service under /etc/systemd/system/guix-daemon.service. I added the following to the service: Environment="TMPDIR=/var/tmp/guix-build". Restarted the guix daemon service and it worked flawlessly. Doing "TMPDIR=/var/tmp/guix-build guix system build ..." didn't work for me.
<ieure>untrusem, `python' is going to be the python from some other profile, which won't have those libraries. Might even be Python 2.x.
<efraim>roptat_: were you hosting the videos?
<attila_lendvai_>couldn't those videos be hosted by the project servers? are they too big? i remember failing to watch one a few days ago.
<roptat_>efraim, I did, but after a few years the server broke down
<roptat_>I have a copy of a 2022 video from Arun Isaac, "dreaming of better patch review"
<roptat_>I also found guix-days-2020-christopher-baines-guix-build-coordinator.{mp4,webm}
<roptat_>and my own presentation (maybe not the final version)
<efraim>I have a copy of my own presentation
<efraim>I think we tried to upload them to the gnu video server, if they have one
<cdegroot>Given that I'm currently downloading a kernel package at 700kbit/s... I guess there's a common theme here: "the project needs bandwidth and mirrors"?
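Guest21's TMPDIR fix can be applied as a systemd drop-in rather than editing the installed unit file; a sketch, using the /var/tmp/guix-build path from the conversation. The key point is that TMPDIR must be set in the daemon's environment, not in the client's shell, because builds run inside the daemon.

```shell
# Create the on-disk build directory:
sudo mkdir -p /var/tmp/guix-build

# Open a drop-in override for the daemon unit...
sudo systemctl edit guix-daemon
# ...and add these two lines in the editor:
#   [Service]
#   Environment="TMPDIR=/var/tmp/guix-build"

# Apply the change:
sudo systemctl daemon-reload
sudo systemctl restart guix-daemon
```

A drop-in survives reinstalling or upgrading the unit file, which direct edits to /etc/systemd/system/guix-daemon.service may not.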
<roptat_>the post advertises the chinese mirror, but it seems to be down
<ieure>cdegroot, Yes, definitely, but also I think "the project needs to lean very hard into dumping as much stuff to disk so it can be served without application code in between the webserver and disk" is the harder thing which actually matters.
<ieure>Same for stuff like CI: most page loads are returning dynamic page content, and all that stuff seems slow / prone to jamming up. It needs to be writing updates to somewhere persistent and serving out of that, instead of doing all the work on every load.
<rrobin>downloaded almost all the mp4s from webarchive, now trying to find a place to put them
<cdegroot>Almost smells like putting IPFS into the plumbing could be an idea. Although it's been a while (a decade?) since I looked at it, that sort of tech should help the project scale without begging for capacity or spending most of the funds on hosting companies
<efraim>last time I played with IPFS it was a bandwidth hog, even if no one was using the files
<rrobin>strange, I thought ipfs was opt-in in policy, as in you only mirror what you pin. Don't remember if it has bandwidth controls though
<ieure>cdegroot, civodul did some work on IPFS substitutes a while back, but it never landed. And it doesn't solve stuff like CI breaking.
<cdegroot>nope. I guess even on Guix, you still need central CI. That's hard to distribute.
<roptat_>the substitutes can be distributed, but you still need some central authority to authenticate them
<ieure>Some sort of trust relationship is required for CI/builder type machines, but the results could be distributed.
<cdegroot>(I once toyed with the idea of assigning everybody builds at random and then some voting system to certify builds... but that gets hairy quick)
<nemin>cdegroot: Sorry, I logged in recently, so I don't have half the context. Do you know about the volunteer mirrors?
<cdegroot>(so you'd have 5 random servers build the same thing, and only if 4/5 return the same data, the build is signed)
<cdegroot>nemin: I do, but - unless I've overlooked something - you need to run a build to get these substitute servers configured, and that build currently is pulling in a new kernel etc. There's probably a correct order to get substitutes and channels configured on a new box (in my current case, I'm setting up a new VM from the 1.5 ISO), but I'm not aware of it.
<rustyguix>that is what many distributed systems do, they rebuild the same thing and then reach consensus
<rustyguix>more generally, it is replicated state machines
<cdegroot>I mean, I have a big box doing nothing here most of the time. When I'm typing in IRC, I'm sure the other 47 cores can do something else ;-). And I guess I'm not the only one.
<ieure>cdegroot, I started writing a thing to help with this, but it's not working yet. But, yeah, you have to add channels+subs, reconfigure, restart guix-daemon, add packages/services from those channels, then reconfigure again.
<rustyguix>you can think of git as a state machine: say the guix master branch transitions from commit to commit, and does it when a commit gets merged because all CI tests passed, etc. So the CI phase could be handled by a decentralized network of CI builders, which would need to reach consensus at whatever percentage is prescribed (could be very high).
<roptat_>ieure, you can use --substitute-urls
<ieure>No way to avoid that unless the installer can add channels/etc when it installs, but you can at least ease things a bit so you're less likely to find yourself compiling the kernel out of nowhere.
<nemin>cdegroot: I reckon you can just prepend your mirror of choice to --substitute-urls, no? That way you don't need to wait for the build to finish and re-generate the list.
<ieure>roptat_, You have to trust the archive key first.
<roptat_>not if it distributes the same substitutes
<ieure>The only time I've had an issue is when that is not the case.
<roptat_>because then it falls back to the next substitute server, which is slow, or doesn't have the substitute at all
<roptat_>if the mirror is faster than ci or bordeaux, it's worth it. You could also use it while reconfiguring ;)
<rustyguix>efraim: untrusem: just saw rust 1.93 is now in! Amazing! Thanks!
<rustyguix>Is the pattern on team branches to force-push, and hence not keep the commit history?
<rustyguix>In other words, using the rust-team branch, say the current commit, in a channel is likely to stop working as soon as another commit makes it into the rust-team branch. Is my understanding accurate?
<ieure>rustyguix, I think some teams rebase, others merge, I don't believe there's a hard and fast rule.
<cdegroot>nemin: see? I'm dumb. I keep forgetting the commands have these command-line options, I'm so focused on keeping everything nice and tidy in my Git config repos and only running it from there. Thanks.
<rustyguix>How long would it approximately take to bootstrap everything from `hex`, all the way to current master?
<rustyguix>Sorry, by everything, I meant, say, rust 1.88, which is in master at the moment.
<ieure>rustyguix, Going to depend a lot on the hardware, but I'd guess somewhere between a few days and a week.
<rustyguix>ah, yes, say, a thinkpad lenovo x1, with some linux distro (e.g. fedora)
<rustyguix>and then, say for comparison's sake, on some relatively powerful machine -- are there some docs on what kind of machines the community uses to do these bootstrap builds?
<ieure>I don't know if anyone is doing them regularly.
<ieure>And I don't have a good sense of how scaling the hardware up would impact the performance, just ballpark extrapolating from the time to compile stuff on my ~average hardware system/s.
<hugohugo>I'm getting a "server error: 500" on codeberg when creating a P.R.
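nemin's suggestion of prepending a mirror can be sketched as follows; the mirror URL is a placeholder, not a real endpoint, and as noted in the discussion the mirror's substitutes must be signed by a key you already authorize (which is automatic when it serves the same substitutes as ci/bordeaux).

```shell
# A hypothetical volunteer mirror, tried before the default servers:
MIRROR=https://mirror.example.org

guix install hello \
  --substitute-urls="$MIRROR https://ci.guix.gnu.org https://bordeaux.guix.gnu.org"

# The same flag works while reconfiguring a fresh box, before the
# mirror is baked into the system configuration:
sudo guix system reconfigure /etc/config.scm \
  --substitute-urls="$MIRROR https://ci.guix.gnu.org"
```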
<hugohugo>Is that just me, or does anyone else have problems too?
<rustyguix>untrusem: yes, very cool indeed! Thanks for all your work!
<untrusem>I didn't do much, thanks for bearing with me and testing things
<rustyguix>does ci.guix.moe run the same test suite as the CI on codeberg?
<ieure>rustyguix, It builds packages the same as CI, if that answers your question?
<ieure>If the package has and runs tests, those will run no matter what machine is building the package.
<ieure>If you're asking about something like the system tests, I'm not sure. But any test that is part of a package will get run according to the package definition. You can't change whether/how they run without changing the derivation, so it's not really possible to build them without tests.
<untrusem>Has anyone tried the jujutsu vcs for contributing to guix? As it doesn't have a branches feature, you can work on different branches at the same time; I wonder, would that eliminate the need to do `make` every time one changes branches?
<ieure>I mean, you can. But since it's a different derivation, it won't substitute for one with tests enabled.
<ieure>untrusem, Never tried it. Git worktrees (one worktree per branch) are another approach which helps with that problem.
<untrusem>then I will try it tomorrow and report back
<clamshell>How would I make guix.scm also provide CA certificates and update those in a similar way as update-ca-certificates would on e.g. debian?
<rustyguix>ieure: I meant whatever CI is being run on codeberg, before anything is merged into a team branch or master
<ieure>rustyguix, Same answer: packages are built according to their definition, you can't skip tests without changing the definition.
<rustyguix>anyone know what the rough timeline is for rust 1.89 and 1.90 to make it into master?
<meatg1rl>hi, is r-miniui broken for everyone or is it just me? guix package tells me the substitution fails, as it gets a corrupt input by restoring the archive from the socket.
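The worktree approach ieure mentions can be sketched self-contained; the repository here is a throwaway stand-in for a real Guix checkout, and the branch name is taken from the conversation.

```shell
set -e
# Stand-in repo; with a real Guix checkout you would skip this setup.
tmp=$(mktemp -d)
git init -q "$tmp/guix"
cd "$tmp/guix"
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m init
git branch rust-team

# One worktree per branch: each directory keeps its own build tree,
# so switching branches no longer forces `make` to rebuild everything.
git worktree add -q ../guix-rust-team rust-team
git worktree list
```

With a real checkout you would run `./configure && make` once per worktree and then just `cd` between them instead of `git checkout`.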
<meatg1rl>besides, doing r-stats on postmarketOS thanks to the guix package manager is super fun, and the pdf output is soooo nerdy.
<meatg1rl>my main issue is that I'm trying to install the r-questionr package, so I'm trying to install it thanks to r-guix-install
<ieure>meatg1rl, I can't reproduce your issue with r-miniui substitution failing.
<lilyp>rustyguix: there's five merge requests pending and none of them are rust-team atm