IRC channel logs
2026-01-28.log
<muaddibb>Where does Guix store the main repository? Like, if I'm searching for a package and `guix package -s` says it's defined in gnu/packages/something.scm, how do I find something.scm? Do I have to clone the repository separately?
<mange>The easiest way to see the definition is "guix edit $packagename", which will open it in $EDITOR.
<mange>Or $VISUAL, it turns out. If you want to print the path you can do something like "VISUAL=ls guix edit $packagename".
<oliverD>Gnome doesn't update icons unless I restart. Where would I ask to add this feature?
<FuncProgLinux>I think you are able to restart the shell using Alt+F2 and then typing "r" at the prompt
<oliverD>Also I keep getting an error from the glibc package saying: fatal error: linux/errorno.h: No such file or directory
<mange>Where are you getting that error?
<oliverD>Oh, I just realized I get it when I compile with su but not if I compile my project normally
<FuncProgLinux>What's our take on third-party packages for desktops? I.e. Plasma applets NOT made by KDE themselves, GNOME extensions, etc.?
<mange>I don't see any reason why not? There are a bunch of gnome-shell-extension-$x packages already in Guix.
<FuncProgLinux>What if those third-party packages are basically "abandonware but everywhere" packages?
<mange>According to the manual, packages which are unmaintained can be removed, so it really depends on that.
<FuncProgLinux>Ubuntu MATE and Linux Mint are the two distributions that offer the most MATE-compatible software. Ubuntu MATE maintains "mate-tweak", which is in a lot of distributions (and basically abandonware, since it hasn't been updated in a long time). Many of the extensions Ubuntu MATE provides are basically dormant. The forums report that both MATE Desktop and Ubuntu MATE have a volunteer shortage.
<mange>If they're unmaintained, then we probably don't want them in a distribution. If they're maintained (even if nothing's changed for a while) then I think it's fair game.
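mange's `VISUAL=ls` trick from the start of this log can be sketched as follows (requires a Guix install; `hello` is just an example package):

```shell
# Open the definition of the hello package in $VISUAL (or $EDITOR):
guix edit hello

# Print the defining file instead of opening it: guix edit simply
# invokes the editor on the file, so substituting a non-interactive
# command like ls shows the path without opening anything.
VISUAL=ls guix edit hello
```

After a `guix pull`, the file typically resolves into the channel checkout cached under ~/.cache/guix/checkouts, so there is no need to clone the repository separately just to read definitions.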
<FuncProgLinux>On top of that, those particular pieces of software weren't exposed to the v1.28.0 APIs until recently, which means they are running 1.26.0, which is from 2021 I believe.
<FuncProgLinux>Because Debian didn't accept MATE into unstable until recently. So yeah.
<FuncProgLinux>It's so sad, because Linux Mint has libadapta and libxapp. Those apps look integrated in XFCE/MATE and Cinnamon. You could replace MATE's propagated inputs to swap pluma for xed. That way everyone is happy if you don't like libadwaita.
<mange>Okay, well, it sounds like the issue here is around "is this software maintained" rather than "are third-party desktop packages welcome". It sounds like you have a better handle on that than me, so I'll leave it up to your judgement here.
<apteryx>lilyp: should GTK_DATA_PREFIX be a search path?
<apteryx>using themes in a pure environment/container is not fun :-)
<abbe>it's late for me, so I'll only follow up in a few hours. thanks in advance.
<JazzJackalope>Are there any issues I should be aware of when installing a DE on a non-Guix distro?
<oliverD>Just wondering, is there a cheap card that is known to work with Guix? (I went down to my local hardware store and, as expected, none of the cards were listed on h-node.)
<daviid>apteryx: if you search, a DuckDuckGo search for example, you'll have the answer to your quiz ... fwiw
<ieure>sneek, later tell untrusem Do you want to do the LibreWolf 147.0.2 update? Should be simple.
<abbe>thanks ieure, and sorry for the noise.
<lilyp>apteryx: dunno, what'd be the use case?
<lilyp>feel free to tag me in an MR if you feel strongly about this search path
<change>I'm starting to get a bunch of problems in lsp-mode since I installed it from Guix instead of package.el. Can someone help?
<change>how often are people here in this irc?
<efraim>now is about the time people start to show up. I don't use emacs so unfortunately I don't have any suggestions
<change>very sad, maybe I'll keep this buffer open and wait for people
<change>csantosb: I had Ubuntu and installed Guix preserving my /home, so at first, since .emacs.d was untouched, things seemed to work. But then I decided to use lsp-mode inside a C file and got a vfork error, so I set all the :ensure t to :ensure nil in use-package and went to my home-config.scm file to add all the packages I use in emacs to the list. Now when I open emacs, I get messages like this: Unable to activate package ‘lsp-ui’.
<change>Required package ‘lsp-mode-6.0’ is unavailable, and the lsp-mode installed from Guix seems to be version 9. IDK what the problem is, so I came here to ask for help.
<csantosb>So using Ubuntu, and Guix as a foreign package manager, right? Same as me (Arch in my case).
<csantosb>Then, you install emacs using Guix, along with all emacs packages, correct?
<change>No, I had Ubuntu first, then removed it. It's a full Guix install. I just wanted to mention that I kept the /home from Ubuntu.
<csantosb>Can you share your `guix package --list-installed | grep emacs` somewhere?
<efraim>you said you kept your /home from before; do you have a mix of Guix-installed emacs packages and package.el-installed packages?
<efraim>you should use a paste site like paste.debian.net
<csantosb>Even better, `guix package -p $GUIX_PROFILE --export-manifest > /tmp/emacs.scm`, we will be able to reproduce your setup
<change>csantosb: I can't, my $GUIX_PROFILE env variable is empty
<Rutherther>change: you seem to be missing the package 'emacs' in your home profile
<efraim>for rust you probably also want cargo, as rust:cargo. to do that you'll need to change specifications->packages to specifications->packages+output
<Rutherther>Try looking into guix home describe --list-installed. In case you are missing it, add it and then relog
<efraim>since it's a home config we should be able to test it with guix home container
<light`>I'm the change guy, my buffer just died
<csantosb>change: `guix shell -C --preserve='^TERM$' -m manifest.scm -- emacs --batch "(progn (require 'lsp))"`
<csantosb>I put all of your `my-emacs-package-list` in manifest.scm
<light`>by manifest.scm, you mean the home configuration file, right?
<efraim>in case you missed anything when the buffer died
<csantosb>From emacs-pgtk to emacs-macrostep-geiser
<light`>so like I keep the list definition in that file?
<csantosb>In manifest.scm you only put `(specifications->manifest (list "emacs-pgtk" ... "emacs-macrostep-geiser"))`, replacing ... with the list of your emacs packages
<csantosb>`guix shell -C --preserve='^TERM$' -m manifest.scm -- emacs --batch "(progn (require 'lsp))"` will surface any problem with your emacs setup
<quassel-guy>I gave up on emacs erc for now; the buffer should not die anymore
<csantosb>Executing this emacs config in a shell container, I cannot reproduce the problem
<quassel-guy>I opened it using: `guix shell --container --pure --no-cwd --network --preserve='^WAYLAND_DISPLAY$' --preserve='^XDG_RUNTIME_DIR$' --expose=$XDG_RUNTIME_DIR --manifest=manifest.scm -- emacs`
<csantosb>Then I executed the previous command and launched emacs with no issue
<quassel-guy>in the next one I added clangd as well, to make lsp work.
<quassel-guy>yeah, I'll send it somewhere and test... yours works with lsp with that manifest, right?
<csantosb>I just updated emacs-lsp-mode, by the way; the previous one was 8 months old.
<quassel-guy>maybe it's my .emacs.d, because it has gone through like 4 distros now
<csantosb>To the latest commit; the last tag (9.0.0) is like 2 years old.
<quassel-guy>like you didn't guix pull? it automatically pulls the latest after you reconfigure, right?
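csantosb's manifest.scm setup can be sketched end-to-end like this (package list illustrative, trim to your own; note that emacs needs `--eval` to evaluate a form given on the command line, which the shorthand quoted above omits):

```shell
# manifest.scm: just a list of package specifications.
cat > manifest.scm <<'EOF'
(specifications->manifest
 (list "emacs-pgtk"
       "emacs-lsp-mode"
       "emacs-lsp-ui"))
EOF

# Load lsp in batch mode inside an isolated container; a missing or
# version-mismatched package surfaces as a load error right here.
guix shell -C --preserve='^TERM$' -m manifest.scm -- \
  emacs --batch --eval "(progn (require 'lsp))"
```

Because the container sees only the manifest's packages, this separates "my Guix packages are wrong" from "my old .emacs.d is interfering".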
<efraim>the dynamic-wind tests in guile-fibers were added after guile-fibers-1.3, so that's why that test only starts to fail on 1.4
<futurile>Rutherther: I emailed guix-devel about the Nix dev flow thing
<untrusem->futurile: where is the public archive for the guix mailing list?
<sneek>Welcome back untrusem-, you have 1 message!
<sneek>untrusem-, ieure says: Do you want to do the LibreWolf 147.0.2 update? Should be simple.
<danlitt>Hi! I'm having some trouble with icecat extensions. When I go to install e.g. uBlock Origin (a GPL extension) from https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/ I get the error "Installation aborted because the add-on appears to be corrupt", whether I click "Add to Firefox" or download the xpi file and install it that way. Has anyone seen anything similar / know how to get around it?
<emacsomancer>does anyone have good 'recipes' for running icecat and ungoogled-chromium (under Wayland) in a guix shell container, restricting what directories are accessible to the browser but allowing access to fonts?
<futurile>emacsomancer: it was a while ago so it may not still work
<noe>emacsomancer, for icecat “guix shell --container --network --no-cwd --share=$HOME/.mozilla/icecat --preserve='^WAYLAND_DISPLAY$|^XDG_' --expose=$XDG_RUNTIME_DIR --expose=/dev/dri icecat -- icecat” works
<futurile>emacsomancer: if you are worried about security you are probably better off using flatpak than guix shell - guix shell wasn't really designed for security as such
<ekaitz>futurile: how are you? getting better?
<emacsomancer>futurile: then maybe bubblewrap rather than guix containers?
<futurile>emacsomancer: yeah, I don't have any direct experience with bubblewrap but it seems like the right kind of choice
<futurile>ekaitz: yeah, a bit bored and very tired all the time. Pain is down, which is great, and I see the surgeon this week, so hoping that it's healing well.
<futurile>ekaitz: I see you're still streaming - what are you working on?
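For the bubblewrap route emacsomancer mentions, a rough, untested sketch; all paths and flag choices here are assumptions for a Guix System under Wayland, so adjust the bind mounts to your setup:

```shell
# Sandbox icecat: expose nothing of the host except the store, the
# system profile (fonts live there), the icecat profile directory,
# and the Wayland socket.
bwrap \
  --unshare-all --share-net \
  --dev /dev --proc /proc --tmpfs /tmp \
  --ro-bind /gnu/store /gnu/store \
  --ro-bind /run/current-system/profile /run/current-system/profile \
  --bind "$HOME/.mozilla/icecat" "$HOME/.mozilla/icecat" \
  --ro-bind "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY" \
            "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY" \
  --setenv WAYLAND_DISPLAY "$WAYLAND_DISPLAY" \
  --setenv XDG_RUNTIME_DIR "$XDG_RUNTIME_DIR" \
  icecat
```

GPU acceleration would additionally need something like `--dev-bind /dev/dri /dev/dri`, mirroring the `--expose=/dev/dri` in noe's guix shell recipe.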
<cbaines>futurile, I'm around if you want to chat more about build farms/substitutes
<ekaitz>futurile: GNU Mes... I'm rewriting its core as a bytecode interpreter
<cbaines>I was considering trying to explain how substitutes are provided, but it was a bit complicated to write down in an email
<futurile>cbaines: yeah, I don't understand what happens after the build farm 'builds' something and how it becomes available as a 'substitute' for end users
<cbaines>so for ci.guix.gnu.org, things end up in the store, then guix publish makes the substitutes available
<ekaitz>maybe those things should be written *somewhere*
<cbaines>Cuirass builds most things, it has its own mechanism for offloading, but things can also be built directly through the guix-daemon (and maybe offloaded to another machine)
<cbaines>for bordeaux.guix.gnu.org, the build coordinator handles performing the builds, the nar is generated by the agent (the agents run on the build machines), and then sent back to the coordinator on bayfront
<cbaines>the build success hook for the coordinator then generates the signed narinfo and hands this off to the nar-herder
<cbaines>have you encountered narinfos, futurile?
<cbaines>so taking a store item like /gnu/store/cs56i9digj9qg1bd383cmxc6xrfpdn9n-hello-2.12.2
<cbaines>the metadata (narinfo) and data (nar) are separated; when guix says it's looking for substitutes, it's requesting narinfos
<cbaines>guix publish offers a simple way of making store items available as substitutes
<cbaines>and the bordeaux build farm uses the nar-herder, which is more a way of managing a bunch of narinfos/nars, while also making them available
<cbaines>it's the nar-herder that supports the mirroring functionality and moving nars between machines
<futurile>you run/ran some mirrors, right - so to run a mirror you run <something> that downloads narinfos, and I guess that tells you which 'nars' to also grab?
<cbaines>the mirrors that I run are effectively reverse caching proxies
<cbaines>the nar-herder mirrors all the narinfo information, which is in a SQLite database, then nginx handles the nars, either serving from the local cache, or going to bordeaux.guix.gnu.org
<cbaines>but yeah, you can serve substitutes just by serving the narinfo and nar files, there's nothing fancy about it
<futurile>cbaines: we get users complaining about 'guix pull' speed, right? Some of that might be local processing, but some might be download speeds. Do we think that having 'mirrors' would help with that? AFAIK users have to manually define their substitute server; we have no way to redirect them.
<cbaines>there's quite a lot of work done locally before guix looks for substitutes
<futurile>but at some point it starts downloading substitutes, and I've seen users say they get slow downloads. One thing is we could redirect the user to download from a closer mirror?
<cbaines>yeah, something to better serve people with slower connections to the main servers would be good
<cbaines>since I'm not even sure the things that the bordeaux build farm builds for guix pull are even usable for end-user substitutes
<futurile>cbaines: OK, so going back. For either build farm the outcome of a successful build is that a substitute is created. And it's correct to say that we build each commit and all the dependencies for that commit, right? So that's why you're saying if we reduced the packages' dependencies we'd avoid "churn".
<futurile>cbaines: and the impact is that with large dependencies for each package, we land up doing big 'rebuilds' for each package update. So if I update 'perl' I rebuild everything; if I commit an update to a connected package that will also rebuild everything. There's no way to not rebuild 'all dependencies' for each commit.
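The narinfo files discussed above are small plain-text key/value documents; here is a sketch with made-up field values (modeled on the `hello` store item cbaines mentions, but the hash, size, and URL are illustrative) showing the metadata/data split:

```shell
# Illustrative narinfo; the field names follow the real format,
# the values are invented for this example.
cat > /tmp/hello.narinfo <<'EOF'
StorePath: /gnu/store/cs56i9digj9qg1bd383cmxc6xrfpdn9n-hello-2.12.2
URL: nar/zstd/cs56i9digj9qg1bd383cmxc6xrfpdn9n-hello-2.12.2
Compression: zstd
NarSize: 241304
EOF

# A substitute client first fetches the narinfo (the metadata), then
# the nar (the data) at the URL it points to:
awk -F': ' '$1 == "URL" {print $2}' /tmp/hello.narinfo
```

So "serving substitutes" really is just serving these two kinds of files over HTTP, which is why a plain reverse caching proxy works as a mirror.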
<cbaines>yeah, take sqlite for example: we've got a sqlite package with a lot of dependents, and a sqlite-next with fewer
<cbaines>this allows for infrequent updates to sqlite when there are other big changes happening, and for some packages with fewer dependents to use the newer sqlite-next
<cbaines>(it should probably be sqlite/pinned and sqlite, but that's just a naming thing)
<cbaines>having just one sqlite would either lead to too much churn, or just mean an old version most of the time
<cbaines>something I'd like to see is some kind of measure for this; that way we could have something similar to an error budget, a churn budget
<cbaines>this is python-related I guess, but 2/3 of packages are affected, which seems excessive to me
<elevenkb>Hi, when I try to add a rootless-podman-service-type service to my OS, newuidmap doesn't work unless I use my user ID 1000 instead of my username elvenkb.
<futurile>cbaines: we also don't split packages in the sense that in Debian/Ubuntu we used to have `lib` and `lib-devel`; end-user-facing packages often only needed the `lib` package. AFAIK we have the ability to split a package variable into multiple outputs, but we don't use those as inputs in packages. Generally our packages are 'big', so you land up with a lot of connections in the dependency graph.
<cbaines>it looks like all common lisp, haskell, java, julia, ... packages are also being rebuilt on python-team, which defeats the point of having more targeted branches
<cbaines>we're just doing core-updates, but calling it python-team
<futurile>cbaines: how else can we reduce the graph then? And what would we do with a 'churn budget'? (only accept updates that reduced the churn budget?)
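On "some kind of measure for this": `guix refresh` can already report a change's rebuild impact, which could feed the churn budget idea; a sketch using sqlite (assumes a Guix install):

```shell
# List every package that a change to sqlite would trigger a
# rebuild of; the summary line reports the dependent count.
guix refresh --list-dependent sqlite

# The same query works for several packages at once, giving a rough
# churn number for a whole patch series.
guix refresh --list-dependent sqlite libffi
```

Comparing those counts before and after a restructuring (like the sqlite / sqlite-next split) would quantify how much churn the split actually avoids.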
<cbaines>I guess the idea with a churn budget is that it would be a tool to help people avoid doing what's being done on python-team
<cbaines>say we set a limit of 30000 changes a month, then python-team is 2/3 of that budget
<cbaines>I'm sure there's a more useful way to group changes that results in getting more updates out to users, with fewer affected packages
<futurile>cbaines: Maybe that's where I'm missing something. For each package update I think we rebuild the whole graph. So the only optimisation to how long the builds take is to reduce the graph, right? It doesn't matter how I group my commits, each one will atomically cause a rebuild.
<futurile>cbaines: so for example if you want to rebuild perl-tex-blah and perl-tex-blah2, you basically land up pulling in all of TeX (which is a monster) and then you're going to rebuild it twice.
<cbaines>futurile, say you have 10 changes: 5 are minor changes to leaf packages with no dependents, 5 are changes to packages with 1000s of dependents. If you push all the 5 leaf packages together, then all the 5 changes which impact 1000s of packages, that's quite efficient: you only rebuild most things once.
<cbaines>if, however, you push 1 change that impacts 1000s of packages along with one change to a leaf package, and then do that 5 times, you're going to end up rebuilding quite a lot of things multiple times
<cbaines>what we're doing is more of the latter, just scaled up
<cbaines>so you want to group testing and pushing changes based on the packages they affect
<cbaines>e.g. if a change affects all of python, texlive and perl, then you want to group it together with other changes that affect those packages too
<cbaines>for a practical example of this, I was looking at the ruby-team branch last year, and with a few minor tweaks got the impact down from over 10,000 packages affected to just over 1000 https://issues.guix.gnu.org/78676#12
<cbaines>obviously those changes I removed may still want making, but they can be grouped together with other changes that affect over 10,000 packages (e.g. core-updates)
<futurile>cbaines: OK. I'm totally not understanding that. I thought that for each commit 'it' analysed the dependencies and rebuilt everything? So if it's your 5 small and 5 large change commits, the order doesn't make any difference? How do we land up with "you only rebuild most things once"?
<cbaines>the build farms don't look at each commit, just each state of the repository, so if you push 20 commits, it's just the new HEAD that's built
<cbaines>it's not the order, but how they're grouped
<futurile>cbaines: ah, OK. So I group my 10 changes as one push. It sees the new state of the repository and rebuilds at that point.
<cbaines>so taking python-team as an example again, if it rebuilds all of texlive, and we have a texlive-team branch that also rebuilds all of texlive, it would be more efficient to move the changes affecting texlive from the python-team branch to the texlive branch
<cbaines>if you don't, you end up rebuilding all of texlive when the python-team branch is merged, then all of texlive again when the texlive-team branch is merged
<futurile>cbaines: OK, got it. And making points of common linkage small/pinned packages would be a good way to prioritise
<cbaines>and this feeds back into build farm speed. Testing python-team would go faster if it was a more reasonable ~5000 packages rather than ~20,000, and it's not just python-team; lots of the -team branches are similar
<futurile>cbaines: I guess the question is whether [team] branches will optimise their flow, OR whether we should just assume that we need to build a lot faster so teams get instant feedback
<futurile>cbaines: I personally think it's an interconnected problem - but trying to have a complex conversation about the size of the archive, the dev flow, the package dependency tree, and the build infrastructure isn't that fruitful over email. Maybe something you guys can push on at Guix Days.
<cbaines>it's very interconnected, and you're trying to balance multiple factors
<futurile>yeah, my primary 'factor' (aka bias) is the feedback that users want packages updated faster. For that I'd be willing to 'focus' on a subset of packages and throw build resources at it on the important architectures. At the moment I think the 'dev' experience of getting feedback on builds (success or failure) is not that great.
<futurile>cbaines: when you push a commit you used to be able to look it up on QA and you can look on Cuirass. There's no command-line tooling though, right? (I kinda wander around on some pages a bit)
<ieure>futurile, That's definitely a problem, but I think in most cases the real bottleneck is dev resources. Like, I don't think we have Python 3.14 sitting waiting on CI, right? It's just not packaged in the first place.
<sneek>ieure, untrusem- says: yes I will
<cbaines>futurile, look at what specifically for a commit?
<ieure>futurile, A secondary issue is communication. I see people come in asking about foobar v69.42.0 or whatever, and someone who knows is like "that's in foobar-updates". As a user, there is no easy way to get this information.
<futurile>cbaines: so for a commit/push I want to see if the build was successful, and if that commit has broken any other packages.
<ieure>vs. like Debian, where you can see what's in unstable vs. stable, so you have an idea of what's going to land.
<cbaines>futurile, you can kind of do that, but I'm much more interested in trying to test things before they're pushed
<ieure>sneek, later tell untrusem Great! Let me know if you need any help.
<futurile>cbaines: yeah, so I do that - I build the dependents. But if I'm going to update 'perl' that means waiting while my laptop builds everything for two days.
<futurile>cbaines: and sometimes there seem to be odd interactions with things where you land up causing a rebuild you weren't expecting. I personally would love to be 'notified' in some way if things are bad.
<cbaines>futurile, right, which is where I think having automated patch review is key. Testing things locally can be useful, but it's also hard: are you building perl for aarch64-linux as well locally?
<cbaines>this is what QA was doing: the data service was checking what the effects of some patch series were, and then the affected packages were being built
<yelninei>also, perl rebuilds basically everything, as it is used in commencement.scm
<cbaines>we lost that with the switch to Codeberg, but I think it's important to get that functionality back
<futurile>yelninei: yeah, I know. It's a bad example, as I'm waiting on you ;-)
<futurile>cbaines: ok, but basically you are the only person who's working on this, so we have a bus-factor. I mean, if you're happy with carrying that weight, more power to you - but it seems like we're just burning you out on it constantly. And no-one else has the time/energy (or in my case capability) to help!
<cbaines>futurile, that is an issue, and I don't think it's just the bus-factor; I think I consider testing patches/branches to be more important than other people consider it to be
<yelninei>futurile: My branch is waiting on a gash update to unlock newer coreutils. But the perl update breaks curl and texinfo and probably a lot more I don't know yet
<futurile>cbaines: what is important to you in testing patches/branches that you don't think other people consider important - expand on that a bit?
<futurile>cbaines: my focus would be that we shouldn't ship anything to a user that (a) doesn't build on our primary architectures or (b) causes a regression in the build status of other packages
<cbaines>futurile, I don't think it's specific. I think some people consider local testing to be sufficient, but I don't see how that can work when Guix purports to support multiple architectures, and that's not even considering things like issues because of filesystems, linux-libre versions or timebombs.
<cbaines>maybe some people don't get bored when reviewing patches while checking the same things over and over again, but personally that's not how I like to spend my time; I'd much rather use automation to take care of the repetitive parts of patch review
<cbaines>and what we had before with the qa-frontpage was even better than Nix in some ways
<futurile>cbaines: I would love to automate patch review and get to where Nix is on this. I'm fairly unconvinced by the project's "rewrite X thing in Guile" - I don't really see the value in writing our own build daemon, or CI trigger, etc. It just seems like a super long road, potentially risking burn-out / bus-factor. I mean, build daemons and CI systems are so standard. But equally, this is a bunch of volunteers and it's none of my business what people spend their spare time on - so no criticism / insult intended on all your hard work on QA, and I've been very happy to benefit from it
<cbaines>I don't think the builds Nix does for pull requests go to provide substitutes
<cbaines>(although there are some good reasons to be careful about this)
<futurile>cbaines: I mean, to really start a fight - I wouldn't even have rewritten the inetd daemon hah hah hah hah
<cbaines>futurile, I think it's something that has to be considered on a case-by-case basis. In the case of tooling for CI, if I knew of something that wasn't riddled with containers, then that would be great
<cbaines>Laminar is pretty good, and maybe I should look at using that again
<futurile>cbaines: heh heh, "riddled with containers" - ah, the state of the 'professional dev' sphere right now
<futurile>cbaines: what's the status of getting QA back towards where you had it before? Is it do'able or are you blocked?
<cbaines>it's back in the "how long is a piece of string" territory. I'm reluctant to try and bodge it to fit Codeberg, which means a bigger rethink to shape it into something that wouldn't look out of place when used with Codeberg/GitHub
<cbaines>I spent quite a bit of time trying to get/keep things working, in the hope that people getting value from QA would help make it sustainable, but that didn't work
<elevenkb>Is anyone using rootless-podman-service-type without any problems?
<sneek>Welcome back untrusem, you have 1 message!
<sneek>untrusem, ieure says: Great! Let me know if you need any help.
<futurile>cbaines: do you think doing the 'mirrors' thing would be worthwhile for improving users' experience? We don't really have a way to measure it AFAIK, but it seems like something we could do if we have some budget
<cbaines>this is something hard for me to test, because I've got a good connection to bordeaux.guix.gnu.org
<cbaines>I'd welcome the Guix Foundation hosting a set of mirrors
<futurile>cbaines: ok, I'll open an issue and we can discuss it there a bit
<muaddibb>cbaines: how expensive is it to run a mirror?
<futurile>cbaines: I really want to prioritise improvements that will be visible to the users who've stepped forward to donate. So I've been trying to think of things that would make a difference both now and maybe longer term
<cbaines>muaddibb, I'm paying around 10€ a month for the server I think, it could probably be cheaper
<cbaines>this isn't something that's useful for people to do though, I think hardly anyone uses the mirrors I run!
<futurile>cbaines: we would really need something on the user's end to pick the mirror that appears to be best for them, unless they override it
<gabber>(how) can I find the commit that builds both of two packages (on aarch64)?
<ieure>gabber, I don't believe there's any mechanism to tell you why a package is getting rebuilt. And for two packages, it may be different causes.
<Kabouik>Has anyone continued working on the Waydroid patch? Last time I heard about it was before the Codeberg migration, and I see no related ticket on Codeberg.
<untrusem>Kabouik: I use waydroid, but through some hacks
<loquatdev>Is there a way to use `guix import crate` alongside its insert flag while still outputting the changes to the terminal? I'm packaging a rust application in my own channel and I'd like to avoid defining several crates. That, and I'm just a little confused about how exactly this command works.
<Rutherther>loquatdev: you have to declare all the dependencies, which usually are several crates (or more :) ). So I don't really understand what your goal could be... as for outputting to the terminal, doesn't it do that by default if you do not give it the --insert flag? But again, you typically want it to insert into your rust-crates.scm file so that you actually get the definitions for cargo-inputs
<Kabouik>Interesting, untrusem-. Are those hacks publicly available on some git repo somewhere?
<untrusem->I have just migrated some python package to the new build system
<Kabouik>Thanks untrusem-. I'll give it a go soon.
<loquatdev>Rutherther: Apologies for the delay. I'm trying to use upstream crate definitions when available and only generate new ones to store in my own channel, so that any crates with strange packaging quirks are pulled from upstream.
<attila_lendvai>i'm trying to run a binary in guix shell -F and even though libbz2.so is there in /lib, the binary doesn't execute and ldd says libbz2.so is not found. any hints?
<loquatdev>I thought that if I simply generated a new rust-crates.scm file for my channel, some of the crates might be broken, since some of them seem to depend on packages from rust-sources. Am I thinking about this the wrong way?
<Rutherther>loquatdev: why does it matter if they depend on rust-sources? I don't understand
<loquatdev>I'm trying to avoid redefining crates that are already defined upstream. I'm sorry if I'm not making sense. I'm trying to figure this out for the first time.
<Rutherther>loquatdev: there is no point in doing that, I would say. What's your goal? Saving a few (kilo)bytes of text?
<loquatdev>Don't some crates have certain packaging quirks that are properly handled upstream? Or is that not how it works?
<Rutherther>loquatdev: if it's a workspace, it is indeed managed separately and you can just refer to (gnu packages rust-sources) if that is the case
<Rutherther>or you could re-export the ones from (gnu packages rust-crates) if you prefer that
<loquatdev>I ended up just copying the rust-packages.scm from upstream and running the import command on it. My rust application compiled with no issues. I also tried just using the import command on a skeleton scm file and my package still compiled. I suppose I was just overthinking it.
<loquatdev>On an unrelated note, how do y'all recommend picking commits to update to? I'm setting up some system that will pull/reconfigure on an interval and I'm wondering if there's an established method of picking commits.
<parra>ieure: I discovered the issue: the mechanism I'm relying on for fixing the commit is completely wrong
<parra>I was relying on guix ci but the output is total garbage, it sometimes returns commits from 2020 XD
<parra>maybe I'm not understanding it properly, but I'm gonna try to find an alternative
<parra>I had been building riscv for 12h before it failed, and I checked the commits and they were nonsense
<parra>I was downloading it from here:
<ieure>parra, Not sure how this mechanism works; maybe you're getting the latest eval for all specifications?
<ieure>That definitely wouldn't be what you want, and wouldn't increase monotonically.
<parra>exactly. I thought of this mechanism: pass a channels.scm without a commit, do a guix pull, and then fix the commit to the latest pulled
<parra>do you think it's a good solution?
<parra>or is there a better way to generate the channels.scm with a fixed commit at the current one?
<parra>oh, guix describe has that option
<ieure>Yes, that's the right thing.
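The pin-to-current-commit workflow parra converges on can be sketched like this (assumes a Guix install):

```shell
# After a guix pull, record the exact channel commits in use:
guix describe --format=channels > channels.scm

# An unattended job can then build or reconfigure against that pin,
# reproducing exactly the pulled revision:
guix time-machine -C channels.scm -- describe
```

The other half of the problem is picking revisions that CI has already built; `guix weather` can check how many substitutes are available before committing to a pin.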