IRC channel logs
2025-03-12.log
<apoorv569>OK, if I remove install-plan it installs it.. and it seems like it starts from the `build` dir.. that's why it can't find the node_modules dir..
<apoorv569>why does it start from the build dir? and not the root of the extracted tarball?
<lfam>anticomputer: Thanks for writing everything up. I'll send it to the bug ticket for you. By the way, there's no "set up" for sending messages to debbugs. If you have email, you just send a message to NNN@debbugs.gnu.org, where NNN is the ticket number
<eikcaz>lfam: btw I think my email made upgrading sound harder than it is. 99% of the time the only thing that will happen is their device ID changes exactly once. Moving things from ~/.config/syncthing to ~/.local/state/syncthing will prevent the ID change. Otherwise, steps to fix are obvious (guix complains about a misspelled record name).
<eikcaz>^ even record names being "misspelled" is unlikely
<eikcaz>(I'll watch irc logs like a hawk until the patch goes through)
<potatoespotatoes>how do you pin a package definition to a versioned dependency? Can someone point me to an example of this?
<lfam>potatoespotatoes: I'm not quite sure what you're asking. Is it like, you have package foo that depends on bar, and you always want foo to depend on bar-2, rather than bar-1, bar-3, or any other version?
<lfam>eikcaz: I got your email with more detail about the upgrade process. Are you familiar with Guix news? The mechanism that pops up after `guix pull` and gives information about changes that might be interesting?
<potatoespotatoes>lfam: exactly. I am trying to build foo which depends on bar-1, but only bar-2 is in guix-devel and the derivation is very complicated, so I can't just inherit and downgrade
<lfam>I see. It sounds like you know what you would need to do. Either make a new package definition that inherits from bar-2 and defines bar-1, or use Guix's package transformation options to try automating that action. But Guix can only change the version for you; it cannot make complex changes to the package definition, if they are required, and it sounds like they are.
<lfam>"Guix inferiors allow you to achieve that by composing different Guix revisions in arbitrary ways."
<potatoespotatoes>ah! that sounds about right. I was just looking into inferiors right before you mentioned them
<potatoespotatoes>there's something weird about the inferiors thing I'm doing, but I might ask in a minute -- lots of things trying to build right now
<lfam>Alright! I've never actually used inferiors, but of course this is the place to ask about them. Other people are skilled at using them. This channel is most active during the European day
<potatoespotatoes>I started building the inferior package and it took way too long, so I just spiraled in a million ways to do a bunch of package variants.
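For reference, a minimal sketch of the "inherit and change the version" approach lfam describes above, using the chat's hypothetical foo/bar names; the URL and the zeroed hash are placeholders, and bar-2 stands for the package variable already provided by Guix:

    (use-modules (guix packages)
                 (guix download))

    ;; Hypothetical downgraded variant of bar-2; swap in the real URI and
    ;; sha256 of the bar 1.0 release tarball.
    (define-public bar-1
      (package
        (inherit bar-2)
        (version "1.0")
        (source (origin
                  (method url-fetch)
                  (uri "https://example.org/bar-1.0.tar.gz")
                  (sha256
                   (base32
                    "0000000000000000000000000000000000000000000000000000"))))))

As lfam notes, this only helps when the older version builds the same way; anything beyond a version/source change has to be edited in the inherited definition by hand.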
<eikcaz>lfam: I'm familiar with the existence of Guix news, but I've never looked into it. Do you want me to figure out how to add an appropriate notification, or is that an easy task for you?
<lfam>We should try to write a news entry about the incompatible upgrade of the service that will help users who might be affected. We need a list of concrete steps that users can take to perform the upgrade
<lfam>The news entry can link to something comprehensive on the mailing list (like your recent message), but I'd like to make the news entry as direct as possible.
<lfam>Actually, now I see you wrote something like that here: <https://issues.guix.gnu.org/76379#4>. But for some reason, that text did not complete the journey into my Git commit when I applied your patch
<lfam>But still, I think we can make it more clear and concise
<emacsomancer>(just sent a message to help-guix, but on the off-chance:)
<emacsomancer>on a newly re-installed system, I'm getting an error with `guix home reconfigure ...`
<emacsomancer>specifically: `guix home: error: open-fdes-at: Not a directory`
<emacsomancer>tried setting verbosity and debug levels, but these didn't produce any more information on `open-fdes-at`
<anticomputer>lfam: yeah I did that as well as including the bug# in the topic per the mailing list instructions, but things never arrived on-list, I need to untangle my old gnus still, I'm using a webmail client which presumably is doing something mailman doesn't like with the message headers
<lfam>anticomputer: Slow day for the mailing list. Your messages should be delivered now
<anticomputer>lfam: d'oh, all three iterations arrived at the same time just now, well, crud.
<emacsomancer>lfam: that doesn't actually illuminate what file path it's unhappy about, or how to get guix home to let me know this
<lfam>Yeah, sorry I can't help more, emacsomancer
<lfam>emacsomancer: Regarding "make Guix Home expand on which thing is not a directory", I would try running the command again with strace: `strace -f -e abbrev=none guix [...]`
<lfam>You should be able to look for the error string in the output, and then figure out which lookup failed
<wakyct>emacsomancer: might you just have some parens mixed up so the dir argument is not where it's supposed to be?
<eikcaz>lfam: trying to pin down a news entry is difficult because of the complexity of covering all possible scenarios. See my latest email. (brb 15 min)
<lfam>eikcaz: That's pretty good. I'll work on it a bit and get it in
<meati>What should I put in the shebang for scripts on guix?
<meati>e.g. the guile reference manual suggests #!/usr/local/bin/guile -s, but obvs guix doesn't have that
<lfam>You can use #!/usr/bin/env
<meati>how would that work for guile scripts?
<anticomputer>meati: "#!/usr/bin/env guile" will translate to the path of guile in your guix profile/env
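To illustrate anticomputer's suggestion above, a minimal Guile script sketch; it assumes the usual Guile reader behaviour where the `#! ... !#` block at the top is skipped, and that `guile` is available in the profile:

    #!/usr/bin/env guile
    !#
    ;; hello.scm -- runs with whichever guile the profile provides.
    (display "hello from Guile\n")

Note that the single-argument limitation of env-style shebangs means options like `-s` cannot be passed this way; plain `guile FILE` is what ends up being run.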
<eikcaz>lfam: cool. Again, 99% of the time it will "just work" with the device ID conveniently reverting to what it was before setting `config-file' in the "broken" version
<anticomputer>ah that unfortunate moment where you guix pull a new kernel and there's no substitutes yet *warms himself by the cpu fire*
<apteryx>I'm confused by the logic used when using a release-moni
<sneek>Welcome back apteryx, you have 1 message!
<sneek>apteryx, lechner says: / Hi apteryx & Rutherther, I packaged both GNU's and Debian's Debbugs for Guix and deployed both on my own equipment, but at the time the GNU folks were committed to Trisquel so we never upgraded. There is a lot of inertia at FSF. I have admin access on debbugs.g.o
<apteryx>release-monitoring-url; in `import-html-updatable-release', BASE-URL gets set to that URL; DIRECTORY gets set to "". And then inside `import-html-release', the current edge case that does dirname on BASE-URL happens, and this makes it work
<lfam>anticomputer: Huh, there should have been substitutes!
<apteryx>lechner: awesome, did you submit the package for review already?
<apteryx>it's understandable that the FSF/GNU sysadmins won't jump to Guix cold turkey; they've been invested in Trisquel for probably decades.
<lfam>I kicked off another build
<lfam>eikcaz: I'm out of time to finish this tonight. But I'll get it done tomorrow
<apoorv569>like you can build a package in the REPL by doing `,build PACKAGE`, is there a way to build a service-type in the REPL?
<Rutherther>apoorv569: you can just use a local file as an input of your package. No need for this copy-build-system package. If you set the name via the second argument of local-file, you should be able to reference it with something like this-package-inputs.
<Rutherther>apoorv569: as for the build error you got, could you check the paths in the tar? I can't tell why it happens. Two things come to mind: I don't know how recursive behaves with files. And the tar could have ../ at the beginning of file names
<apoorv569>Rutherther: what do you mean I can just use a local file as the input of the package? the package is the archive that's being downloaded.. ATM I was testing with local-file because I don't have CI/CD set up so I just build it locally and archived it myself. that's using local-file for the source.. then using the copy build system to just copy the files into the store.
<Rutherther>apoorv569: I mean the actual package you're making, not this dep download. You can put a local-file or an origin with url-fetch in inputs. No need for a full package. local-file / origin already copy stuff to the store. Of course if you want something to unpack it, copy-build-system is fine for that.
<apoorv569>Rutherther: Sorry, I am a little confused.. ATM I just have this, https://0x0.st/8SwX.txt I am not using the gexp-compiler any more, just this package definition.. you're saying I can use the tarball as an input using local-file? but then what will be the source for the package definition?
<apoorv569>later when I set up CI/CD the source will be a url-fetch instead of local-file, the rest should be the same.
<Rutherther>apoorv569: if your goal is to use this tar.gz as the full source, with source code as well, then sure. I thought you wanted this to contain only the dependencies and that the source of the package will come from elsewhere.
<Rutherther>apoorv569: or is the foo package the final package to be used? I expected you will still have to make a package that will process the source, i.e. wrap scripts, copy them to bin, but with node modules available beforehand so you don't deal with obtaining them.
<simendsjo>I feel I sorely need Guix swag like stickers, coffee mug, t-shirt etc. to start representing. I found stickers at redbubble.com, but it doesn't look like that money will reach Guix, GNU or FSF in any way. Is there some semi-official way to get Guix swag?
<apoorv569>Rutherther: no, the tar.gz is just the built project files needed to run the server.. full source is not required for a production server. the tarball contains the node_modules directory with all the dev dependencies removed, the build directory where all the code for the server and client is, and the package.json and package-lock.json which the sveltekit docs recommend copying alongside as well..
<apoorv569>I did add foo-cli scripts in the tarball as well.. that will be used to interact with the server if needed.
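A rough sketch of the approach apoorv569 is describing, for reference: a package whose source is the pre-built tarball (local-file now, url-fetch later) installed by copy-build-system. The file name, install plan, metadata and license are placeholders:

    (use-modules (guix packages)
                 (guix gexp)
                 (guix build-system copy)
                 ((guix licenses) #:prefix license:))

    (define-public foo-server
      (package
        (name "foo-server")
        (version "0.1")
        ;; Pre-built bundle (node_modules, build/, package.json, ...);
        ;; replace with url-fetch + sha256 once CI hosts the artifact.
        (source (local-file "foo-server-0.1.tar.gz"))
        (build-system copy-build-system)
        (arguments
         (list #:install-plan #~'(("." "share/foo-server"))))
        (home-page "https://example.org/foo-server")
        (synopsis "Pre-built foo server bundle")
        (description "Pre-built files needed to run the foo server.")
        (license license:expat)))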
<mra>potatoespotatoes: hey! yes, I'm working on it. my current goals are getting 55231 merged, which requires getting 41602 merged. I'm waiting on either ludo or maxim to take a look at 41602. I haven't been able to test it myself
<mra>oh, sorry, didn't realise how old that message is
<apoorv569>I started with node-build-system actually first, used guix import to add node packages for dependencies.. but it got way out of hand, so many package definitions.. I don't want to manage 50+ package definitions
<apoorv569>then decided to maybe build it somehow and just copy the build files to the store..
<apoorv569>BTW other than this package definition I have a foo-service-type with nginx, certbot and shepherd-root-service extensions..
<apoorv569>so it will run the copied build using the `node build` command and put it behind an nginx reverse proxy to serve it
<Rutherther>apoorv569: okay, if this is the final built package, I misunderstood, sorry for that. It seems good like this.
<apoorv569>Rutherther: no worries.. BTW what do you think? will it be good for reproducibility instead of that `gexp-compiler` thing? only the tarball's hash should change now on every new release
<apoorv569>and because the tarball contains pre-built files everyone should get the same files
<jcabieces>Rutherther: Hi. Thank you for your help yesterday, pre-inst-env is now working nicely and I managed to see my search-paths (and the package has been rebuilt when search-paths have been modified)
<Rutherther>apoorv569: It is definitely fine, since it never changes, there is nothing that would make it non-reproducible. I personally don't like this approach, because it completely sidesteps the reproducibility of the build. The whole point of guix/nix in terms of reproducibility is reproducible builds; here the build itself is not reproducible. What makes it so is that you always use the same result. But as I am saying, it is completely fine to use
<Rutherther>jcabieces: good to hear that. But I think what you encountered must have been a bug. Or the manual is just wrong. Seems that the compiled guix from the system has been used rather than the sources from your worktree. So may be worth reporting to bugs.
<apoorv569>Rutherther: I agree with you.. but ATM as you know I was failing to make the gexp-compiler work.. I'll keep trying different things, so I can make the build reproducible as well, for now I guess this is fine.
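For orientation, a very rough sketch of the Shepherd part of the kind of foo-service-type apoorv569 describes above (the nginx and certbot extensions are left out); the working directory, provision name and use of the `node` package are placeholders:

    (use-modules (gnu services)
                 (gnu services shepherd)
                 (gnu packages node)
                 (guix gexp))

    (define (foo-shepherd-services config)
      ;; One Shepherd service that runs the pre-built bundle with `node build'.
      (list (shepherd-service
             (documentation "Run the foo Node.js server.")
             (provision '(foo-server))
             (requirement '(networking))
             (start #~(make-forkexec-constructor
                       (list #$(file-append node "/bin/node") "build")
                       #:directory "/srv/foo"))
             (stop #~(make-kill-destructor)))))

    (define foo-service-type
      (service-type
       (name 'foo)
       (extensions
        (list (service-extension shepherd-root-service-type
                                 foo-shepherd-services)))
       (default-value '())
       (description "Serve the pre-built foo application with node.")))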
<jcabieces>Rutherther: well, the documentation page "Running Guix Before It Is Installed" doesn't mention to type make, but it mentions to go to "Building from Git", which mentions it. So it's a bit my mistake. But if you think it's worth reporting, I can do it
<jcabieces>Rutherther: make check has errors and make authenticate doesn't exist, so maybe I'm gonna make a global "first contributor experience" issue
<Rutherther>jcabieces: are you using the latest docs? Make sure to append /devel after manual in the URL
<jcabieces>Rutherther: actually no. And the devel documentation has fixed it all... Thanks
<sfermigier>Hi. I'm totally new to Guix (and this channel). I'm trying to use Guix in a container, as an alternative to Nix.
<sfermigier>1) Is there a recommended container image to get started? I've tried
<sfermigier>metacall/guix but it doesn't seem to support the architecture I'm using for
<sfermigier>2) I have also tried a base Ubuntu 24.04 image with the `guix` package
<yelninei>janneke: managed to get to gnu hello on i586-gnu on core-packages-team. The same 4 gnulib tests are failing across various core packages (coreutils, grep, libunistring, sed, diffutils, findutils)
<efraim>is core-packages-team the next branch to get merged?
<yelninei>also what is up with the --disable-year2038 option? I had to re-add it in the various packages in base.scm to get the normal/final packages to build
<janneke>yelninei: great work; do you have patches for those packages and tests, or how did you get past them?
<janneke>ACTION looks at the code to see if that refreshes their mind
<efraim>yelninei: If you use git-blame and it says I added them then you can assume I was guessing and wasn't sure it was the correct move
<janneke>efraim: i have no idea... it was, but i saw talk about some other branch(es) jumping the queue
<efraim>I need to do more testing of whatever branch we have queued up next, but currently I'm back to backporting riscv64 support to node-bootstrap
<janneke>yelninei: it seems several people disabled the 2038 test for various platforms
<janneke>i guess that with something as experimental as hurd, it's generally OK to disable failing tests
<yelninei>janneke: I mean the --disable-year2038 configure flag for tar and findutils in commencement.scm
<yelninei>janneke: I xfailed the tests to continue (with a lot of care to not mess up gcc-final because it takes 4-5h or so). I don't know why these are failing (they succeed on master currently, which makes me sceptical)
<yelninei>e.g. the test-symlink and test-symlinkat fail with ENOTDIR instead of the expected EEXIST || EINVAL || ENOENT
<yelninei>The only patch that I am confident in is using automake@1.16.5 for automake-boot0, needed for gnumach-headers-boot0; the rest is just me trying to get something to work
<janneke>yelninei: getting it to work is great in my book; it takes a lot of effort to find out all those failing tests
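As a side note on the flag yelninei mentions, this is roughly how a configure flag such as --disable-year2038 is added to an existing package definition; findutils is used purely as an illustration here, not as a statement about what commencement.scm does:

    (use-modules (guix packages)
                 (guix gexp)
                 (guix utils)
                 (gnu packages base))

    ;; Hypothetical variant of findutils with the flag appended.
    (define findutils/no-y2038
      (package
        (inherit findutils)
        (arguments
         (substitute-keyword-arguments (package-arguments findutils)
           ((#:configure-flags flags #~'())
            #~(cons "--disable-year2038" #$flags))))))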
<h4>How do I know which binaries derivations produce? I mean, idk how to get the binary I want, or which package to build to get it
<yelninei>janneke: maybe it would help if the childhurd was cross-compiled from core-packages-team as well instead of master s.t. it runs the same libc version? But this would require building my normal desktop as well
<yelninei>i guess that will have to wait post merge. In the meantime i'll try to build up the hurd manifest
<jA_cOp>h4: something like apt-file, pkgfile, yum whatprovides on other distros?
<h4>jA_cOp: I don't know those but probably
<h4>Like for example to locate `wayland-scanner`
<jA_cOp>but I only have a few weeks of experience so far, maybe there's more to the story
<h4>jA_cOp: Afaik yeah there isn't yet an official way. But I'm searching for any way possible. Idk, maybe like searching the substitutes?
<h4>it's `wayland` that provides it, but it was just an example
<h4>ftp://ftp.freedesktop.org/pub/mesa/demos/mesa-demos-8.4.0.tar.bz2
<apteryx>efraim: I've pushed a fix for 'guix refresh --update' failing on file URIs
<apteryx>that odd case in canonicalize-url couldn't be removed, it's still required (the tests/gnu-maintenance.scm would break without it)
<apteryx>let me know if you still experience issues!
<apteryx>hm, seen in the GNOME help: "Thincast is a proprietary client. The Linux version is available as a flatpak. The default settings are recommended."
<apteryx>the GNOME help is mentioning a proprietary application, along with other free choices, to access the remote desktop via RDP
<apteryx>ah, I guess it's to cover macOS and Windows... hm.
<Deltafire>i've got a weird problem.. since a recent upgrade, i've been experiencing the "Oh no! Something has gone wrong." error screen on gnome. I've managed to narrow this down to having the 'pdfarranger' package in my home configuration
<Deltafire>even if i'm not using it, it somehow crashes my desktop
<apoorv569>is there a way I can build and test my service-type in the REPL?
<apoorv569>like for packages you can do `,build PACKAGE`?
<futurile>apoorv569: the package one is a special meta-command, I'm not aware of one for services. Really interesting question!
<apoorv569>futurile: I see.. how would one test their service-type then if not from the REPL?
<futurile>apoorv569: I haven't done so honestly. I guess create a guix shell and start it within there. Or worst case create a VM etc.
<omar_b`>Hi, I am setting up a remote server on guix and trying to add the docker-service-type; it required the containerd-service-type, which I added, but then it errors saying: guix system: error: service 'dockerd' requires 'elogind', which is not provided by any service
<omar_b`>I found that elogind is actually marked as deprecated
<omar_b`>Can anyone help or share a config example?
<omar_b`>I don't have any desktop service as this is just a remote server
<podiki>maybe you tried just elogind-service? (the -type are current, just -service aren't used anymore directly)
<apoorv569>futurile: Yea, basically I need to run it in a container or VM with a minimal os configuration.
<Rutherther>omar_b`: where exactly have you found that it is deprecated?
<podiki>elogind-service is deprecated, i assume that is what they saw
<df>that's a lot of bashes (even discounting the completion package at the end), afaics bash isn't even grafted atm
<podiki>(all -service are, or should be)
<df>and there's only one version running:
<df>which isn't even in that list, it turns out it's from /run/current-system, which also has several bashes
<podiki>why is that surprising? in guix you often have many different hashes of a package that are "used" (referenced by some package)
<omar_b`>podiki: But elogind is part of (gnu services desktop); do I really need this if I am only running a server with no desktop environment?
<df>podiki: are they different versions then?
<df>I mean, I get the difference between -static, -minimal etc
<podiki>omar_b`: you do, as per the error you got (the specific reason it is used i don't know without looking)
<df>there are even duplicates between the system ones and my personal profile which seems weird (again, to me)
<df>wait no hang on, they can't be different versions if there are multiple bash-static-5.1.16 entries
<podiki>df: depends on what you mean by "version". you can see they all have the same version as in "bash --version" would report, but the hashes (/gnu/store/<hash>-<package>-<version>) are different. the hashes are from how they are built, the inputs, package definition, ... it can change without any functional difference
<hako>omar_b`: elogind is responsible for session and seat management, it's not desktop-only.
<podiki>this is part of making guix reproducible
<Rutherther>omar_b`: an alternative is seatd. Currently you need one of them and I am not completely sure if seatd has been tested with containerd
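To make the fix concrete, a sketch of the services list for a headless server that wants dockerd, following the error message and the advice from hako and Rutherther above (assuming the standard service types; the variable name is arbitrary):

    (use-modules (gnu services)
                 (gnu services base)      ;%base-services
                 (gnu services desktop)   ;elogind-service-type
                 (gnu services docker))   ;containerd- and docker-service-type

    (define %docker-server-services
      ;; elogind provides the 'elogind' Shepherd service that dockerd asks for;
      ;; no desktop environment is implied.
      (append (list (service elogind-service-type)
                    (service containerd-service-type)
                    (service docker-service-type))
              %base-services))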
<df>podiki: so are they different builds that have been installed as dependencies of other packages?
<df>I suppose I can investigate with guix graph or similar
<df>or, well I suppose some of them could be from historical generations
<podiki>df: sort of, i wouldn't say "installed" though (as a user you are not accessing them directly). you can look at the graph yes, but more exactly look at "guix size" of a package and see everything it refers to
<Rutherther>df: no, most of them are probably from your earlier generations. Guix doesn't have that many variants of bash
<df>they wouldn't show up in my current profile if they were, would they?
<df>can guix package -I be persuaded to show packages which were installed as dependencies of other packages?
<podiki>to go back to the earlier point, each different hash means it comes from a different derivation, which captures everything needed to output that package; so it changes if anything related to that package building changes (even, and most often, not functionally changing anything)
<df>cos guix package -I bash doesn't show anything for my personal profile, which is kinda what I would have expected
<podiki>you may be misunderstanding something about how guix works, we don't "install" dependencies
<podiki>you install <package>, you just get <package> in your path for instance
<df>well, the dependency goes into the store, yes?
<podiki>everything it needs to work is in /gnu/store, but those don't go to the user
<df>is there a different hash for every package that has bash as a dependency?
<podiki>Rutherther: i'm guessing they tried just "elogind-service" which was deprecated in 2023
<podiki>df: not sure what you mean; everything that affects a package build is captured in the hash, any changes to any of those ingredients changes the hash
<df>podiki: I'm not too sure how guix size can help me, do you mean by running it on packages that depend on bash, or bash itself?
<podiki>i was just saying if you are curious about what a package depends on
<df>ieure: thanks, I will... start reading it and see how heavy going it is :D
<df>ok thanks all, looks like I have some learning to do
<df>can I use guix gc to tell me which ones are used by my current generation (without deleting older generations first)?
<podiki>yes, the --references option i think is what you mean
<podiki>or -R which i think you used before
<df>so I'll try running that directly on the bash dirs in the store
<podiki>then maybe you want --referrers? depends on which direction you want to explore, see that manual page for details
<podiki>but you'll find bash everywhere, as many programs are "wrapped" with a script to start them which requires bash-minimal usually
<podiki>or packages that have scripts included, they will refer to a bash often
<df>well, I'll just generally have a poke around
<podiki>poking around is a good way to learn!
<df>I guess what I'm not getting is why eg those programs wrapped with bash-minimal use different hashes of it
<podiki>that will be a difficult question to answer precisely, but the basic idea is that the hash depends on everything used to produce that package
<podiki>any change will be reflected in the hash, but i don't think there is any easy way to know exactly what
<podiki>and likely there are tons of different hashes for a package that is functionally identical
<df>ah yes, I think I have that blog open in a 'to read' tab somewhere
<podiki>right now there are a lot of grafts (that you were reading about), so that is probably part of what you are seeing
<podiki>part 1 of that blog series is exactly on derivations, might be useful
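Since derivations keep coming up: a small sketch showing where those per-build hashes come from; it prints the .drv file name for bash, whose hash captures every input used to build it (assuming a running guix-daemon):

    (use-modules (guix)               ;with-store, package-derivation, ...
                 (gnu packages bash))

    ;; "The same" bash can live under many store hashes because the hash
    ;; is computed from the full derivation, not from the version string.
    (with-store store
      (display (derivation-file-name (package-derivation store bash)))
      (newline))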
<df>oh... I think maybe I misunderstood the page about grafts, I thought if bash needed grafting then it would be in the bash package definition
<df>or are you saying that some packages are grafted to use different builds of bash?
<Rutherther>podiki: I know. I actually didn't send anything, my Matrix bridging is misbehaving and resending messages twice, sorry for that, I will try to figure out what the issue is
<podiki>Rutherther: no worries. i also bridge, using heisenbridge if i remember
<podiki>df: grafts you can see with "replacement" fields but everything has grafts right now due to glibc being grafted
<Rutherther>podiki: me too, but this is not actually a heisenbridge error, seems like some kind of bridge<->home server error, since I am experiencing this with all the bridges I use, which use different backends
<podiki>df: so anything that depends on glibc will get grafted to refer to the fixed version of glibc. grafts propagate up the dependency chain, so to speak
<podiki>Rutherther: ah. good luck, hopefully something useful in the homeserver logs. or in nginx/whatever you use
<df>how does stuff eventually get un-grafted? it seems like there'd be no time when it's convenient to modify glibc and rebuild everything
<df>although... I guess glibc must be upgraded on a regular basis, so doesn't that require rebuilding everything anyway?
<ieure>df, That stuff typically gets staged on the core-packages-team branch, most stuff gets rebuilt, then it merges to master and most substitutes are ready to go.
<vagrantc>TIL guix build --with-branch=PACKAGE1=BRANCH1 --with-branch=PACKAGE2=BRANCH2 ... when PACKAGE2 depends on PACKAGE1 ... works nicely :)
<podiki>yup, same with --with-latest or the git options
<vagrantc>oh wait, have to use --no-grafts too to avoid getting a graft of PACKAGE2
<vagrantc>--with-latest grabs the last tagged release (or tarball if not git?)
<vagrantc>which seems ... not the latest in my mind. :)
<vagrantc>i more-or-less knew about these before ... but it never really dawned on me how useful it was ... but i have a project that has two dependents and it is nice to be able to give a spin on testing them all together at the head of their git development. :)
<tschilp`>I'm just playing around trying to use ~treesit-install-language-grammar~ for bash, which fails with ~treesit-error Command: gcc -fPIC -shared parser.o scanner.o -o libtree-sitter-bash.so Error output: ld: cannot find crti.o:~. I've put ~gcc-toolchain~ into my home-config and it looks as if crti.o is there: http://paste.debian.net/1362712 (multiple times actually). I understand there's multiple things that could go wrong -- any ideas?
<podiki>vagrantc: yeah --with-latest will get a release, while --with-git-url is latest commit, or can specify commit/tag/branch as you saw
<podiki>all very handy, i love transformations
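The same transformations are available from Scheme, for anyone who prefers to keep them in a manifest or package file; a sketch using the guix package and its "master" branch as the example (the transformation is only validated at build time):

    (use-modules (guix transformations)
                 (gnu packages package-management))

    ;; Scheme counterpart of `guix build --with-branch=guix=master'.
    (define with-master-branch
      (options->transformation
       '((with-branch . "guix=master"))))

    (define guix-from-branch
      (with-master-branch guix))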
<Rutherther>tschilp`: you're going to have a much easier time if you just install treesit grammars from guix instead
<ieure>Agree w/ Rutherther on that, tree-sitter-bash is the package you want.
<ieure>Having recently set up treesit stuff on this cursed Mac laptop my work requires me to use, the Guix way of doing this is so, so much better.
<tschilp`>Rutherther: I actually ended up at this point, as ~tree-sitter-bash~ is in my config already, but trying to ~treesit-inspect-mode~ I get ~Error in post-command-hook (treesit-inspect-node-at-point): (treesit-no-parser #<buffer test.bash>)~
<tschilp`>maybe I do not understand some essentials ;)
<eikcaz>sneek: later tell lfam: I think I figured out the path with the least friction: if someone sets `config-file' then the service will just do (approximately) "mv ~/.config/syncthing/config.xml ~/.config/syncthing/config.xml.bak", then the news entry can just explain that their device ID might change and how to get it back (and maybe a similar note in the docs for users that used ~/.config for unrelated reasons).
<tschilp`>I will at least remove gcc-toolchain from my home-config package list again, it's turning messy already again...
<Rutherther>tschilp`: my guess is that you didn't relog to get the new env var that is exposed in your profile as long as you have emacs and tree-sitter installed there. The same goes for your gcc-toolchain issue.
<tschilp`>Rutherther: that is true, I just restarted emacs...
<tschilp>Rutherther: right, all good now -- I honestly did not expect I'd have to relog! neat.
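For completeness, the "install the grammar from Guix" route Rutherther and ieure recommend can be as small as a manifest like this (hypothetical file name; it could equally go into a Guix Home packages list, and as noted above a re-login may be needed so the relevant environment variables are picked up):

    ;; manifest.scm -- Emacs plus the Bash tree-sitter grammar from Guix,
    ;; instead of compiling a grammar via treesit-install-language-grammar.
    (specifications->manifest
     (list "emacs" "tree-sitter-bash"))

Usable with `guix package -m manifest.scm`.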
<pastor>Hello. Are any of you using `gnome-keyring' to handle SSH passphrases?
<pastor>I've seen that since version 46, which is the latest one we have, SSH support has been moved to the `gcr' package, so now using `gnome-keyring-service-type' is not enough.
<rurmeister>I spent an hour trying to figure out why I couldn't boot a new guix install with LVM on LUKS until I noticed that %base-initrd-modules doesn't include dm-crypt
<acrow>I'm trying to understand some unexpected guix behaviors. I'm running a guix system serving syncthing. So the system installs the syncthing package. I've found that the syncthing service has a problem in the newest version. So, I want to keep the old (still in /gnu/store) version.
<acrow>I put the old package def of syncthing at the top of my config.scm and reconfigure. But the newest version continues to appear. I'm puzzled.
<acrow>I'm afraid to restart the syncthing-<user> service because I believe it will use the newer problematic version.
<Rutherther>acrow: are you actually using the package definition in your service specification?
<acrow>The way you put that makes me think, no..... I assumed that the service would rebuild based on the
<acrow>syncthing package that was visible to the operating-system. Not so?
<Rutherther>acrow: no, not at all, that is not how guile works. The service uses the syncthing in syncthing-configuration-syncthing.
<Rutherther>acrow: so just set the syncthing field in your syncthing-configuration to your package and you're fine
<acrow>Rutherther: So, I'm looking at the syncthing-configuration (service). All I need to do is add (syncthing syncthing) to my configuration to get it to pick up the older package? Hmm, giving it a try.
<acrow>ACTION looks around for syncthing-configuration-syncthing
<podiki>i would also give it a different (variable) name to be clear it gets used
<acrow>I'll dig for that. This also means that I don't need to and ought not install the
<acrow>syncthing package to use the service, right?
<Rutherther>acrow: it's up to you, if you need the package's cli you probably want to install it. If not, then it's unnecessary.
<podiki>generally in guix anything needed to run should be handled by the package/service, never the user needing to install something else
<podiki>as Rutherther says, only if you want the package outside of the service for use do you need to install it
<acrow>Nice! So, it looks to be almost trivial .... it's a record getter.... that shouldn't be too hard... let's see.
<acrow>Now if I use an unusual package name -- does the service know what name to use for the binary/executable?
<acrow>Oh, it's a file-like object so we will have to do something like (file-append syncthing-old "/bin/syncthing")... I think.
<Rutherther>acrow: the service doesn't care about the name of the package. But I don't understand why you would change the package's name at all
<Rutherther>acrow: this file-append where? the syncthing field expects a package, not an executable file. It will do the appending by itself.
<vagrantc>cwebber: any chance you did some work on the mnt/reform2 rk3588 ... i recall you writing with excitement about the mnt/reform next though i cannot find the post, and wanting to work on a guix channel for it ...
<vagrantc>cwebber: i was thinking of at least trying to get a kernel building for the mnt/reform2 rk3588
<vagrantc>oh yeah, I have a WIP branch for the older reform variant and it is still there...
<podiki>acrow: all you do is add (syncthing <your-package>) to the configuration of the service
<acrow>Package name ... (define <package-name> (package ....)). Is this the one we mean?
<acrow>Thanks guys, giving it a whirl.
<podiki>yes <package-name>, aka the variable that represents the package
<podiki>most services give you something like that to specify a particular version or different package than the default used by the service
<Rutherther>acrow: that is not the package name, that is just the variable name. The service isn't even able to know what that is
<acrow>Thanks people. Before I restart the service -- is there a way to verify what the 'future' executable path will be? herd status syncthing-<user> returns the old executable but it would be good practice to do a dry-run sorta thing.
<acrow>The system reconfiguration went smoothly. that "delete method is shadowed" warning seems to be common and never
<acrow>Of course, I can just jump in the water. Rollbacks are a nice thing.
<acrow>Smooth... thank you for the insights.
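Concretely, what podiki and Rutherther are pointing acrow at looks roughly like this (the user name is hypothetical, and syncthing-old stands for the old package definition acrow already keeps in config.scm):

    (use-modules (gnu services syncthing))

    (define syncthing-old
      ;; Placeholder: put the full old syncthing package definition here.
      (@ (gnu packages syncthing) syncthing))

    ;; The service runs whatever package the configuration names.
    (service syncthing-service-type
             (syncthing-configuration
              (user "alice")
              (syncthing syncthing-old)))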
<eikcaz>sneek: later tell lfam: Thinking about it more, I think it's more important to get the current version of the patch through, and worry about the .1% after the new bindings are pushed to master
<vagrantc>hrm. the linux-libre@6.13 and linux-libre@6.12 source tarballs have been garbage collected ...
<itrsea>I like the idea of reproducible environments.
<dstolfa>i will let others give you the long story, as i'm sure many here have tried in the past
<podiki>these days your main potential issues are wi-fi and gpu
<podiki>if it is a server or similar, then really not much to worry about (ethernet and headless)
<ieure>itrsea, How Free are you looking to get here?
<podiki>but for a desktop if you want more than like 800x600 or 1024x768 and any acceleration, i think you need quite old hardware
<itrsea>I can try to install as a server, posthaste.
<ieure>RISC-V is probably the closest we're going to get to truly free hardware. Any Intel/AMD CPU has non-removable non-free components.
<ieure>I don't believe there is a daily-driveable RISC-V machine.
<ieure>If you're talking commodity hardware, you can libreboot a ThinkPad T480 or T480s; those are still good machines.
<ieure>You'd also have to swap the WiFi card. And nothing you can do about the graphics.
<itrsea>ieure: supporting daily driving, with some interaction with a Linux phone.
<ieure>itrsea, LibreBooted T480/s is probably your best bet.
<[>The iGPU on a ThinkPad T480/T480s will work without requiring nonfree firmware to be loaded, except for H.265 decoding/encoding
<[>and you don't really need that anyway
<mightysands>hey, I just wanted to ask if full-disk encryption is possible on guix system?
<mightysands>and to what degree, if it is, might it be comparable in difficulty to something like dm-crypt and LUKS
<ieure>mightysands, It supports LUKS.
<mightysands>ieure: Might you know where I can find more documentation on it? For some reason search results find me forum posts from circa 2016 where it was either impossible or buggy
<mightysands>I've set up cryptsetup on slackware before, but I was wondering if full disk encryption with GNU Shepherd on guix might add complications or not (forgive me if supposing this is plain ignorant)
<ieure>mightysands, I don't know if there's documentation for this. It's an option in the graphical installer.
<ieure>mightysands, I run FDE on Guix. It was as easy to set up as Debian; both support it in their installers.
<ieure>It is slower to boot than a Debian LUKS machine, and you have to enter the password twice on Guix.
<mightysands>Thanks ieure. You've annulled some rather silly pessimistic worries on my part
<mightysands>though what's this about having to enter the password twice?
<eikcaz>though as ieure mentions, the graphical installer will get you to a good starting point
<ieure>mightysands, Once for GRUB to unlock /boot, once in the initrd to unlock /. Debian doesn't encrypt /boot, so you don't have to unlock it.
<mightysands>Wow. I'm not even sure Slackware encrypts the boot section
<ieure>mightysands, Once the system is booted, nothing cares whether you're using FDE or not, it's way beneath the concerns of Shepherd etc.
<ieure>mightysands, No. The init system can't run until the disk is unlocked, since the binary is on the encrypted filesystem. So all that stuff has to happen first.
<graywolf>Hi Guix :) If I want to print a deprecation warning from a service, is there a special way or can I just do (display "..." (current-error-port))?
<erik``>Hi! The irc logs at logs.guix.gnu.org do not seem to be updated, search results end at 2023-01-31
<podiki>there are certainly newer logs, but search may not surface new results?
<erik``>podiki: Indeed, the newer logs are there if I know what date to look for, but I seldom do when I try to find if someone else has had the same problems that I have...
<podiki>yeah i'm not sure what is up with the search, would be nice to improve that (not sure where to find that code)
<vagrantc>i swear "guix build linux-image-arm64-generic" used to realize that it needed to cross-build out of the box, but now it tries building an x86 image?
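On graywolf's question: writing to (current-error-port) works, but here is a sketch of one pattern that appears to be used inside Guix itself, based on the warning procedure from (guix diagnostics); the service and message names are made up:

    (use-modules (guix diagnostics)
                 (guix i18n))

    ;; Emit the warning when the deprecated service value is evaluated.
    (define (check-deprecated config)
      (warning (G_ "'foo-service-type' is deprecated; use 'bar-service-type' instead~%"))
      config)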