IRC channel logs
2025-05-06.log
<weary-traveler>emacs package addition (~2 weeks ago) in case someone is looking to close some issues: #77990
<jlicht>efraim: nothing spectacular yet, but the goal is to skip all the old nodes and use a more minimalistic js runtime to generate llhttp sources (and subsequently use that to build undici, which may still be an issue due to requiring wasm stuff). I’ll flesh it out a bit more if the project is accepted
<jlicht>thing is that I saw a way forwards requiring many (more?) days of drudgery, so here’s to hoping for some funding :)
<sham1>When using package-input-rewriting for elisp packages, do I replace emacs-minimal or just emacs with the emacs variant I want to use
<gabber>the package i am defining won't build with: "Unbound variable: substitute*". the build phases are in a gexp. neither adding (guix build utils) to #:modules nor #:imported-modules fixes the issue. what am i missing?
<ruther>gabber: can you send how it looks?
<zuki>is there an easy quick and dirty way to package rust programs with non-packaged crates?
<zuki>oh well shit I was attempting that while trying to package river-luatile and ended up realizing the scale of what i needed to package was way too big for a first packaging project. I was hoping that I was just missing an option or something.
<gabber>ruther: it gets to live in bootstrap.scm (:
<ruther>gabber: you have a typo there, substitue instead of substitute
<ruther>modules and imported modules aren't necessary here, as (guix build utils) is in most build systems' modules; I think only trivial-build-system doesn't have it
<flypaper-ultimat>sham1: you replace emacs-minimal, see the section 'Emacs Packages' in (info "(guix) Application Setup")
<sham1>…I'm not sure how I missed that one. Cheers!
<sham1>It even says "--with-input=emacs-minimal=emacs"
<zuki>I'm trying to install rustup in a guix container but getting a
<zuki>mktemp: failed to create directory via template '/tmp/tmp.XXXXXXXXXX': Read-only file system
<zuki>error.
And I'm unsure how to fix this.
<nikolar>you should check how other rust packages are packaged
<flypaper-ultimat>zuki: try doing a guix pull and running again, 15 minutes ago ludo made the 'guix shell -C' /tmp/ writeable. alternatively, run your guix shell command with '--writable-root'
<old>anyone still has offlineimap broken?
<old>I'm at guix main HEAD and I still have this Python 3.11 error
<old>I still get: ERROR: AttributeError: module 'importlib' has no attribute 'machinery'
<ruther`>old: yeah, it is broken with the python script option
<ruther`>old: there is a patch on the mailing list
<old>time to write a guile variant
<old>I'm tired of these Python things that break on minor version bumping
<old>can't be that hard to write an offline imap
<lechner>old / i use mbsync. if you rewrite, please also rewrite the IMAP protocol
<ieure>I've used both offlineimap and mbsync and don't really love either.
<csantosb>For some reason, `guix shell -CN offlineimap3 python python-recommonmark -- offlineimap -c ~/.config/offlineimap/offlineimaprc -a ACCOUNT -o` fixes the offlineimap issue
<csantosb>Removing the python-recommonmark package brings it back
<old>csantosb: Python's weirdness at it again
<old>lechner: will see if I have time. I don't even have enough time to put in the blue build system right now ..
<csantosb>old: If someone else could confirm, ... I have it in my profile, so I haven't noticed the issue
<old>csantosb: hold on. downloading texlive ..
<old>I get: ERROR: AttributeError: 'str' object has no attribute '__suppress_context__'
<old>the problem lies within local_eval in Python
<old>because I have this in my rc file: remotepasseval = get_pass("olivier.dion@polymtl.ca")
<old>def get_pass(account):
<old>return check_output(f"pass {account}", shell=True).splitlines()[0]
<old>anyway. Looks like local_eval in Python is shit
<csantosb>Right, but the problem with 'machinery' disappears, right ?
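[editor's note] old's pasted `get_pass` helper above, made self-contained: the original omits the `check_output` import. This is only a sketch; `echo` stands in for the real `pass` password-manager CLI in the demo call.

```python
from subprocess import check_output

def get_pass(account, cmd="pass"):
    """Return the first output line of `<cmd> <account>` as bytes."""
    return check_output(f"{cmd} {account}", shell=True).splitlines()[0]

# Demo with `echo` standing in for `pass`:
print(get_pass("demo-account", cmd="echo"))  # → b'demo-account'
```

Note the f-string with shell=True is injection-prone; passing an argument list like `check_output(["pass", account])` avoids the shell entirely.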
<csantosb>To be exact, we need python-sphinx, which is propagated by python-recommonmark
<csantosb>... which, in turn, propagates 21 packages
<csantosb>... the one we need is 'python-sphinxcontrib-websupport' ...
<ieure>offlineimap needs Sphinx at runtime?
<csantosb>python-sphinxcontrib-websupport propagates python-sphinxcontrib-serializinghtml
<csantosb>Which makes that 'guix shell -CN offlineimap3 python python-sphinxcontrib-serializinghtml -- offlineimap -c ~/.config/offlineimap/offlineimaprc -a ACCOUNT -o' works
<old>and they say NPM is a mess
<ieure>Surprising that anything would need Sphinx at runtime.
<old>is this not a documentation generator?
<old>looks like bad inputs in the Guix package definition then
<old>should really be native-inputs and not propagated-inputs. But hey, maybe offlineimap generates docs on the fly
<csantosb>This serializinghtml thing is at 1.1.5 (4 years ago) in guix ... whereas latest 2.0.0 is from 9 months ago. Glupx.
<nomike>How can I list all files within a package?
<nomike>ieure, I'm looking for an equivalent to `dpkg-query -L somepackage`
<ieure>nomike, `find $(guix build packagename) -type f`
<sham1>old: I mean, NPM is a mess, doesn't mean Python isn't also one. Like, just look at wheels
<old>I think all languages providing a package manager end up like a mess
<old>Rust and Go also have the dependency nightmare
<old>I wonder how common lisp is doing with quicklisp
<sham1>It's almost as if letting programming languages have their own package management is a recipe for disaster. Could have instead used Nix or Guix, but *nooo*
<sham1>I guess it's because they need MS-Windows support, but still
<old>it promotes using dependencies for the sake of not re-inventing the wheel,
even for trivial things like printing hello
<sham1>At least MacOS, I'm not sure about the BSDs
<sham1>FreeBSD probably is supported, I'm not so sure about Open or Net
<old>I tried Guile once on a BSD, I think it was FreeBSD
<old>but it would be nice to have Guix on BSD and macOS
<ieure>Not going to happen on macOS, probably.
<old>because of non-free?
<old>guile is working on macOS
<old>so I don't see why not
<sham1>It's non-free and you can't compile stuff without accepting that stuff
<old>the operating system is non-free, but free software can run on it, no?
<old>GPL software is packaged for macOS, that's not an issue
<old>GCC is even available I think
<old>lechner: not yet. It's one of the next steps we have for it
<Tadhgmister>how does guix gc decide which store items to delete first? does it use access timestamps at all?
<Guest66>I have a guix package that isn't supported on my architecture. I'm able to produce a cross-compiled binary locally and wanted to host a private binary distribution server so that I can make it available to my local network. When creating a package in this scenario can I keep the existing package id `(define-public ghc-9.2 ...` or would I need to
<Guest66>make a new one to disambiguate `(define-public ghc-localbin-9.2` from the old one? That is, would guix be able to choose the correct source based on the architecture when directed to use `ghc-9.2` with two different sources exposing the same package?
<Tadhgmister>Guest66 when you say "binary distribution server" do you mean guix substitute server?
if you are just serving the binaries directly the package definition doesn't matter, and if you are serving to other systems using guix the package definition is secondary to having the others generate the same hash based on build options so it downloads the
<Tadhgmister>also do note that `guix copy` is a thing that may be of use to you if both the machine doing the cross compiling and the machine the package will be used on are both running guix, it gives you a workflow to copy over the package manually and then hard-code the store path into whatever application to use that package as a patch fix while you figure
<Guest66>It would be an x86 server cross compiling for aarch64, so not a substitute server for a build with an existing hash. I suppose a different way to ask would be what happens if I request a `guix shell "ghc@9.2.8"` from an aarch64 machine and there's no support in the public server? Will guix continue checking in other channels for a ghc supporting
<Guest66>aarch64, or will it fail once it finds a package served for ghc that only supports x86_64 and i686? Thanks for the tip on `guix copy` Tadhgmister
<Tadhgmister>if you do `guix shell ghc@9.2.8` it will look up the corresponding package definition, determine a derivation based on native compiling for the current architecture as that is the default when you do not specify `--target` or `--system`, and then if that is not present in the local store it will check sub-servers and if no sub-servers have it then
<Tadhgmister>at no point will it try to obtain a derivation that was generated by `guix build --target=aarch64-linux-gnu`
<ieure>Right.... cross-compiling doesn't produce the same hash.
<Tadhgmister>but if you specify `guix shell --system=x86...
--target=aarch...` then it probably can identify the same derivation from the server, and if it isn't on that one it will try to emulate an x86 system to do the compilation locally. I am unsure if there is a way to say "prefer failing over trying to build locally"
<Tadhgmister>oh but there is a way to offload the build right? I haven't looked into that since in my case I want to do everything from `guix deploy`, but there is probably a way to get it to offload building stuff to your x86 machine when `--system=x86...` is specified from the aarch machine
<Guest66>Interesting, I'll check out the manual for that to see if I can find it. I have an avenue forward even if I'm not able to get that set up in the short term
<ieure>Yeah. You can also use qemu to build ARM stuff on x86, but it's very, very slow.
<Tadhgmister>ieure we are talking about an aarch machine offloading cross-compiling builds to an x86, no emulated compiling
<ieure>I'm not sure that will work.
<Tadhgmister>why wouldn't that work? the very beginning was that native compiling wasn't working but cross compiling was, if the x86 device can cross compile successfully and both `--system=x86..` and `--target=aarch..` are specified, wouldn't that do the same thing?
<ieure>Hmmm. Well, try it and see, I guess. But I expect you'll have issues from the hashes not matching between the native and cross-compiled packages.
<ieure>Probably the best case scenario is the x86 machine will compile your package all the way down to the root of the graph, and you'll end up with a bunch of duplicated packages on your aarch64 machine.
<ieure>e.g. cross-compiled package foo needs glibc, but the cross-builder computes a different hash for glibc than the aarch64 machine has, so it builds another copy, and that gets pulled down in the closure of foo.
<vagrantc>cross-compilation uses an actually different compiler, library, etc.
<ieure>But, you know, try it. I'd be interested to see what happens.
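[editor's note] A sketch of the invocations being contrasted above; `hello` is just an illustrative package name. These are distinct derivations, which is why their hashes differ:

```
# Native build: derivation targets the current architecture.
guix build hello

# Emulated "native" build for aarch64 (what --system requests;
# may run under qemu transparent emulation):
guix build --system=aarch64-linux hello

# Cross-compilation from the current system; uses a cross toolchain,
# so it yields a different derivation than the native build above:
guix build --target=aarch64-linux-gnu hello
```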
<vagrantc>so --system and --target will never do the same thing ...
<ieure>vagrantc, Right, so different inputs, so different derivation.
<Tadhgmister>absolutely, a system that was built with native-compiled stuff will not share dependencies with cross-compiled stuff, that is a good point to mention
<ieure>We will solve this by only shipping cross-compiled substitutes for aarch64. If you want to build natively, you have to set up an x86 qemu to run the amd64->aarch64 cross-compile.
<vagrantc>though offloading does add an interesting angle on it ... e.g. offloading from an aarch64-linux machine to an x86_64-linux machine ... with --target=aarch64-linux ?
<Tadhgmister>vagrantc but if both `system` and `target` are specified the same on both machines it would end up with the same derivation right? there isn't any other state that goes into how the derivation should be built right? I know `--tune` is a thing but that doesn't get automatically detected based on anything right?
<vagrantc>ah, i re-read it ... specifying both --system=x86_64-linux --target=aarch64-linux ... wheee!
<vagrantc>yeah, i guess the thing to do is try it :)
<Tadhgmister>`/* Randomise the order in which we delete entries to make the collector less biased towards deleting paths that come alphabetically first ... */` ok cool, `(flags '(no-atime))` is going on my guix store mount
<identity>i want to make a script that contains a store path, and put that script into my PATH. i got a path to the script file in the store via the scheme-file procedure, but need help with the "putting it into PATH" part, any ideas?
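[editor's note] A hedged sketch of the kind of thing identity is after, using `mixed-text-file` from (guix gexp): file-like objects and packages interpolated into the text expand to their store paths. `coreutils` and the script body are purely illustrative. To actually land on PATH, the resulting file still has to be copied into a profile's bin/ directory and marked executable, e.g. via a small package definition:

```scheme
(use-modules (guix gexp)
             (gnu packages base))  ; coreutils, for illustration

;; A shell script whose body embeds a store path.
(define my-script
  (mixed-text-file "my-script"
    "#!/bin/sh\n"
    "exec " coreutils "/bin/ls \"$@\"\n"))
```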
<vagrantc>sounds vaguely like one of the wrap thingies
<Tadhgmister>I'm sure there are cleaner ways, especially if you aren't just porting a previous folder of shell scripts to guix
<Tadhgmister>it works like `mixed-text-file`, it takes a list of strings but they can also be gexps so packages resolve to their store path etc
<identity>Tadhgmister: it is slightly to the left of what i wanted, but it gives me ideas, thanks
<Tadhgmister>I figured the full package definition was overkill for you, but the little routine in the `source` field to put the file in a `/bin` subfolder and mark it as executable is probably useful to you
<vagrantc>futurile: congrats on your first self-signed commit! (I just noticed because I forgot to update my keyring branch and guix pull failed! :)
<ruther>zuki: so you aren't using the -E flag with sudo, right? Then pulling as root is irrelevant, your user's guix instance is used
<vagrantc>even if you are using sudo -E ... it uses your user's guix
<vagrantc>sudo --login would probably use the root's guix ... but not sure
<vagrantc>ACTION looks up -E having used it all the time, a.k.a. --preserve-env
<ruther>definitely, or just /root/.config/guix/current/bin/guix directly to make sure
<ruther>yeah, -E passes the env, I just got confused for a sec and somehow flipped it in my head
<ruther>vagrantc: are you using it to use the same checkout for root and user?
<ruther>sure, but you still need checkouts in root for guix system reconfigure
<vagrantc>so: guix pull && sudo -E guix system reconfigure ...
<ruther>so you do it because of the checkout I presume?
<ruther>there is a forward update check that will run as root.
Of course if you passed the cache env var with -E, it's put into your user's profile
<vagrantc>though once in a while i see /root/.cache/guix filled up with stuff that i have never quite managed to figure out
<ruther>yes, that is exactly what I am talking about
<ruther>there is a forward update check on guix system reconfigure that makes sure you're updating to newer commits, rather than older ones
<vagrantc>but i think that is from when running as: sudo guix system reconfigure ... (e.g. without --preserve-env)
<vagrantc>which i did for a while, and then switched back to sudo -E|--preserve-env
<vagrantc>because i didn't want gigabytes of cruft in /root
<ruther>so the answer to 'so you do it because of the checkout I presume?' is yes then
<vagrantc>still not understanding your question, so i cannot confirm your answer :P :)
<ruther>Because yesterday I got the idea on how to make a better workaround than -E, since using -E means checkouts in your user's directory can end up root-owned. (they won't as long as you do have the commits, and that is typically the case). The idea is to use bindfs to share the directory between users, so that in their home they appear to own the folder themselves.
<Tadhgmister>is there anything like `guix home init` to just copy over the files needed for a home config, like `guix system init`? If the answer is no I am 100% going to submit patches to `guix copy` to support a flash drive as a viable target
<ruther>vagrantc: the guix checkout (and other channels if you use any) is what you saw created in /root/.cache/guix
<ruther>Tadhgmister: no there isn't. Guix copy works only between two guix hosts
<vagrantc>ruther: i understand what is in /root/.cache/guix and largely why, and that i do not need it. :)
<ruther>so then I don't understand what you don't understand about my question
<vagrantc>well then there's a lot of hopefully inconsequential non-understanding going around.
:)
<yul>I'm updating a package that in its newest release has forked a rust crate in order to add some functionality they needed. the non-fork already exists as a defined package in crates-io.scm. Is there an established pattern for how forked packages like this should be named?
<futurile>vagrantc: thank-you, a bit scared of messing it up heh
<futurile>vagrantc: still trying to figure out how I can mostly commit to some branch/train and not potentially mess up every user on master immediately after I fat-finger something!
<ieure>yul, Probably want to make it a hidden package to avoid confusion, and name the variable "original-package-name-for-the-package-name-that-needs-it"
<yul>ieure: thanks, I'll look into how to do that
<lechner>Hi, how may I, in a build phase, select an input different from that suggested by search-input-directory, please?
<ieure>lechner, (assoc-ref inputs "package name")
<vagrantc>you have multiple inputs with the "same" contents?
<lechner>vagrantc / no, just an include folder
<ieure>I am also curious to know what situation led you to ask.
<vagrantc>just trying to figure out why search-input-directory would get it "wrong"
<lechner>that project does not yet exist on codeberg
<lechner>search-input-directory also adds a slash at the end of the stem, while search-input-file does not
<ieure>lechner, Yeah, none of the search-* procedures are a good choice here, since many packages will have an include/ directory.
<ieure>lechner, Do those .ffi files just reference /usr/include? Or do they point to a specific header file?
<ieure>If the latter -- I'd use search-input-file on the specific header.
<lechner>ieure / great idea, but both ffi files point at different headers
<lechner>ieure / vagrantc / thank you both for your help today!
<lechner>vagrantc / moral support is worth a lot!
<lechner>that's also true for GCDs, by the way, which you gave
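[editor's note] A hedged sketch of ieure's suggestion: resolve one specific header with `search-input-file` inside a build phase, rewriting the reference with `substitute*` from (guix build utils). The file and header names here ("foo.ffi", "include/foo.h") are hypothetical placeholders.

```scheme
;; Fragment of a package's (arguments ... #:phases ...) form.
(add-after 'unpack 'fix-header-paths
  (lambda* (#:key inputs #:allow-other-keys)
    (substitute* "foo.ffi"          ; hypothetical .ffi file
      (("/usr/include/foo\\.h")     ; path baked into the source
       ;; Unlike search-input-directory, this pins one exact file:
       (search-input-file inputs "include/foo.h")))))
```

With two .ffi files pointing at different headers, as lechner describes, this phase would call `search-input-file` once per header.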