IRC channel logs



<quidnunc>lilyp: The Ubuntu/Debian package didn't seem to create them, the README.debian doesn't mention anything either
<quidnunc>lilyp: Doesn't GUIX_PROFILE handle the location?
<lilyp>GUIX_PROFILE is a meta-variable
***Server sets mode: +Ccntz
<jackhill>Calling folks who have opinions about go packaging, I have a question about how to handle the /v2 (etc) version tags:
<the_tubular>Anyone tested bcachefs?
<lilyp>quidnunc: yep
<nckx>the_tubular: It's been my root file system for over a year.
<the_tubular>Damn, are there any problems I should know about before trying bcachefs, like btrfs (raid 5-6)?
<the_tubular>I will replace ZFS with bcachefs if the risks are very low
<quidnunc>lilyp: thanks
<nckx>Well, it's a file system in early development, features are still being written (snapshots landed yesterday). While it's usable and I think it will end well, ‘risks are very low’ is simply not realistic yet.
<nckx>the_tubular: ☝
<the_tubular>Also, guix home has landed! Congratz to the team!
<the_tubular>I'll wait a few more months then nckx, data I have on one of my pools is very important data
<the_tubular>I do have backups, but still I don't want to risk a pool corruption
<nckx>Oh, yeah, please don't use it for VID, I'd feel guilty by proxy :) It's promisingly stable *for an fs that's still being written* but it's not finished!
<nckx>I keep hourly back-ups. Just to sleep at night: I've not yet had to use them through any fault of bcachefs.
<the_tubular>VID : Very Important Data ?
<the_tubular>Is Kent still alone working on it?
<Guest64>vivien, Thanks a lot for sharing your process. I think this can potentially work for me. Let's say I create a channel repo like you, where the build and release process happens with Cuirass taking a continuous look at it and building binaries, and my source repo updates the channel repo upon every single commit. However, I do want my tests to run on
<Guest64>EVERY branch/commit of the source repo. Any pitfalls you see there?
<Guest64>for reference I was Guest33
<quidnunc>Why am I still seeing "hint: Consider installing the `glibc-locales' package and defining `GUIX_LOCPATH', along these lines" after installing glibc-utf8-locales and setting 'export GUIX_LOCPATH="$HOME/.guix-profile/lib/locale"' in my .zshrc ?
<vivien>I don’t use Cuirass, I use a git update hook. This hook lets me know the exact set of commits that have been pushed, so the server can build every one of them before they are accepted. I don’t know if you can run cuirass on demand, so maybe you will need to change a lot of things for your CI use case.
<Guest64>vivien: Looks like Cuirass has the capability to scan repos every so often, and build upon changes to them. At least that's what I understood from the "period" flag to it.
<vivien>If you store only the latest commit of master (or of all active branches), maybe Cuirass won’t build all of them and will skip some.
<vivien>Maybe it’s intended?
<Guest64>Maybe .. good point
<quidnunc>If I use glibc-utf8-locales does my system locale need to be one of those included in that package?
<quidnunc>How does guix know which locale to use on a foreign distro?
<Guest64>vivien: I could technically automatically create a separate "package" for each branch, and watch it build upon every commit to each of them?
<Guest64>But just getting it working with the master branch maybe a good start
<vivien>Yes, but if cuirass runs every hour and you push 2 commits on the same branch in the hour, only the latest will be built
<vivien>Maybe that’s what you want
***charles_ is now known as char
<Guest64>But looks like creating a channel-repo is really the only way to go about things, there is really no two ways about it. I can't really invoke a package build by Cuirass without having a channel in any sane way. Am I seeing this right?
<Guest64>vivien: I think Cuirass can run a lot more frequently than that. But I agree with you, I don't like it that Cuirass may not see commits in between.
<Guest64>vivien: Nevermind, Cuirass will run upon every single commit to the channel-repo
<vivien>You can use the same repo, if you add a .guix-channel file at the root it will be considered a channel. But to define your package, you need to write down the commit ID, and you can’t know it before the commit is closed. So you need to push 2 commits for each change: one with the change, and one updating the package. I think using 2 repositories is cleaner.
<vivien>(there’s specific content for the .guix-channel)
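For reference, the specific content vivien mentions is a small Scheme file named `.guix-channel` at the repository root, along these lines (the `directory` field here is an assumption, only needed when the package modules live in a subdirectory):

```scheme
;; .guix-channel, at the root of the repository
(channel
 (version 0)
 ;; Optional: where the package modules live, relative to the root.
 (directory "guix"))
```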
<vivien>Oh OK
<awb99_>does someone have a sensible bash .config that makes sense to be used with GUIX ?
<quidnunc>Why does something like "guix pull; guix gc --delete-generations=5d; guix pull;" download packages on the second "guix pull"?
<vivien>Because guix gc deleted the build system for guix, as it was not needed.
<vivien>That’s what I think, but maybe I’m wrong.
<lfam>It's basically what vivien said
<lfam>There are programs that are used to create profiles, and they are not protected against garbage collection
<lfam>So, if you decide to make a new profile with `guix pull`, you need to have the software available to do it
<zacchae[m]>would you look at that
<zacchae[m]>guix home was finally pushed to the main repo
<quidnunc>vivien, lfam: Is there a way to avoid that and keep those "essential" profile-creating packages around? I don't understand why you would want them deleted
<lfam>quidnunc: There are a few methods to protect them from garbage collection, but nothing that's like a simple `guix protect packages` command
<lfam>You can manually protect things from garbage collection by making them "gcroots". You symlink the store item that you want to protect into /var/guix/gcroots
<zacchae[m]>congratz everybody, we made it
<lfam>You could also install the relevant packages into your profile
<quidnunc>lfam: Am I crazy to think that this is strange behavior?
<jgart>oh awesome, congrats!
<jgart>does guix home work on a foreign distro?
<lfam>Or, you could use `guix environment --root=my-stuff --ad-hoc the packages you want to keep`
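A sketch of the two workarounds lfam describes (the store path and root names are hypothetical placeholders, not real store items):

```shell
# Workaround 1: register a manual gcroot by symlinking the store item
# into /var/guix/gcroots (needs root; the path is a made-up example).
sudo ln -s /gnu/store/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-hello-2.12 \
     /var/guix/gcroots/keep-hello

# Workaround 2: let guix register an indirect root for you.
guix environment --root=my-stuff --ad-hoc hello
```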
<quidnunc>lfam: (thanks for the workarounds)
<lfam>quidnunc: I understand that it's counterintuitive
<lfam>It's typical in Guix that build-time dependencies are not protected against garbage collection, but the decision about what to garbage collect is not made based on "build time" vs "run time", but something much simpler
<lfam>After building a package, Guix scans the built output in /gnu/store for strings that look like a directory in /gnu/store. When it finds one, it records this in the Guix database as a "reference" of the built package
<lfam>Then, when you install a package, Guix knows to also download the references. And when garbage collecting things, this database of references is used as well
<lfam>So, your profiles are also store items. (You can do something like `ls -l /var/guix/profiles/per-user/quidnunc` to look at that)
<lfam>If the built profile does not include references to the things used to create it, those things are not protected against garbage collection
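The scanning step lfam describes can be sketched in a few lines of Python. This is only an illustration of the idea, not Guix's actual implementation; the hash alphabet is the daemon's nix-base32 set, which omits e, o, t, and u:

```python
import re

# Store paths look like /gnu/store/<32-char base32 hash>-<name>.
STORE_RE = re.compile(rb"/gnu/store/[0-9a-df-np-sv-z]{32}-[\w.+-]+")

def scan_references(data: bytes) -> set:
    """Return every string in DATA that looks like a store path."""
    return set(STORE_RE.findall(data))

# A built output embedding a reference to a (fake) glibc:
blob = b"\x7fELF\x00/gnu/store/0123456789abcdfghijklmnpqrsvwxyz-glibc-2.33/lib/\x00"
print(scan_references(blob))
```

Each match found this way is what gets recorded in the database as a "reference" of the built item — the same graph that `guix gc --references` lets you inspect.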
<lfam>Now, maybe we should invent a special case for profiles to protect these un-referenced build-time dependencies from garbage collection. You are not the first person to ask about this!
<nckx>Don't --{gc-keep-outputs,gc-keep-derivations}=yes address this? (Not tried.)
<lfam>But, the tricky thing is, the next time you build a profile, the same programs may not be used, especially if you have run `guix pull` since the last time you created a profile. Since `guix pull` updates Guix itself, it's expected that it may also change the dependencies of Guix, and the programs used to create profiles
<lfam>Maybe nckx!
<lfam>Worth a try
<lfam>I never really dug into those
<nckx>Those are guix-daemon options, by the way, quidnunc.
<quidnunc>lfam: Thanks for the explanation, I see why it may be tricky to implement
<lfam>I hope this helps explain the situation a bit. If you are still curious, I recommend reading the docs of `guix gc`, specifically the --references, --requisites, and --referrers options
<lfam>Those options let you inspect the graph of references that are recorded in the database
<nckx>For the record, I don't agree that this is strange behaviour, or that protecting un-referenced things makes sense.
<lfam>I feel the same way but maybe I have Stockholm syndrome
<lfam>I think it's a common question from new users
<jgart>poor man's guix.el:
<jgart>wanted to share in case anyone finds that one-liner useful
<nckx>Yes, we love our bondage discipline since it gives us so much in return.
<jgart>in their own workflow
<quidnunc>nckx, lfam: I can understand that there are cases where you want the current behavior but most of the time you want a "get rid of packages/builds I'm not going to use anymore"-command
<quidnunc>and that doesn't include the stuff I need to run guix pull when nothing has changed
<lfam>quidnunc: I think it's a case where the behaviour emerged out of "how Guix works", and "how Guix works" is mostly designed for `guix install` and `guix remove`
<lfam>I'm not sure this particular behaviour was "designed"
<lfam>The current implementation of `guix pull` is only a few (?) years old
<lfam>It used to be a special case and wasn't handled by the same machinery as `guix package / guix install`
<lfam>It was a big improvement to make `guix pull` just another case of `guix package`, but maybe there are some more rough edges to sand down
<lfam>(If you ever feel like `guix pull -l` is too slow, try `guix package -p ~/.config/guix/current -l`)
<nckx>the_tubular: I think the relation's best described as ‘Kent, with several people helping’ (and not just with bug reports).
<quidnunc>I would note too that most other package managers' "garbage collection" doesn't work this way (nix, apt, etc)
<nckx>Yeah, very true, ‘guix pull’ used to be a very weird and special beast (not in a good way).
<nckx>It's the exact same algorithm as Nix.
<nckx>Apt doesn't have garbage collection like this.
<quidnunc>nckx: I don't think I need to download anything when running "nix-env -u" after running a garbage collection
<nckx>I mean, there's no concept of myriad different builds in apt, it's trivial to say ‘this needs binutils, so keep binutils’; that's nonsensical in Nix/Guix land.
<nckx>quidnunc: That's v different from saying their garbage collection works differently.
<quidnunc>nckx: It's true that apt is different but "clean" and "autoclean" don't result in apt-get update re-downloading binaries
<nckx>See above.
<quidnunc>nckx: Right, it is a different system/design. I'm talking about the "use case" or user experience
<quidnunc>nckx: I'm not talking about implementation
<nckx>Hm :)
<nckx>I do wonder if one or both of said daemon options would make sense to include by default.
<quidnunc>though nix has a similar/same design but doesn't redownload the "build essentials"
<nckx>(Then begins the ‘why is Guix so huuge’ other side of the argument, but still. It would throw away fewer things likely needed later.)
<nckx>quidnunc: That's because Nix doesn't have a counterpart to ‘guix pull’.
<quidnunc>(I'm also talking from the point of view of someone mainly interested in using substitutes)
<nckx>There's no packages-as-code base that gets built.
<quidnunc>lfam, nckx: Anyway thanks for the suggestions about workarounds. I'll probably install the packages that are "always" needed, maybe into a special user profile
<lilyp>On the front of pulling packages just so it can run, Guix is still the minor offender if we compare it to language package managers :P
<lilyp>or gradle, lol
<awb99>I am trying to add USER SERVICES to my guix config. I cannot figure out how to pass the script dir for this. I guess I have to create a .config/.bashrc file, where I export some kind of path.
<nckx>quidnunc: That will get you 95% of where you want to be! I think it will still happen that you have <foo> installed into your hacky profile, and ‘guix pull’ will download another <foo>, and that just bugs me. 😛 That and the manual aspect (same with hard-coding some ‘treat these specially because guix pull tends to use them’ packages — yuck).
<jgart>hi in order to compile dwm from source without writing a package definition I need to do the following in an environment containing dwm: make CC=gcc FREETYPEINC=/gnu/store/j602xc2fnj15rqnj8x1vnmpq6qzv0n35-freetype-2.10.4/include/freetype2/
<jgart>I first ran guix environment dwm --pure
<jgart>in the dwm git repo checkout
<jgart>is there a shortcut for referencing FREETYPEINC=/gnu/store/j602xc2fnj15rqnj8x1vnmpq6qzv0n35-freetype-2.10.4/include/freetype2/
<jgart>at the terminal?
<jgart>without having to look for it manually in /gnu/store?
<jgart>without having to look for the full store path, that is
<jgart>something like readlink -f $(which emacs)
<jgart>or readlink -f $(which freetype) but to find a useable /gnu/store/j602xc2fnj15rqnj8x1vnmpq6qzv0n35-freetype-2.10.4/include/freetype2/ directory
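Two possible shortcuts, both untested assumptions about what the dwm environment provides: `pkg-config` (which dwm's inputs should pull in) can compute the include path, and `guix build` prints a package's store path directly:

```shell
# Inside `guix environment dwm --pure`:
make CC=gcc FREETYPEINC="$(pkg-config --cflags-only-I freetype2 | sed 's/^-I//')"

# Or resolve the store path explicitly (builds or substitutes if needed):
make CC=gcc FREETYPEINC="$(guix build freetype)/include/freetype2"
```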
<qzdlns[m]>hi guix, should i be able to access arbitrary web services from other lan devices on the standard networkmanager config (I would assume permissive iptables defaults)? trying to access jupyter from another machine with no joy
<lispmacs[work]>qzdlns[m]: I believe the answer is yes
<qzdlns[m]>lispmacs[work]: bugger, i thought as much. thanks for responding
<lispmacs[work]>how much do you know about computer networking (ip addresses and such?)
<qzdlns[m]>comfy to introspect iptables
<lispmacs[work]>okay, sounds like you know enough to check your ip addresses, subnetting, etc.
<qzdlns[m]>yup other boxes can run servers totally fine
<qzdlns[m]>no vnet or layer magic happening on the device, and i can stat local services from this guix box
<qzdlns[m]>just ‘connection refused’ on jupyter and other arbitrary web servers
<lispmacs[work]>sorry, is jypter on your guix box or another box?
<qzdlns[m]>ssh is fine though
<qzdlns[m]>jupyter is on the guix box
<lispmacs[work]>are you able to access it from the same (guix) box locally?
<qzdlns[m]>yeah no troubles on the lo address or localhost
<lispmacs[work]>connection refused sure sounds like a firewall issue, unless it is a jupyter config issue. Maybe if you provided a link to your config.scm somebody would see something
<jgart>related to jupyter, I'm running jupyter in a guix container and I need to execute some python code (from python-notebook library) inside the container environment running jupyter
<jgart>I had no luck with --expose or --share flags of guix environment
<jgart>here's the code and particularly the guix command I'm running:
<jgart>notebookapp.list_running_servers() does not find any jupyter notebooks because they are running inside of `guix environment --container`
<jgart>any suggestions are much appreciated
<qzdlns[m]>lispmacs[work]: for the config, but basically, standard %desktop-services
<qzdlns[m]>lispmacs[work]: thanks for the help, I will try again tomorrow
<lispmacs[work]>qzdlns[m]: okay. my next thought was to wonder if it had something to do with the port number involved
***califax- is now known as califax
<char>I am creating a package. It builds fine but the tests fail. When I do guix environment --pure, it builds fine and the tests pass. How to debug? When auto testing, it says the library cannot be found.
<char>It is cmake build system
<apteryx>char: build with --keep-failed (-K)
<apteryx>then 'cd /tmp/guix-build-*'; . environment-variables; cd build; <whatever debugging you wish to do>
<char>apteryx: I would think if it can't find the library then the error is related to the environment variables.
<apteryx>actually if it builds fine and fails in 'guix environment --pure' you probably want to get closer to the build environment by using 'guix environment --container'. This will disable networking. If it still passes, try 'rm /bin/sh' once in the container
<char>everything works in guix environment --pure
<apteryx>is the library it doesn't know about built by the package?
<char>I'm not sure how to see that, but even in guix environment --pure I cannot locate the .so
<apteryx>the check phase typically occurs after the build phase but before install
<apteryx>perhaps you can workaround it by setting LD_LIBRARY_PATH to where the .so lives in the build
<apteryx>else reorder the check phase after the install phase
<apteryx>not sure
<char>I would like to set LD_LIBRARY_PATH, but I don't know the location of the .so
<char>apteryx: is there a best way to set environment variables? I am planning to add a build-phase
<apteryx>in your failed build, try using the "find -name '*.so'" command
<apteryx>then add a phase before 'check, that does (setenv "LD_LIBRARY_PATH" "/path/to/built/libs")
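In package-definition form, the suggested phase might look like this sketch (the directory is a placeholder you'd fill in from `find -name '*.so'` in the failed build tree):

```scheme
(arguments
 `(#:phases
   (modify-phases %standard-phases
     (add-before 'check 'set-ld-library-path
       (lambda _
         ;; "lib" is a placeholder for wherever the built .so lives.
         (setenv "LD_LIBRARY_PATH"
                 (string-append (getcwd) "/lib")))))))
```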
<apteryx>what is the package you are building?
<char>apteryx: the .so is supposed to come from an input (wxwidgets), but the .so is located in the store and not the path. the only output is "out".
<iskarian>sneek, later tell civodul: I don't really have an opinion w.r.t. ocl-icd; I don't use (or know anything about) the software, I was just fixing bugs!
<sneek>Got it.
<Guest64>Is there an example somewhere of a working git-fetch over ssh? I can't seem to get it to work from a non-public repo
<Guest64>Actually, I haven't tried a public repo
<Guest64>I don't think that even matters, I can't authenticate using ssh
<jsoo>does anyone know how the emacs-build-system works? seems like the themes from emacs-base16-theme are not installed properly
<dongcarl>Hmmm, any reason why guix's glibc-dynamic-linker for powerpc64(le) is under /lib instead of /lib64?
<dongcarl>It seems like glibc and gcc both expect them to be under /lib64
<char>apteryx: thanks, the tests are working now. The install is not quite working though, since there is no `ninja install`.
<jsoo>ah, found the #:include argument. I think that'll fix it.
<civodul>Hello Guix!
<sneek>civodul, you have 1 message!
<sneek>civodul, iskarian says: I don't really have an opinion w.r.t. ocl-icd; I don't use (or know anything about) the software, I was just fixing bugs!
<civodul>"just fixing bugs"... that's a lot!
<the_tubular>Anyone is doing networking stuff with guix ?
<RRRRedEye[m]>what does networking stuff mean?
<the_tubular>Umm, I want to configure a firewall and also use PPPOE
<the_tubular>On a guix VM
***littleme_ is now known as littleme
<futurile>morning all
<the_tubular>Hey futurile
<rekado_>on a machine with CentOS 7 I booted with user_namespace.enable=1, but “guix environment --container --ad-hoc coreutils” throws this error:
<rekado_>guix environment: error: clone: 2080505873: Invalid argument
<rekado_>did I forget to enable something else?
<rekado_>ah, /proc/sys/user/max_user_namespaces is set to 0
<makx>anyone have a hardware recommendation for a USB Wifi dongle that works with libre linux? ideally obtainable in the UK
<civodul>rekado_: did it eventually work?
<civodul>makx: hi! i have one but i don't even remember what brand that is and it's not written on it
<civodul>it's kinda big, like 4-5cm, but it's reliable
<makx>civodul: getting a bit frustrated, I have bought 3 dongles and none of them works (even though of course I tried to buy in a targeted way)
<makx>but then it turns out that it's not exactly the correct model and they use a chip that needs non-free firmware
<vivien>I’ve been in the same situation
<PurpleSym>Is there a way to fix CI evaluations that failed with ‘cannot build missing derivation’?
<PurpleSym> is not helping in determining which packages were broken by my Haskell updates.
<makx>is there anywhere where I can find out what this guix-home stuff is?
<jgart>wip darkhttpd service:
<jgart>Any feedback is much appreciated
<civodul>mjw: hi! apteryx & i are looking at .gnu_debuglink vs. build IDs:
<civodul>perhaps you could shed some light :-)
<civodul>PurpleSym: it's a bug we should discuss with mothacehe
<rekado_>civodul: yes, it worked after setting max_user_namespaces
<rekado_>we’ll soon have this enabled on all cluster nodes
<rekado_>we’ve been having problems with rstudio, which would load system libraries when it really shouldn’t. I’d like to tell people to start rserver in a container to bypass this class of problems.
<mjw>civodul, long bug, which comment is the key issue?
<mjw>civodul, the .debug_link crc is calculated over just the separate .debug file content.
<mjw>civodul, the build-id is captured over the "full build", before any "separation".
<mjw>civodul, which usually means doing a sha1 over all the sections in the ELF file
<mjw>it should capture the .debug files too because those hold most information about the build environment (like compilation path, compiler flags used, etc.)
<civodul>rekado_: nice that you can turn that on
<civodul>mjw: so the hash is calculated over debug info as well?
<mjw>If you then separate the file in code/debug parts you keep the build-id as is, to match the unique parts together again.
<mjw>civodul, yes
<civodul>so gdb just checks that the separate debug file has the same build-id as the code, right?
<civodul>it doesn't matter whether it really is the sha1 of all those sections, does it?
<mjw>that is kind of the whole point of the build-id, that it captures the whole build environment, not just the generated code, but also how it was generated (which is what the .debug sections kind of represent)
<mjw>civodul, no, it doesn't need to be a hash over the actual bits produced. It can be a completely different hash, it can be a different number of bits (but not too short, they need to be globally unique).
<civodul>ok, so we could have our own way of calculating build IDs
<mjw>civodul, all that really matters is that it uniquely identifies this binary blob. If any input, source, compiler, flags, etc. changes, it should be unique.
<mjw>which probably reminds you of something :)
<civodul>certainly :-)
<mjw>but you need a unique one for every (loadable) output binary
<mjw>Now, how that works out with grafts however... I am unsure I really understand the interaction.
<civodul>grafting rewrites store references in binaries, but the code remains the same
<civodul>so i think it should be possible to use debug info from the ungrafted code with the grafted code, and vice versa
<civodul>which would mean we can keep the same build ID across grafting
<civodul>i guess we'll have to give it a try
<PurpleSym>civodul: Not just annoying, pretty serious. I think there’s no substitutes for openjdk right now due to this bug, leading to local builds of the entire openjdk chain, which needs a shitton of diskspace and usually fails if /tmp is a tmpfs.
<cbaines>PurpleSym, should have good substitute availability, for example, I think it has substitutes for openjdk currently
<PurpleSym>cbaines: I can see it with `guix weather`, but `guix build` is trying to build it locally. Strange.
<cbaines>PurpleSym, are you on Guix System or a foreign distro?
<PurpleSym>Foreign, cbaines.
<cbaines>PurpleSym, right, substitutes from will only start working if you authorize the signing key
<cbaines>while for Guix System, this change would happen automatically if the default signing keys are used, that isn't the case for foreign distros
<PurpleSym>Ah, right, it silently discards unauthorized servers. I forgot…
<PurpleSym>Still, bordeaux having substitutes does not solve the initial CI problem unfortunately :(
<civodul>PurpleSym: by default recent Guix fetches from both bordeaux. and ci., so it kinda solves the issue
<civodul>that said, let's do two things: 1) report the issue, and 2) restart the builds
<civodul>i offer to take care of #2 :-)
<cbaines>As noted here though, I'm guessing a large proportion of users on foreign distros aren't getting substitutes from, since they haven't authorised the signing key
<PurpleSym>civodul: There supposedly is an issue/fix already according to
<PurpleSym>Thanks for #2 :)
<singpolyma>What is the difference between ci and bordeaux?
<ixmpp>Is there a way to add paths to the list of paths mapped to profiles?
<ixmpp>I wanna try and prevent some collisions
<ixmpp>/share/mime seems common
<civodul>PurpleSym: i don't think has anything to do with the issue at hand; missing .drv files are caused by insufficient GC protection, i think
<civodul>singpolyma: bordeaux.guix was announced at
<abrenon>hi guix
<abrenon>why are there a pandoc and a ghc-pandoc package in haskell-xyz ?
<abrenon>is it a common pattern when a CLI tool is implemented in a language with a specific prefix for it, and exists as "the package in the language" + "the package as a language-agnostic CLI tool" ?
<abrenon>or is it just due to the particular needs of pandoc ?
<roptat>hi guix!
<roptat>abrenon, I think it's specific to pandoc
<roptat>usually, when a package provides a binary, we have only one package that has the same name as the binary, independently from the implementation language
<pkill9>abrenon: it's common to use the prefix for CLI tools written in that language, from what I've seen
<pkill9>i guess if it also provides a library idk
<roptat>yeah, I think in the case of pandoc, it's a bit difficult. my understanding is that it provides a haskell library, that you get with ghc-pandoc, and a binary that you get with pandoc (statically linked, so you don't need to install haskell libraries too, maybe?)
<abrenon>hi roptat and pkill9 !
<abrenon>thank you for your answers
<abrenon>ok, so it's a matter of compilation options, I understand
<abrenon>yeah, pandoc exists as a library, it's distributed with hackage
<abrenon>but I think building the package also provides you with an executable, as shown in the package description
<abrenon>(there's a library section but also several executable sections)
<roptat>I think it's to reduce the number of runtime dependencies?
<abrenon>that'd make sense, it has a crazy number of deps
<abrenon>so I'll strip the ocaml- prefix for the package I'm working on
<roptat>abrenon, yeah, like bap instead of ocaml-bap
<roptat>there is actually a short section on package naming in the manual:
<roptat>maybe that can help?
<abrenon>very helpful indeed, thanks again ! : )
<dhruvin>can anyone build a guix home-environment with a single service that extends home-bash-service-type?
<dhruvin>(simple-service 'failing home-bash-service-type (home-bash-configuration)))
<dhruvin>this fails somehow
<dhruvin>any ideas on what I am getting wrong here?
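One guess at what's wrong: `simple-service` supplies an *extension* value to the target service type, and home-bash-service-type expects a `home-bash-extension` there, not a full `home-bash-configuration`. An untested sketch of both forms (field names and module paths are assumptions about the freshly-landed guix home):

```scheme
(use-modules (gnu home)
             (gnu home services)
             (gnu home services shells)
             (guix gexp))

(home-environment
 (services
  (list
   ;; Instantiate the service with a full configuration …
   (service home-bash-service-type (home-bash-configuration))
   ;; … and extend it with a home-bash-extension, not a configuration.
   (simple-service 'my-aliases home-bash-service-type
                   (home-bash-extension
                    (bashrc (list (plain-file "aliases"
                                              "alias ll='ls -l'"))))))))
```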
<zimoun>cbaines, PurpleSym: yes, the missing key authorization on foreign distro is annoying.
<zimoun>civodul: no, recent Guix does not fetch from both Bordeaux and Berlin, as reported here
<cbaines>zimoun, do substitutes from work for you if you authorize the relevant key?
<nckx>Good morning, Guix! I regret to inform you that you're dead, because something *ackage-related broke tests/lint. Does that ring a bell with anyone?
<dongcarl>Question: Any reason why guix's glibc-dynamic-linker for powerpc64(le) is under /lib instead of /lib64? It seems like glibc and gcc both expect them to be under /lib64
<nckx>The ‘reason’ (well, lack of) is there's no good reason for /lib64, and Guix has never used it on any architecture. Can't you tell glibc/gcc to look in (standard?) /lib?
<nckx>Ha, just received a bug report for that exact test failure :) Time to dedupe…
<nckx>dongcarl: Is this only for the linker-loader, or for all dynamic libraries?
<nckx>(And hi! Long time no IRC.)
<dongcarl>Just the linker loader
<zimoun>cbaines, I have not tried yet. :-)
<dongcarl>I'm looking at, and it seems the "standard" is for powerpc to be under lib64, which differs from all the other architectures
<nckx>Interesting dongcarl.
<dongcarl>glibc also lists the /lib64-prefixed paths under SYSDEP_KNOWN_INTERPRETER_NAMES in sysdeps/unix/sysv/linux/powerpc/ldconfig.h rather than the /lib-prefixed ones
<dongcarl>nckx: Did you say you have an existing ticket for this? Perhaps it's better discussed on the ML
<nckx>I wonder if it would be a good return on investment to just force /lib at the root of the toolchain, to avoid having to carry (if riscv64 "/lib64" "/lib") in places. But I don't know if we would.
<nckx>dongcarl: No no, that was for my previous line about an unrelated issue.
<dongcarl>nckx: Ah okay I see
<dongcarl>I think forcing /lib might make sense for the dynamic libraries, but it might be unnecessary work for the linker-loaders
<dongcarl>I will open an issue
<nckx>We've definitely discussed /lib64 before, in general, and why it's not a good idea. This is more specific though. I dunno… I mean, don't forget that the standard says ‘/lib64/’, not ‘$prefix/lib64/’, so we're already using a different name 😉 (Is this a strong argument? Probably not, but I feel like there might not be any in either direction.)
<nckx>Yeah. Dunno. Good plan.
<Aurora_v_kosmose>Can Guix manage services on a foreign distro?
<nckx>I'm just weary about it being a permanent gotcha for that One Weird Architecture we'll have to carry around forever.
<dongcarl>Ah okay I see
<dongcarl>It's what I'd call a Guix-ism :-)
<nckx>Aurora_v_kosmose: No.
<nckx>Guix won't really manage a foreign distro at the ‘admin level’ at all.
<nckx>You can have a Shepherd instance running as your user, which runs ‘user service’, but then these aren't currently managed by Guix. The new & unfinished ‘guix home’ might change that.
<nckx>*user services
<Aurora_v_kosmose>nckx: Ah, Guix Home is specifically what made me wonder about that.
<nckx>dongcarl: We do like our Guixisms!
<Aurora_v_kosmose>Well that and "Guix has a broad definition of “service” (see Service Composition), but many services are managed by the GNU Shepherd (see Shepherd Services)." Many suggests not "all".
<Aurora_v_kosmose>(From 10.8 Services)
<nckx>Yes, but the services that aren't are even more Guix System-specific.
<nckx>Stuff like ‘create and populate /etc’.
<nckx>‘Running’ those things on a foreign distro is more akin to a total invasion 😉
<Aurora_v_kosmose>Guess I'll have to see where Guix home goes.
<nckx>I'm excited.
<Aurora_v_kosmose>Huh, that branch has a lot of unrelated commits.
<Aurora_v_kosmose>ah nvm, graphing view glitched out
<Aurora_v_kosmose>That makes more sense.
<Aurora_v_kosmose>Guix has pretty much become my main dev environment these days, outside of work anyway.
<nckx>Is ‘glitched out’ a polite phrase for ‘user error’ or did git actually return bogus data?
<singpolyma>Is there a guix package containing `dig` ?
<singpolyma>oh, I found it. it's an output from bind. Can I use only a single output as an input in another package?
<Aurora_v_kosmose>nckx: A mix of user error and the way I had my Emacs buffer setup making it confusing.
<nckx>("foo:bar" ,foo "bar")
<nckx>singpolyma: ☝
<singpolyma>nckx: thanks
<nckx>As always the label is merely conventional.
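Spelled out, nckx's pattern in a hypothetical consumer package's `inputs` would look something like this (assuming the bind package's Scheme variable is `isc-bind` and `dig` lives in its "utils" output):

```scheme
(inputs
 ;; Pull in only bind's "utils" output, which contains `dig'.
 `(("bind:utils" ,isc-bind "utils")))
```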
<nckx>Aurora_v_kosmose: Spooky. Glad you noticed.
<nckx>singpolyma: Your need for dig as an input makes me curious as to what you're packaging.
<Aurora_v_kosmose>nckx: Qubes sometimes has some minor graphics distortion when you resize things too quickly. Resizing again fixes it.
<singpolyma>nckx: private package for a uacme hook script
<nckx>I've been using dehydrated but I'm not entirely confident it will be updated when required (hard to tell if upstream is just resting, or dead). Uacme sounds like an interesting alternative. Thanks!
<nckx>Also, thanks.
<singpolyma>I have been using a combination of certbot and dehydrated, but for my new setup uacme seemed the best fit, it's really very single-purpose unix-y
<nckx>Are you planning on packaging it for Guix soon? Otherwise I probably will as side-effect of trying it out.
<singpolyma>I hadn't actually planned on it, since we're using guix as foreign and uacme is in our base distro
<nckx>If the blurb is true it won't take long :)
<singpolyma>Why does guix use bash-minimal for /bin/sh instead of something like dash?
<nckx>Dash doesn't support bash scripts :o)
<nckx>Bash is the GNU Bourne shell, and has a lot more features than mere POSIX shs.
<singpolyma>Sure, but this is for /bin/sh; obviously it should use bash for /bin/bash :)
<vivien>I’ve heard that a few people were using /bin/sh as if it were bash
<nckx>Is there a reason to use Dash, though? If we did, you could ask why we don't use Bash… :shrug: With bash being more featureful, arguably more widely used, and GNU.
<nckx>Test: 🤷
<vivien>nckx, what IRC client are you using?
<nckx>The best chat.
<vivien>And you just type :shrug: and it shrugs?
<vivien>Doesn’t work on my side :(
<vagrantc>back in the day, ubuntu and later debian switched to dash as /bin/sh because it actually demonstrated a significant speedup at boot time, but that was a land of init scripts and so on...
<nckx>(Settings → auto replace and a boatload of emoji 🌈 pilfered from other channels over the years.)
<noisytoot[m]>My computer decided to do a btrfs check when booting, ran out of memory, and crashed
<noisytoot[m]>How can I cancel the check?
<vivien>I only have the replacement for the :(
<nckx>noisytoot[m]: see fsck.mode.
<vivien>I guess I’ll have to collect emojis
<nckx>🐠 🐡 🐟 🎣
<nckx>noisytoot[m]: But your system should not decide to do a btrfs check at all. Which commit are you on?
<vivien>Also unrelated, but does HexChat display correctly underscores?
<vivien>Mine does not :(
<vivien>Your penultimate message was blank to me ^^
<nckx>Which font do you use? Mine had a font rendering bug (I don't think it was in HexChat directly, but HexChat may well use, er, ‘seasoned’ APIs that few others do) where the bottom row or so of pixels got cut off with some fonts (notably Cantarell), making gs look like qs, which would probably lead to what you describe. But it also seems to have magically fixed itself (which further implies a cause & fix outside of HexChat).
<nckx>So maybe play around with fonts to see if it's related.
<vivien>So it was a font issue
<vivien>It came with "Monospace 9"
<nckx>Actually, it hasn't fixed itself, it's just very subtle and doesn't affect underscores [enough] here.
<nckx>There's still a bit missing from my gs.
<vivien>I’m on Sans 13 and I have the underscore
*nckx ᕕ( ᐛ )ᕗ → Thai food.
<vivien>(but the channels list is broken lol)
<\a>nckx: I don't know because I can't boot
<wigust>Hi Guix
<\a>nckx: booting from an older generation works
<Aurora_v_kosmose>At least the fallback mechanisms work. Nice.
<\a>nckx: the latest guix boots
<nckx>\a OK, that's what I expected.
<nckx>The check is launched due to a typo since fixed, but btrfs OOMing is just… well, btrfs being btrfs as I know it, TBH.
<attila_lendvai>it would be really nice if union-build were a little smarter and did at least an alphanumeric sort at collisions... but the required information is not available at that low level. and parsing the paths feels fragile, although any breakage would only revert it back to the current behavior (first in an unordered list)
<attila_lendvai>or can i use e.g. %store-prefix to drop it from the front, and then drop the hash?
*attila_lendvai has found store-path-base, which is exported, and embarks on a journey with it
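[Editor's note: a sketch of how store-path-base could be used for the sort key attila_lendvai wants, assuming store items are named /gnu/store/&lt;32-char-hash&gt;-name-version. The helper name is hypothetical.]

```scheme
;; store-path-base would yield "<hash>-name-version"; dropping the
;; 32-character hash plus the following "-" (33 characters) leaves
;; "name-version", which can be compared to pick the alphanumerically
;; last entry at a union-build collision.
(define (collision-sort-key file)
  (string-drop (store-path-base file) 33))
```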
<Aurora_v_kosmose>nckx: \a: Actually, I'm curious about that. btrfs ran out of memory while checking a filesytem?
<Noisytoot>Aurora_v_kosmose, yes
<nckx>Oh, you're back ☺
<nckx>How'd you manage to get K-lined?
<nckx>Noisytoot: Out of curiosity, how big was this file system and how much RAM do you have?
<nckx>I will do nothing with this info beyond saying ‘huh’.
<Noisytoot>nckx, it's a part message, not a real K-Line
<nckx>I thought it would be prefixed with Quit: if it were fake.
<nckx>HexChat being friendly I guess.
<Noisytoot>that's quit messages, not part messages
<Aurora_v_kosmose>So, apparently it's a known problem, and a workaround for --mode lowmem exists.
<Noisytoot>according to df -h: the entire filesystem is 466G, I have used 180G, and have 286G left, but that may be inaccurate because I'm not sure how it deals with compressed subvolumes
<Noisytoot>I have 4GB of RAM
<Aurora_v_kosmose>Now the issue is that forced recovery doesn't apparently use the lowmem option...
<Aurora_v_kosmose>Is it something Guix-side that forced the run or btrfs-side?
<nckx>I actually read about that when writing the fscker but had no idea it was that likely in practice.
<nckx>I should probably (if repair "--repair" "--lowmem"), then.
<nckx>Aurora_v_kosmose: Guix bug.
<Aurora_v_kosmose>Ah, got it. I won't bug them with it then.
<nckx>The OOM is all theirs, but I suspect they're aware of that minor issue 😉
<Aurora_v_kosmose>It's pretty much a guaranteed issue past a certain filesystem size (unless you can afford 100s of gigabytes of ram). Hence lowmem.
<Aurora_v_kosmose>The issue that's still WIP though is that --repair doesn't work with lowmem.
<nckx>”The possible workaround is to export the block device over network to a machine with enough memory.”
<Aurora_v_kosmose>They probably mean iscsi. But yeah, past a point... I doubt my server would have enough to check a full shelf raid.
<Noisytoot>my server with the most RAM has 6GB
<Aurora_v_kosmose>Oh. That's unfortunate.
<nckx>Don't ruin my mental image of spinning up remote highmem VMs through API calls performed by btrfs check.
<nckx>It's just so on-brand…
<Aurora_v_kosmose>I doubt btrfs will implement that... but we could.
<nckx>April's always such a long way away.
<Aurora_v_kosmose>For the record it's a terrible, if amusing, idea.
<Noisytoot>It's enough to run ZNC and UnrealIRCd (unless someone on pissnet links and squits thousands of servers within one second, causing 100% CPU usage, which happened once)
<Aurora_v_kosmose>I use mine as a VM server and NAS, so it has pretty hefty requirements.
*dstolfa has a 40TB ZFS array for storage, mostly virtualization-related storage
<Noisytoot>it has a 1TB hard drive, so I could create lots of swap
<Aurora_v_kosmose>I suppose if you don't mind a check taking a year, that's an option.
<nckx>The 0.000000040 zettabyte file system. Anyway, I've added --mode lowmem by default to check-btrfs-file-system…
<singpolyma>Is it possible to have a guix package that clones over ssh instead of http? I get `error: cannot run ssh: No such file or directory` even if I add openssh as a native input
<roptat>singpolyma, native-inputs are not available for the source
<roptat>I think you would need to create your own source method (like git-fetch and friends)
<singpolyma>hmm, ok
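[Editor's note: a sketch of the failing configuration being discussed. git-fetch runs in a fixed-output derivation whose environment contains git but not ssh, and a package's native-inputs are not visible to the source, hence the "cannot run ssh" error; as roptat says, a custom ssh-capable source method would be needed. The URL, commit, and hash below are placeholders.]

```scheme
;; This origin fails at download time: the git-fetch builder cannot
;; invoke ssh, regardless of the package's native-inputs.
(origin
  (method git-fetch)             ; would need a custom, ssh-capable method
  (uri (git-reference
        (url "ssh://git@example.com/private/repo.git")  ; placeholder URL
        (commit "0123abc")))                            ; placeholder commit
  (sha256 (base32 "…")))                                ; placeholder hash
```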
<Aurora_v_kosmose>nckx: I take it the check is more intended in Guix to only check and not repair?
<singpolyma>Hmm, and it seems guix archive is not able to take --with-source either
<nckx>Aurora_v_kosmose: It can do both, but repair must be explicitly requested and tends to engage caveat emptor mode.
<Aurora_v_kosmose>nckx: Alright. Just wanted to check given their docs explicitly mention --repair is incompatible with lowmem
<Aurora_v_kosmose>At least until someone implements/fixes it.
<nckx>Hence the if above, which is basically what the final patch looks like.
<nckx>But thanks for pointing it out. I'm familiar with btrfs only from past usage.
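[Editor's note: an illustrative sketch of the fix nckx describes. btrfs check's --repair and lowmem mode are mutually exclusive, so the idea is to pass --repair only when a repair was explicitly requested and default to the low-memory, read-only check otherwise. The procedure name and arguments are simplified, not the actual Guix code.]

```scheme
;; Run "btrfs check" on DEVICE.  When REPAIR? is true, attempt a
;; repair (caveat emptor); otherwise do a read-only check in
;; low-memory mode to avoid the OOM Noisytoot hit.
(define (check-btrfs-file-system device repair?)
  (status:exit-val
   (system* "btrfs" "check"
            (if repair? "--repair" "--mode=lowmem")
            device)))
```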
<Aurora_v_kosmose>I'm slowly moving stuff over to it now that it has implemented features I was waiting for. My ZFS storage is basically waiting for it to include a native caching layer before porting time.
<Aurora_v_kosmose>Needed because SMR is likely to be the way for the next decade and ZFS hasn't even started implementing zoned-storage support.
<Aurora_v_kosmose>That or they're going cathedral style and the implementation work isn't publicly visible yet.
*Aurora_v_kosmose doesn't actually know how ZFS is developed.
<dstolfa>Aurora_v_kosmose: a scrub on ZFS doesn't take nearly as long as mdraid does
<dstolfa>it knows about block-level things and is CoW, so it's infinitely faster and better in many ways
<Aurora_v_kosmose>dstolfa: I meant btrfs raid. I don't trust blockwise raid.
<nckx>Pity, I thought SMR had peaked a while ago.
<dstolfa>and ZFS is developed as part of OpenZFS, which is essentially the base for ZFS on linux, freebsd, illumos, windows and macOS
<Aurora_v_kosmose>nckx: Dumb dm-smr might have, at least I hope so.
<Aurora_v_kosmose>Because not even all dm-smr implements TRIM, so you've got drives that basically die once you've written their full size once, unless there's some way to trick the controller into garbage collecting anyway.
<dstolfa>Aurora_v_kosmose: if you want to browse or whatever, it's here:
*nckx builds a RAID out of CD-R drives for similar levels of read/write performance for a fraction of the price; profits???
*dstolfa quickly grabs all the flash drives and puts ceph on them
<nckx>Noo, you'll use up all the ceph.
<Aurora_v_kosmose>Heh. Not a huge fan of Ceph. It has pretty absurd requirements...
<dstolfa>Aurora_v_kosmose: i agree with you, but it's the easiest thing to deploy on EL-base :P
<nckx>What's EL-base?
<dstolfa>nckx: enterprise linux, all those weird redhat things
<nckx>I didn't know that was a ‘thing’ as such, let alone slang.
<nckx>I mean, different world. Yes, I was aware of the existence of ‘the Red Hat’, thank you.
<dstolfa>nckx: it's usually used to talk about rhel, alma, rocky and sometimes centos (since stream is... really weird)
<nckx>Wow, I'd legitimately never heard of two of those. I… don't know what that says about me.
<nckx>‘Happy’, probably.
<nckx>OK, but they're both ‘just’ binary compatible rhelalikes.
<nckx>Those things do come & go.
<dstolfa>yeah, pretty much. they're there to replace CentOS
<dstolfa>since redhat decided to uh, "repurpose" it
<attila_lendvai>srfi-43 is not available in the builder? or am i doing something wrong?
<attila_lendvai>i'm trying to use it in guix/build/union.scm
<nckx>attila_lendvai: You need to add it to #:modules, see (gnu packages …) for many examples with other srfis.
<nckx>Ah, not a package.
<attila_lendvai>nckx, i get "no code for module (srfi srfi-43)"
<attila_lendvai>nckx, is that expected? where can i read about the limitations?
<attila_lendvai>i'm already at a test of a union-build collision resolver that picks the alphanumerically last entry
<attila_lendvai>it works in the repl, but when i run make check TESTS="tests/union.scm" it fails
*attila_lendvai is staring at the other tests, and realizes that it's a bit more complicated
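[Editor's note: a sketch of nckx's #:modules suggestion as it would appear in a package's arguments; the module list is illustrative. Modules listed under #:modules are made available on the build side, which is where an extra SRFI must be declared for build phases to use it.]

```scheme
(arguments
 `(#:modules ((guix build gnu-build-system)
              (guix build utils)
              (srfi srfi-43))))
```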
<jonsger>dstolfa: you forget oracle linux :)
<sneek>Welcome back jonsger, you have 1 message!
<sneek>jonsger, nckx says: XFS ready for bare-metal testing on master :)
<dstolfa>jonsger: ah yes, i did
<dstolfa>it's actually a viable EL
<jonsger>in its os-release it's shortened as "el"
<jonsger>nckx: XFS is up and running. I'm on core-updates-frozen plus your XFS patches :)
<nckx>Perfect. Thanks for reporting!
<nckx>attila_lendvai: Something tells me you're not talking about adding srfi-43 to the define-module at the top?
<attila_lendvai>nckx, i want to use srfi-43 in guix/build/union.scm. i have added it as a #:use-module (srfi srfi-43), and all is well: in the repl it works, and my code works. then i run the tests, and i get this error.
<nckx>Well, I have to go, but I will probably be back.
<attila_lendvai>nckx, if i remove the use-module, then i don't get that error anymore
*attila_lendvai does a clean build
<attila_lendvai>nckx, if i do something (probably make clean), then the test does more work, including the downloading of stuff. it prints: unpacking bootstrap Guile to '/path/guix/guix/test-tmp/store/qky0jf68rr7pnsvmhj0ay42rzh4qk6r9-guile-bootstrap-2.0', and a long list afterwards that does *not* contain srfi-43.go
<attila_lendvai>nckx, and i have found a comment: "Use the bootstrap Guile when running tests, so we don't end up building everything in the temporary test store."
<attila_lendvai> maybe this is only an issue with the tests
<nckx>attila_lendvai: Interesting, I didn't know (or forgot :) that the tests use the limited bootstrap Guile. The solution is simply not to use srfi-43 there if at all possible.
<attila_lendvai>nckx, "there" meaning the tests, or union.scm?
<nckx>The tests, I think. I don't think the union.scm code is actually broken, right?
*nckx needs to → zzz; good night all.
<attila_lendvai>FTR, i've submitted the patch and asked for advice: