IRC channel logs



<Aurora_v_kosmose>mala: If it logs to systemd, there's systemd-journal-remote
<civodul>mala: did you try "guix offload test"?
<civodul>teddd: "guix size $(readlink -f ~/.guix-profile)" for instance
<nckx>‘It is necessary to register on freenode and to log in everytime you connect to show it is you’ was never true. Oops.
<nckx>Should I update the subtitles/transcript as well, despite lack of matching audio?
<karrq>anybody using guix on foreign distro with arch? recently the aur package became super hard to install... built manually in /tmp, had to disable check(), but now when I guix pull I get mkdir permission denied... the guix-daemon service (systemd) is enabled and started...
<Aurora_v_kosmose>Sounds like it's doing way more than it needs to be doing.
<nckx>Hi karrq. mkdir what, exactly?
<nckx>Yes the AUR package is not in great shape:
<karrq>nckx: unfortunately it doesn't specify what :(
<ngz>I have a question about texlive packaging. I want to package kpfonts, so I'm contemplating <>. But how do I know what files to copy, and where? Is there some informal guide to get started?
<karrq>Updating channel 'guix' from Git repository at ''...
<karrq>guix pull: error: mkdir: Permission denied
<nckx>karrq: Yow :(
<karrq>I just pasted the output here since it's very short anyways
<nckx>Did you pull as root by accident? I'd start with chowning (or deleting) ~/.cache/guix
<nckx>karrq: Sure, 2-3(max) lines are fine.
<nckx>To be clear, I don't use Arch, I was just expecting a better error from Guix ☺
<karrq>nckx: I don't even have a ~/.cache/guix, this is literally the first pull after install. I even tried rebooting or restarting the service
<nckx>I guess you could strace guix and look for EPERM.
<mroh>karrq: try the AUR package "guix-installer". Seems to work much better for me.
<nckx>So we can fix that useless error.
<nckx>Oh, there are multiple competing packages? Great…
<jonsger>any idea how to get static-networking to be more verbose: netlink-response-code: errno: 17 is all I get :(
<karrq>mroh: i tried that as well but I had another error, and also I didn't like that it doesn't track or provide any way to clean up guix... I mean not that I'd uninstall guix much anyways... but there's some extra steps with the script anyways
<nckx>(strerror 17) → "File exists" ; duplicate $something?
<karrq>nckx: oh yeah it's trying to access ~/.cache/guix with 0777 and getting access denied...
<karrq>mkdir("/home/karrq/.cache/guix", 0777) = -1 EACCES (Permission denied)
<nckx>How does that work?
<jonsger>maybe %base-services does define static-networking itself, so I guess I need to set the config via modify-services %base-services?
<nckx>karrq: Do you have restrictive permissions on ~/.cache?
<nckx>jonsger: %base-services provides loopback.
<karrq>nckx: oh oof, it seems it's owned by root now. yikes
<karrq>I don't even know how that happened! thanks for helping me debug the issue
<nckx>Happy to.
<jonsger>nckx: but I don't define loopback in my network config
<jonsger>the strange thing is that e.g. the /etc/resolv.conf is correctly set, like I defined it
<nckx>Then I don't know why you get an error from netlink (which makes me think it's not a Guix-level ‘duplicate service’).
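[A sketch of what jonsger describes, assuming the guile-netlink-based static-networking API; field names and values here are illustrative and worth checking against the manual. Since %base-services already carries a static-networking instance for loopback, an extra one is added alongside it rather than by redefining anything:]

```scheme
(operating-system
  ;; …
  (services
   (cons (service static-networking-service-type
                  (list (static-networking
                         (addresses (list (network-address
                                           (device "eth0")
                                           (value "192.0.2.10/24"))))
                         (routes (list (network-route
                                        (destination "default")
                                        (gateway "192.0.2.1"))))
                         (name-servers '("192.0.2.53")))))
         %base-services)))
```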
<karrq>guix pull: error: getting attributes of path `/gnu/store/01b4w3m6mp55y531kyi1g8shh722kwqm-gcc-7.5.0-lib': No such file or directory
<karrq>what could this be caused by?
<karrq>also happens if I try `guix install gcc-toolchain@7.5.0`
<asdf-uiop>nckx: I'd rather see the correct information in the transcript/subs. Maybe with a comment that the audio does not match?
<asdf-uiop>Is the person who recorded the audio track known/reachable?
<nckx>I think it was Paul Garlick…
<asdf-uiop>Since I'm not a native speaker I won't suggest I re-record it if he isn't available, but I could ask a friend if that would help.
<asdf-uiop>And: would it make sense to create e.g. with a redirection to to future-proof the documentation or would that be overkill?
<nckx>It would be very nice not to re-record the entire thing.
<nckx>Maybe rewrite the script not to mention the network at all, although I think that's overkill.
<lfam>KarlJoad, rekado: My opinion about depending on nss-certs is that maybe it's okay for services, but it's not okay for packages. Because certificates have expiration dates, we can't let packages depend on them directly, or the packages themselves would see their functionality "expire"
<lfam>Instead, we have to set up these dependencies dynamically
<lfam>Does that make sense?
<lfam>It's a case where the functional packaging model is not useful
<nckx>I missed the original question but agree with lfam. These are facts about the outside world (similar to tzdata); functionally modelling them is a bug, not a feature.
<lfam>It's a shame we missed the boat with tzdata
<lfam>It's hard to remove these dependencies once they are in place
<lfam>It makes me wish for a package property like disallowed-references, but that you would set in the package that is not allowed to be referred to
<lfam>I noticed that there are a huge number of uses of xorg-server-for-tests that don't prohibit keeping a reference to it :(
<lfam>Ugh, even nss-certs has too many dependents now
<lfam>I wonder why
<nckx>It does feel a bit upside-down that it's the abuser's responsibility.
<lfam>Yeah, it's not scalable
<ft>Has something about channels changed? I have this private channel to try out things and with it in the configuration, for a couple of days now I get "guix upgrade: error: integer expected from stream" — if I remove it, things work.
<ft>The channel repository is accessible, and hasn't changed for a bit.
<KarlJoad>lfam: This is a point of confusion for me due to naming, when you say "maybe it's okay for services", do you mean daemon services or service-types (which bundle packaging and configuration together)?
<ft>The error message seems to originate from the guix-daemon. But the message is not very helpful.
<nckx>No, but that error message is out of date, because it now includes the offending line <>. Hopefully that will eventually lead to a fix. Are you sure your daemon is up to date?
<nckx>KarlJoad: Guix services.
<nckx>A daemon is not a service.
<ft>/proc/845/exe --version → guix-daemon (GNU Guix) 1.3.0-4.4985a42
<KarlJoad>nckx: It is not, but shepherd calls its tasks "services", and shepherd usually manages daemons and long-running tasks. It might just be a me being confused about nomenclature thing.
<nckx>I find the Shepherd confusing.
<nckx>ft: June 2021?
<jonsger>the problem is that the gateway isn't covered by the subnet mask. So it's required to use something called "pointopoint", no idea how to do that with the "ip" command...
<KarlJoad>I personally feel that "service" is a bit of an overloaded term in the Guix environment because there are Guix services, Shepherd services, and probably another service I am forgetting.
<Kolev>How's that login bug going?
<nckx>‘Service’ in Guix does not imply a long-running process (maybe even ‘a process’) at all. It is a very… idiosyncratic term, even though it's not wrong.
<KarlJoad>It is just a nomenclature thing. A Guix service is roughly equivalent to a NixOS module. I just need to wrap my head around all the different uses of "service".
<nckx>Kolev: I'm now locked out of a box too, so: swimmingly. Fixing it, not so much, possibly because it locks one out of boxen.
<nckx>KarlJoad: FWIW ‘module’ isn't better so at least we're in good company…? Yaay.
<nckx>(Module is worse.)
<Kolev>nckx: Yay! You got the bug! :p
<Kolev>At least it's not COVID
<ft>nckx: Hm, indeed (May judging by "git show 4985a42" in guix.git) — Maybe I should have updated that more regularly. This is a foreign guix system. Updating now.
<nckx>I might be able to hook up a screen to the thing tomorrow but possibly also not.
<KarlJoad>nckx: I really don't know which is better, but I will get around to figuring it out.
<nckx>ft: Don't feel bad, I suspect it's very common, since it's easy to forget to pull as root and unlike Guix System it doesn't give you other goodies like a new kernel/services.
<lfam>KarlJoad: The "maybe" part indicates that I just don't have an opinion on that. Basically, Guix code should not be bound to certificates, which expire
<lfam>Old Guix services and packages should be able to be used with new certificates
<nckx>And the guix-daemon is pretty… ‘finished’, feature wise, if only because nobody wants to add features to the C++ thing.
<aeka>wasn't guix-daemon getting a rewrite?
<lfam>Does that make sense?
<aeka>C++ -> scheme
<ft>nckx: Indeed. :)
<lfam>aeka: Yes, it proceeds piecemeal, when people choose to work on it
<nckx>aeka: It didn't really go anywhere:
<nckx>lfam: …oh?
<lfam>Didn't some functionality get replaced?
<nckx>Maybe some very peripheral stuff (but, granted: that's something).
<lfam>Like, I think that (guix register) was part of that effort
<KarlJoad>So this comes down to someone using `time-machine` to get an older version of a Guix service, but using newer certs, which have not been invalidated yet. Ok.
<lfam>KarlJoad: I mean, it would require some doing
<lfam>But it should be possible, and changing nss-certs should not have to cause packages to be recompiled
<jonsger>ah "ip" calls it "peer"
<KarlJoad>It is not changing nss-certs, just changing the way the `cuirass-service-type` is defined to automatically depend on nss-certs. That way SSL-required connections happen automatically.
<lfam>Basically, software provided by Guix should look up its certificates at run-time in unversioned and well-known locations such as /etc/ssl/certs or via $SSL_CERT_DIR, etc
<mroh>Kolev: this glibc-dl-cache.patch segfault thing you found... could that be the cause of the login bug? (just hoping... ;)
<lfam>Hm, I don't know exactly how it would work KarlJoad, but it sounds like the wrong approach
<KarlJoad>Disregard what I said, I just thought through what I was about to explain.
<lfam>Please don't hesitate to keep going if you think we are mistaken!
<Kolev>mroh: *i* found?
<KarlJoad>If nss-certs were part of the `cuirass-service-type`, then whatever version of the service-type you pull from the substitutes would include _that_ particular version of the certs, which may be invalidated.
<mroh>Kolev: oh, sorry. Ivan Kozlov found...
<KarlJoad>However, if nss-certs were a propagated input, then couldn't a graft happen to update the certs without needing to change the definition of the service-type?
<lfam>I guess, but how would that be better than using the system-wide certificate store in /etc/ssl/certs?
<KarlJoad>I'm not sure. I personally expected `cuirass-service-type` to allow SSL connections by default. I am just pondering my way around the Guix system to see if there would be a way to support my assumed default behavior.
<lfam>What I would suggest is that Guix System should include nss-certs by default
<ft>nckx: Updating helped. Cheers!
<KarlJoad>Part of `%base-packages`? I agree. I don't really see a reason why they aren't included there to begin with. HTTP seems to be the fall-back, rather than the mainstay.
<lfam>When Guix was created, HTTPS / TLS was not considered something that was typical. Since that time, the world changed
<lfam>That's basically the reason
<KarlJoad>Fair enough. Is there any major reason why it could not be included? The default behavior of the Guix installer is `(append (list (specification->package "nss-certs")) %base-packages)` anyways.
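[Spelled out, the installer default KarlJoad quotes is just this in the operating-system declaration (a minimal sketch; the elided fields are whatever the rest of the config contains):]

```scheme
(use-modules (gnu))   ; %base-packages, specification->package

(operating-system
  ;; …
  (packages (append (list (specification->package "nss-certs"))
                    %base-packages)))
```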
<nckx>Major browsers are deprecating HTTP on a very swift schedule. Soon it will be rare.
<lfam>People have also talked about wanting to allow users to choose between certificate store providers, but they could still choose. And regardless, it's a monoculture anyways
<KarlJoad>Rather should not.
<nckx>KarlJoad: I don't think there is a good reason…
<lfam>Someone should suggest that we make this change for the upcoming release
<lfam>Not including a certificate store by default would be considered strange in 2022
<nckx>And there's nothing preventing the type of person who knows what roots are and wants to change them to do so, trivially by their standards.
<nckx>Another 1.4 feature settled 👍
<lfam>I'm not sure it's settled yet :)
<nckx>‘Now supports HTTPS’. ✨
<KarlJoad>Having _a_ cert-store is pretty much a requirement nowadays. But, being able to change the contents of that store might require more thinking.
<lfam>Root can always change the contents of /etc/ssl/certs
<lfam>As a last-ditch non-Guixy fallback
<nckx>I don't see what's so hard about replacing %base-packages you don't like.
<aeka>Agreed, nss-certs should be something in %base-packages!
*lfam pushes a webkitgtk update. This time, tested
<aeka>somebody, open a PR ;p
<KarlJoad>nckx: I am not really familiar enough with the Guix ecosystem to make a claim about how easy it would be to customize the properties of nss-certs. Making nss-certs part of `%base-packages` is pretty reasonable.
<jonsger>roptat: does guile-netlink support peer/pointopoint connections?
<lfam>I agree aeka! Somebody should do it
<aeka>should be fairly easy as Guix uses scheme for its core language
<lfam>Emphasis on "somebody" :)
<nckx>I'm not sure what you mean by ‘customise properties’ but if you don't want to substitute your own package from scratch, you can always inherit nss-certs and add your favourite root.
<aeka>you just filter over %base-packages
<lfam>We'll have to update our manual about this. We can put an example of customization in there
<aeka>lfam: ;^)
<mroh>lfam: yay, ty for testing and pushing!
<KarlJoad>nckx: I don't really know what I mean either. Changing nss-certs is not something I will go about doing too much in the near future.
<aeka>there might even be an SRFI that makes it as easy as (remove nss-certs %base-packages)
<aeka>IIRC that's a thing
<nckx>It's just a package.
<nckx>aeka: Yes.
<lfam>Well, it's just something that was raised as a concern in the past, KarlJoad. So we might as well try to help those people with their concerns
<aeka>ah, it was called delete
<nckx>It might be delete, but there is one.
*lfam eyes the packages depending on nss-certs
<nckx>The srfi-1 one. There's another delete that works completely differently.
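[The SRFI-1 route aeka and nckx land on looks roughly like this (an untested sketch; the variable names are placeholders):]

```scheme
(use-modules (srfi srfi-1)            ; delete, remove
             (gnu packages certs))    ; nss-certs

;; Drop nss-certs from %base-packages by object identity…
(define my-packages (delete nss-certs %base-packages))

;; …or match by name, which also catches inherited variants:
(define my-packages*
  (remove (lambda (p) (string=? (package-name p) "nss-certs"))
          %base-packages))
```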
<KarlJoad>lfam: Fair enough.
<nckx>It's a prank.
<aeka>the fun of fragmentation!
<lfam>"Linux is about choice"
*nckx deletes fun.
*nckx (define defun delete-fun)
<aeka>common lispers have all the defun
<aeka>I messed it up
<lfam>So, there are 3 packages that depend directly on nss-certs. ldns, icedtea-6, and pypy3
<nckx>Yes, but that itself was funny, so it worked aeka.
<lfam>For pypy3, they are apparently just for the test suite. But that package doesn't build anyways
<aeka>and probably hundreds that depends indirectly!
<lfam>For icedtea-6... I will cover my eyes
<lfam>Yes, most of the indirect dependents come thru Java / icedtea aeka
<lfam>I will look at ldns now
<lfam>Hopefully we don't have to resort to creating nss-certs-for-tests
<KarlJoad>Thanks for looking at this guys. It was just unexpected behavior on my end that prompted my question.
<lfam>These sorts of questions are extremely important
<lfam>The rest of us are in the terminal stages of Stockholm Syndrome. We need questions from new people
<lfam>So, who will send the email about adding nss-certs by default?
*lfam not it
*aeka touches nose
<aeka>not it
<KarlJoad>I am slowly trying to make the switch. Moved my website to haunt. Looking at making my VPS instance a Cuirass & nginx instance to host the site.
<lfam>Nice, did you take a look at how we use haunt?
<KarlJoad>I did. A bit more advanced than what I need right now, but definitely something good to know about. First I need to migrate my current VPS instance to Guix System and host the site properly. Then I can look into structural improvements.
<lfam>Hopefully the process goes well. It can take more effort to get started with Guix System than with other distros, but we hope the effort is worth it
<KarlJoad>Also trying to switch away from NixOS at the same time, so I am investigating how far down the rabbit hole I can go in VMs before I destroy everything.
<lfam>And like I said, your questions are very valuable
<Kolev>KarlJoad: can you migrate my site to haunt?
<KarlJoad>Kolev: I could, but don't have time. I need to start working on my research project...
<lfam>I opened a ticket in the patch tracker about removing direct dependencies on nss-certs: <>
<nckx>FTR, since you don't mention it in the bug, I agree with your (properties '((disallow-references . #t))) or similar suggestion too, lfam.
<nckx>& thanks!
<lfam>Yeah, I didn't mention it. Not sure how to implement it
*nckx was thinking about that too…
<nckx>The thought ended with ‘should I just add a lint rule “for now”,’ which turned into ‘do people actually respect the linter?’
<lfam>Many people do!
<KE0VVT>I’m tired.
<nckx>You should be, it's 02:14.
<KE0VVT>19:14 here, but still.
<KE0VVT>the_tubular: Are you still locked out?
<the_tubular>Locked out of ...?
<KE0VVT>the_tubular: Are you not stricken by the bug that stops one from logging into the system?
<the_tubular>Ohh it went really downhill from there ...
<KE0VVT>the_tubular: How so?
<nckx>I don't think is updating.
<mala>Aurora_v_kosmose, no the remote build is a guix system
<nckx>lfam: You linked to bugs.gnu earlier. Was that just because it's faster, or is this known?
<lfam>I got tired of waiting for issues to update
<lfam>I think you're right
<the_tubular>KE0VVT, That was just the tip of the iceberg
<the_tubular>I decided to reboot, and couldn't even boot to a guix system
<the_tubular>Then I wasted a few days trying to install from different ISOs.
<nckx>mumi is using ~300% CPU (fluctuates).
<the_tubular>I gave up, couldn't install guix :(
<nckx>It's doing… stuff. I can never read Guile straces.
<drakonis>the_tubular: oh?
<the_tubular>I bought a new SSD, might give it a shot later. But I spent probably 50 hours just on install alone ...
<the_tubular>But yeah, my experience wasn't the greatest with the installer.
<drakonis>i typically avoid the installer because its fairly trivial to write up my own config now
<drakonis>but i get the problem
<the_tubular>Yeah, writing the config is one thing; the part I hate is disk partitioning
<the_tubular>I always hated that part
<the_tubular>I always fuck up my UEFI partition and get stuck on grub :(
<nckx>Rebuilding the videos repository on my laptop was probably not the best idea I've ever had. It's rendering PNG frames at several Hz…
<mroh>even worse than tune2fs -O large_dir /dev/noMoreGrubHere ? ;)
<Aurora_v_kosmose>mala: I meant more logging to syslog/stdout (pid1 usually redirects stdout of its child programs to whatever local logging there is), but point taken.
<Aurora_v_kosmose>I'm simply used to using Guix atop Debian
<nckx>mroh: No way I could've known that, and that affected others, not me, so good thing I have no empathy.
<nckx>Launching ffmpeg on an IvyBridge is the real tragedy.
<nckx>(It's not that bad. Berlin isn't actually faster, much to my surprise, but it has 96 not so fasts instead of 4.)
<mroh>I wonder how much we can gain for videoencoding after ludo's --tune patchset is merged. Some encoders have avx2 optimized code, no?
<PotentialUser-13>hello. i installed guix distribution and i did guix pull and sudo guix system reconfigure /etc/config.scm
<PotentialUser-13>but now it fails in the boot process.
<nckx>mroh: Good question, now I'll pay attention!
<PotentialUser-13>i am using thinkpad x200 so i thought there won't be any problem.
<nckx>PotentialUser-13: What's the error (if any)?
<PotentialUser-13>modemmanager complains about unsupported plugin.
<nckx>And this breaks boot?
<PotentialUser-13>and the boot log says that the freedesktop service couldn't be started.
<PotentialUser-13>i'm actually reinstalling guix from latest image so i can't remember the exact output.
<PotentialUser-13>but i guess there won't be any difference since they both will instantiate the same generation.
<nckx>Aie. Could it be <>?
<mroh>I seriously hate this error msg...
<nckx>It seems to target X200 owners.
<nckx>And we're very sorry, but AFAIK nobody's found a fix yet.
<PotentialUser-13>nckx: yeah that was it. :(
<PotentialUser-13>so. this serious bug hasn't been fixed for like one month.
<nckx>Possible, I didn't check the date.
<KarlJoad>Just so I am not being stupid, when building personally-packaged software, `guix build` is what I want, right?
<mroh>PotentialUser-13: you get a modemmanager error _and_ "desktop service not found"?
<nckx>KarlJoad: Probably! It will be a bit more verbose than, say, guix install, and will print the store file name on success.
<PotentialUser-13>what are the x200 users doing? are they just not updating their systems?
<nckx>I'm just guessing here, but the modemmanager ‘error’ might well be one of those many harmless errors that litter the average boot.
<KarlJoad>nckx: I am just looking for something that will build a derivation from a definition, like `nix-build`. Getting the store file name might be trouble later on, but will be handled later.
<nckx>*average Freedesktop boot.
<nckx>KarlJoad: If you only want the .drv file name, guix build -d will give it to you.
<nckx>Of course that doesn't ‘really’ build anything of substance.
<nckx>I don't understand the ‘trouble later on’ bit.
*nckx wonders if raghavgururajan has updated their X200.
<KarlJoad>I am going to write a definition for building my site with Haunt. I will need a way to access the store-path later on for nginx to host the new site.
<nckx>‘Build a derivation’ is ambiguous. Do you mean ‘build a package into a .drv’ or ‘build a .drv’, as in compile and install binaries?
<nckx>I suspect the latter, which ‘guix build’ does.
<nckx>It will print the store file name at the end whether you want it or not 😛
<KarlJoad>From the definition, elaborate and build the derivation (.drv), then pass the derivation to the build daemon and build the software (configure, compile, copy to $out). I will need the $out path to give to nginx somehow.
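[One hedged way to wire that last step up: use the package object itself in the nginx service configuration, letting Guix lower it to its store path. This assumes nginx-server-configuration's `root` field accepts a file-like object such as a package, and `my-site` is a hypothetical name for the Haunt package:]

```scheme
(use-modules (gnu services web))

(service nginx-service-type
         (nginx-configuration
          (server-blocks
           (list (nginx-server-configuration
                  (server-name '("example.org"))
                  (root my-site))))))   ; my-site = the Haunt site package
```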
<mroh>PotentialUser-13: yes, we are all booting older generations. It's not only X200 users but also HDD users (it seems).
<nckx>Great, then I'll probably hit that when rebooting my reconfigured server.
<PotentialUser-13>mroh: so the problem, i guess, is that booting up is so slow with shepherd that it literally thinks there must be an error.
<nckx>mroh: Are HDDs *that* slow?
*mala wonders if i could commandeer my old X220 as a veeeery slow core-updates test machine for next time
<KE0VVT>mroh: Rollback did not help me.
<nckx>Which is weird.
<lfam>I also reproduced #52051 on my x200s. But, I don't usually use the software that is affected, so I'm able to stay current
<lfam>I've had several weird racey bugs with it over the years
<mroh>I really don't know. I/we spent hours trying to replicate this thing in a vm (w/ io throttling), but no luck, because booting a working/desktop machine all the time is no fun. Currently I'm thinking about activating lvmcache (again) as a workaround, or some --init=/bin/sh kernel boot param hacks to strace logind or so... idk
<nckx>mala: Somebody tried to reproduce it in QEMU with ridiculously low IOPS but it didn't trigger.
<apteryx>I have an X200 running on post-merge master fine (has SSD though)
<lfam>Yeah, I have an HDD in the x200s
<apteryx>everyone having issues with the elogind race are on HDDs
<lfam>It's just really slow overall
<PotentialUser-13>nckx: other distributions don't have boot times that long, actually. i don't know why guix is that slow. is it starting a lot of services, or is it starting normal services very slowly? i don't know which one it is.
<nckx>It's not slow here.
<nckx>5s or so? I haven't timed it TBH.
<nckx>(Not HDD!)
<apteryx>ssd, right?
<apteryx>yes, that's it
<the_tubular>Talking about FOSS hardware, anyone tried Guix on a Asus C201 ?
<KE0VVT>Sorry I don't have an SSD.
<apteryx>Guix is fast on solid state storage, but it's very slow to boot on HDDs. I think part of it is the stat storm
<KE0VVT>Anybody want to send me an SSD? :P
<nckx>KE0VVT: I don't have enough SSDs to go around all my machines, so looks like I'll be picking favourite children this week.
<apteryx>my hard drive sounds like it wants to die on each boot
<nckx>Sorry little Tommy, but you're no Sally.
<KE0VVT>Guix makes this computer sound like a jet engine all the time, TBH.
<nckx>apteryx: I'd expect it to be equal or better since the ld.cache merge, but I'm obviously wrong. Maybe it's somehow counterproductive? (Can't imagine how, just thinking aloud.)
<lfam>The Asus C201 is nice because it can be used with all free software, but it's going to be fairly slow. Maybe comparable to the x200?
<nckx>KE0VVT: Every idle cycle is waste and an affront to the gods.
<vagrantc>C201 is very, very, very slow...
<apteryx>nckx: ld.cache should help, but as Ludovic explained in the blog post, it doesn't fix every other stat storm (Python, Guile, etc).
<lfam>apteryx: I also suspect the fragmented storage layout
<lfam>As compared to a layout where everything is in /usr
<nckx>Who's starting Python at boot though.
<apteryx>yes, that's what the stat storm is about I believe
<apteryx>nckx: probably some services i use though, haven't checked
<lfam>The stat storm is like icing on the cake of the unusual storage layout
<vagrantc>the_tubular: it's been a while since i booted guix on the C201, but there aren't really armhf substitutes available and it is a bit slow to build on, and eventually started getting kernel panics with newer kernels.
<apteryx>nckx: actually, no for Python (no processes are using it)
<nckx>I get that it was just an example. Just made me wonder.
<the_tubular>Damn, that's sad to hear, I wanted to buy one and try guix on it
<PotentialUser-13>is using guix with an ssd going to "waste" my ssd faster than other distros?
<apteryx>it's going to use it well, not waste it ;-)
<vagrantc>the_tubular: guix was the first real distro i managed to get working on the C201 reliably, though ... heh.
<the_tubular>How did you do it ?
<Kolev>lfam: X200 is better than C201
<PotentialUser-13>apteryx: well, ssds gonna die one day eventually.  but i want it to live as long as possible. :)
<lfam>I see, thanks for the feedback Kolev and vagrantc
<nckx>PotentialUser-13: Only way is to measure, but I doubt Guix will waste writes. Identical files will be deduplicated. If you update packages more on Guix than you would on other distros, well, as apteryx says that's use, not waste.
<vagrantc>but yeah, any x86-based machine is going to almost certainly be faster than an ASUS C201
<the_tubular>I doubt that Kolev, are you sure about that ?
<the_tubular>Really ?
<nckx>If you want to optimise for SSD life use one of those read-only ‘image’ type things. OStree? I forget what they're called.
<lfam>Guix will definitely use your SSD faster than comparable use on something like Debian. However, I bought my SSDs so that I could use them. It's not a waste in any sense
<nckx>You'll pay in features, of course.
<vagrantc>the_tubular: i mean, unless you're talking about a computer 15+ years old, the C201 is pretty slow.
<lfam>The time I'll save over my lifetime by using SSDs could never be paid back
<PotentialUser-13>is there any way to see the health of the disk on gnu/linux?
<Kolev>An OS that wears out hardware faster. Sounds like Gentoo
<nckx>I definitely build more crap on Guix because Guix makes it so much easier to build crap. My crap (and hence I/O) output is no longer limited by my distro. This is a great problem. Bad Guix.
<lfam>PotentialUser-13: Yes, for disks you can use SMART and the smartctl program
<lfam>SSDs do support SMART too, but it's kind of a different beast
<nckx>You generally care about 1 number anyway.
<lfam>The metrics are just totally different
<PotentialUser-13>i don't need to download some crap from the hardware's manufacturer's website, do i?
<lfam>No, the CPU in the disk itself does that work
<lfam>There is an operating system running on every storage device and your computer talks to it
<lfam>Sometimes that operating system is even upgradable
<vagrantc>the_tubular: there's an example c201 configuration in ./gnu/system/examples/asus-c201.tmpl ... you could try cross-building it from another computer perhaps
<nckx>I thought I'd check my EVO 960 from 2017. 100% Guix System since. 99% writes remaining. I am really not concerned.
<Kolev>Should I get the Libreboot Gigabyte or the D16?
<lfam>nckx: Did you check with smartctl? Which attributes should I look at?
<the_tubular>But technically guix would use more I/O than a "standard" distro
<the_tubular>Doubt that means any RL improvement on a SSD though
<nckx>Well, I was looking at its VALUE for Total_LBAs_Written, but then I thought better of that and manually compared the raw value with the manufacturer's stated endurance. Which is 400TB. I've used ~65TB of that, assuming I did the maths right, in 4 years. Still not worried.
<nckx>the_tubular: I think it's easy to overestimate that, too. The only way to tell is measure :)
<lfam>Wow, I'm about halfway through this 1 TB SSD from 2016
<lfam>I do a lot more than Guix though
<lfam>155 terabytes written
<the_tubular>Guix gc on a cron job, you'll get this number higher :P
<lfam>I doubt it's to do with Guix. That partition is fairly small
<lfam>It's a little hard to believe...
<nckx>My 65TB includes what is probably a higher than average human amount of raw video.
<nckx>the_tubular: Only if you GC stuff you then need to rewrite? Otherwise there's no difference. Frequent (justified) GCs extend the life of your SSD.
<nckx>More free space = more options for the wear leveller = longer life.
<nckx>Or: if you run your SSD at <10% free space for years, don't blame Guix when it dies.
<lfam>When I bought this SSD, Sandisk was still and independent company
<lfam>Still an
<lfam>So now it's WD and Samsung? Does anyone else actually manufacture flash?
<lfam>Like, at scale?
<the_tubular>To be fair, I never understood how GC worked; every time I GCed, it downloads the same stuff on the next update
<samplet>I just found an old HDD with Guix already on it! I’m gonna put it in my X200 and see if I get a login error.
<nckx>As a DeVeLoPeR I set --gc-keep-{outputs,derivations}=yes, which might help.
<nckx>samplet: Upgrade it first.
<nckx>Would be a great test case to see if the X200 part even matters.
<samplet>nckx: It’s going as fast as it can! I can hear it grinding away.
<KarlJoad>What is the proper way for a Guix package to reference its local directory?
<nckx>Mmm, crunchy.
<nckx>KarlJoad: What is ‘its local directory’ exactly?
<nckx>There's (getcwd) if that's what you mean.
<KarlJoad>I am defining a `(package ...)` for building my website with `guix build -f guix.scm`. I need to specify a `source`, but the `git-fetch` procedure is failing due to SSH issues. I want the `guix.scm` file to use the directory the `guix.scm` file is sitting in as the source.
<nckx>I think (local-file FILE #:recursive? #t) is what you want here, but it's not something I've tried myself.
<nckx>It will copy all of FILE to the store, recursively, so it can be used during the build.
<nckx>Yeah, that's it. See (guix)G-Expressions for details.
<KarlJoad>That's it. That's what I was looking for. I am still used to Nix's ability to provide paths directly.
<KarlJoad>I will say, I _love_ how much documentation Guix has built into it. I can eventually find everything I want offline!
<nckx>I find that very valuable as well.
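[The guix.scm pattern nckx describes, sketched out (untested; `my-site` is a hypothetical base package, and `current-filename` is the Guile form for the file being loaded):]

```scheme
;; guix.scm — build the surrounding checkout with `guix build -f guix.scm`
(use-modules (guix gexp) (guix packages))

(package
  (inherit my-site)
  ;; Copy the directory containing guix.scm to the store, recursively,
  ;; and use it as the source:
  (source (local-file (dirname (current-filename))
                      "site-checkout"
                      #:recursive? #t)))
```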
<the_tubular>Where do you set that nckx ?
<the_tubular>Every time you run guix gc ?
<nckx>I doubt that would work, it's a guix-daemon option. I set it in my (guix-configuration (extra-options …)).
<nckx>Along with --cores, --max-jobs, the like.
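[Since guix-service-type is part of %base-services, the usual way to set those daemon flags is via modify-services, roughly like this (sketch; option spellings as nckx gives them):]

```scheme
(modify-services %base-services
  (guix-service-type
   config => (guix-configuration
              (inherit config)
              (extra-options '("--gc-keep-outputs=yes"
                               "--gc-keep-derivations=yes")))))
```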
<nckx><Some Libera oper> bonus poke on the subtext, please do poke us about upcoming important releases or the likes of your software, then we can wallops that. Also we'd still be interested in a libera staff <> group contacts voice chat of sorts at some point, but that planning is still wip.
<nckx>I'll just leave that here so someone can perhaps think of it before we release, as I always forget.
<nckx>I meant to paste only the first sentence but HC is being a derp.
<nckx>lfam, bitcoins: Goood point.
<nckx>As a very side note, almost as soon as I sent the quote above I wondered ‘…but would we want that attention?’. Hm. Pity.
<lfam>I had the same thought. But, this is the new place
<lfam>Still something to consider
<nckx>I didn't mean outright malice as much as workload, and misunderstandings.
<nckx>But yes.
<nckx>I can mentally handle about one ‘what's this Nix fork then why doesn't it run Steam flatpaks ugh this sucks bye’ a week, max.
<lfam>We would definitely want to coordinate the release date such that many people could be in the channel for a couple days
<lfam>To answer questions and such
<lfam>But, that's a good idea regardless
<nckx>WTF, after rebuilding the Guix video for what is surely the 4th time, it suddenly has no sound.
<KarlJoad>Maybe I am just being stupid, but why might I have "install" be an unbound variable in a package definition when I am intentionally replacing `'install` as an argument? Using the GNU build system.
<lfam>That sounds unexpected
<nckx>lfam: Turned out to be MPV's ‘fault’, --no-resume-playback fixed it.
<nckx>Sounds like insufficient quoting.
<nckx>KarlJoad: Share the code if you're lost.
<lfam>If you can share your package definition at <> we can look at it
<nckx>Join us now ♪
<nckx>KarlJoad: ' → `
<nckx>Wow, that was premature, sorry.
<KarlJoad>I can't use graphical systems on the VM, because the mouse is not working for some reason.
<nckx>What I meant to say was: add 'something after 'install.
<nckx>No that's also just wrong. I need to stop typing.
*lfam tries it
<nckx>I'm sorry, I'm extremely tired but can't go to bed.
<KarlJoad>Oh wait... I think it is complaining about the install procedure, NOT the install phase symbol.
<lfam>I was about to say that :)
<lfam>copy-recursively may be what you are looking for
<nckx>KarlJoad: I don't understand how you can get an unbound variable error here?
<KarlJoad>Now I need to figure out why `#$output` is raising an issue with `ungexp` being unbound.
<lfam>KarlJoad: Check out this commit as an example: <>
<KarlJoad>nckx: The `install` procedure does not exist. I wanted the `install-file` procedure. I was incorrectly attributing the unbound variable to the install phase symbol rather than the install-file procedure.
<lfam>You need to turn modify-phases into a gexp
<KarlJoad>Herp derp...
<lfam>That means "quoting" it in the gexp way with #~
<lfam>" G-expressions, or gexps, consist essentially of three syntactic forms: gexp, ungexp, and ungexp-splicing (or simply: #~, #$, and #$@), which are comparable to quasiquote, unquote, and unquote-splicing, respectively"
<lfam>I need to print this out
<lfam>Stick it on my computer
<KarlJoad>Yeah. Still getting used to that syntactic sugar. It always confuses me.
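[Editor's note: the correspondence lfam quotes from the manual, laid out side by side as a sketch.]

```scheme
(use-modules (guix gexp))

;;  plain Scheme staging          gexp staging
;;  `(f ,x)                       #~(f #$x)         ; quasiquote ↔ gexp
;;  ,x   (unquote)                #$x   (ungexp)
;;  ,@xs (unquote-splicing)       #$@xs (ungexp-splicing)
;;
;; Unlike quasiquote, ungexp also lowers packages and derivations to
;; their store file names, and #$output names the build's output path:
(define builder
  #~(begin
      (mkdir #$output)
      (call-with-output-file (string-append #$output "/greeting")
        (lambda (port) (display "hello" port)))))
```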
<samplet>The X200 has finished pulling (it took two tries...) and now it is reconfiguring.
<lfam>Most of us are getting used to it
<samplet>The part that always gets me is which symbol is for inputs and which symbol is for native inputs.
<KarlJoad>Now I am on the hunt why `gexp` is an unbound variable, when `(guix gexp)` is present as a module...
<lfam>Can you share what you have now, KarlJoad?
<KarlJoad>My bet is on the quasiquote.
<lfam>Yes. I would try (arguments (list #:phases ...))
<KarlJoad>lfam: Unknown #guix object: #\<
<KarlJoad>Not guix, just # object.
<lfam>Now I am stumped
<nckx>That means a literal #<some object> made it into the .drv.
<KarlJoad>It is the `#~` causing the issue first. Switching back to `#:phases '()` and commenting out the `(replace 'install` makes everything "work" again.
<nckx>For similar reason I still haven't got substitute-keyword-arguments to work on a gexp.
<KarlJoad>Meaning the build happens correctly, but the default install phase is run.
<nckx>I've run into the same problem KarlJoad.
<KarlJoad>Good to know I am not alone.
<nckx>That said I can't reproduce it with your last snippet.
<KarlJoad>Hm... When I run `guix build -d -f guix.scm`, and grep the returned derivation, there is no `#` symbol anywhere.
<nckx>Sorry, not the .drv, the builder: grep -om1 '"[^"]*-builder"' $(guix build -d -f guix.scm)
<nckx>It's going to be a horrible single line of code, and you probably can't pipe it to pretty-print because of the #<, fun.
<KarlJoad>Switching from `#~` and `#$` to `gexp` and `ungexp` does nothing.
<KarlJoad>That grep returns 2 builders, both with the same path.
<kozo[m]>Hey, I have a guix shell command with a number of arguments in a .sh for testing some container. If I saved it as a .scm, is there an easy way to run all the code in the file without having to type it all into the cli?
<nckx>KarlJoad: The # in the error message is not related to the # in #~ or #$, it's coincidence.
<nckx>It's the # in #<guile's default object notation>.
<nckx>I expect a #<gexp …> literal in the builder somehow.
<samplet>Aha! My X200 does reproduce the bug.
<nckx>I thought the -m1 would limit it to 1 result (‘line’) but TIL grep defines ‘line’ here as ‘input line’ — and everything's on one line, so why not return 2 identical strings sure.
<KarlJoad>Opening the specified file in Emacs actually made it nice. Nice. There is `#:phases #<gexp` with the entire modify phases inside of it.
<nckx>This is beyond my ken.
<the_tubular>KarlJoad you got some new lines with all those parentheses ?
<nckx>Oh god I hope it's not some side-effect of using -f, because that's also how I triggered it, but I don't think it's that easy.
<KarlJoad>Unless I opened the wrong file, I did in Emacs.
<KarlJoad>nckx: I am checking if other packages that use `#~` produce a similar builder.
<samplet>Hm… I’ve reproduced the bug, but now I guess I can’t log in or do anything. :S
<nckx>Behold: the rub.
<nckx>KarlJoad: I expect the answer to be no.
<KarlJoad>Given that none of the package definitions in use by my Guix install have gexp, I am starting to think my version of Guix is too old?
<KarlJoad>Perhaps a `guix pull`?
<samplet>I guess I’ll put the drive in a different old laptop and see what happens.
<nckx>KarlJoad: Uh, yes please.
<KarlJoad>Just a few commits behind... (28,875 new commits)
<samplet>That’s even older than the random old hard drive I just updated! :)
<KarlJoad>And that was from an ISO that I installed from yesterday.
<KarlJoad>The ISO just came from the website.
<samplet>I guess that’s the “stable” download instead of the “latest” one?
<KarlJoad>That would be correct.
<nckx>It is very old.
<nckx>It is not very stable.
<nckx>But it is very old.
<nckx>Which is almost the same.
<KarlJoad>I could tell. `guix deploy` was not in there at first either.
<samplet>Yeah.... “Stable” is maybe not the right word for it.
<lfam>Usually releases would happen more often but something came up
<nckx>There is discussion about this on the ML, although I don't know if it includes the ISO or only the manual.
<nckx>But defaulting to ‘‘‘stable’’’ isn't actually that great of an idea after all.
<samplet>Putting the disk in a different computer did not help.
<KarlJoad>Ok. I was going to say, coming from NixOS, they build their ISO quite frequently.
<nckx>So do we, we just call it ‘latest’ and order it under the very old one.
<KarlJoad>Ahhhh... Ok. The site does say the latest images are development snapshots, so I figured the stable ones were going to be the ones to use for servers and the like.
<KarlJoad>Just finished the pull and re-opening the file. Building it worked immediately.
<nckx>Your Guix probably lacked even the ‘Gexp build systems’ ☺
<KarlJoad>Exactly my thoughts too.
<nckx>This is what caused that error for me:
<nckx>I think the cause is simple: substitute-keyword-arguments chokes on this newfangled gexp business.
<nckx>But I don't know the obvious solution if there is one.
<samplet>nckx: Here’s a wild guess. What about using gexps for the second ‘modify-phases’?
<nckx>I'll continue gexping the whole family, that will probably make it go away (because there will be a gexp, to, well, gexp).
<nckx>I think we mean the same thing!
<nckx>It feels unsatisfying though?
<nckx>Or is that just me…
<samplet>I guess. Maybe? I’m still processing whether it makes sense or not.
<KarlJoad>So it builds, but the `(getcwd)` for the local file causes non-reproducibility based on the location of the shell when the build is started.
<lfam>Dunno sorry
<nckx>No problem.
<samplet>It does make sense, but having to know whether you’re unquoting a sexp or a gexp is not great.
<nckx>I guess it ‘doesn't matter’, it just bugs me that I don't grok exactly what's going on.
<nckx>Of course, maybe it wasn't supposed to work. That would explain it 😛
<nckx>It just means we'll have to be careful to rebuild all dependents even if the result would be the same.
<samplet>Do you think the ‘list’ gexp compiler is involved, or does the fact that the staged code is a list cause it to skip the gexp machinery entirely?
<samplet>(I need to brush up on the gexp build system code....)
<samplet>In other news, driving that old hard disk from QEMU does not trigger the bug.
<nckx>I ‘think’ the former, but I didn't investigate it. I didn't follow or review the gexp patches as they developed, and now the future has landed in one big whomp, and I have nobody but myself to blame for my lagging behind.
<nckx>I *enjoy* writing gexps though. I didn't expect that. They are so much more intuitive & flow-friendly.
<samplet>There’s something magical about it. It really matches the Guix problem domain.
<KarlJoad>Just to confirm, Guix has no built-in way for a package file to refer to the directory it is in?
<samplet>I think ‘local-file’ has some magic there. I also think the manual explains it better than I could. Lemme check.
<KarlJoad>I just want to be careful, because I was using `(local-file (getcwd) ...)`, and the `(getcwd)` causes non-reproducibility because it depends on the shell's CWD.
<samplet>“If FILE is a literal string denoting a relative file name, it is looked up relative to the source file where it appears.”
<apteryx>I've started using '(arguments (list ' instead of '(arguments `(#:arg1 ...), and I realize it's weird that the #:configure-flags arguments needs to be double quoted
<apteryx>or (quote (list my args))
<apteryx>doubly quoted*
<samplet>KarlJoad: I take that to mean that ‘(local-file ".")’ is what you want.
<samplet>(Plus the ‘#:recursive?’ bit.)
<nckx>apteryx: Doubly?
<apteryx>doubly quoted ;-)
<apteryx>Dobby quoted?
*apteryx looks at the time
<KarlJoad>samplet: `(local-file "." #:recursive? #t)` returns "invalid name: ."
<nckx>(arguments (list #:confy-flags (list …)))
<apteryx>no, that fails
<apteryx>you need (arguments (list #:confy-flags '(list …)))
<apteryx>at least for the cmake-build-system
<nckx>KarlJoad: You need (local-file "." "my-sourcies" #:recursive? #t)
<nckx>apteryx: That's actually what I meant (arguments (list #:confy-flags #~(list …)))
<nckx>I forgot the gexp.
<nckx>I was quoting 0 times and meant to quote 1 time.
<nckx>Because I don't get ‘double’.
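[Editor's note: the distinction being worked out above, sketched. The configure flag is made up.]

```scheme
;; Old style: the whole arguments value is one quasiquoted sexp, so the
;; inner (list …) is already quoted along with everything else.
(arguments
 `(#:configure-flags (list "-DEXAMPLE=ON")))

;; New style: (list #:key value …) is ordinary evaluated code, so each
;; keyword value must itself be staged — with a quote, or preferably a gexp:
(arguments
 (list #:configure-flags #~(list "-DEXAMPLE=ON")))
```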
<KarlJoad>Why is the name mandatory when the file is "."?
<nckx>I guess Guix doesn't want to figure out (basename (getcwd)), although I'm failing to see a security issue or so.
<KarlJoad>The manual says nothing about that, at least to my untrained eye.
<nckx>KarlJoad: Well, first: did that actually work?
<samplet>Not to out myself as someone with way too many old laptops, but I’m testing old laptop number three now. :) It’s a bit newer than the other two.
<KarlJoad>Yes, it did.
<nckx>Then perhaps a bug or patch is in order? :)
<KarlJoad>When in different directories for different shells, the build command produces the same store output.
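[Editor's note: roughly what the working version looks like; the name string is illustrative.]

```scheme
;; A literal relative file name is resolved against the directory
;; containing guix.scm, not the shell's cwd — hence the identical store
;; outputs from different directories. The explicit second argument is
;; needed because Guix won't derive a store name from ".".
(source (local-file "." "website-source" #:recursive? #t))
```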
<nckx>I admit that it seemed ‘obvious’ to me but I can't actually point to any documentation. And /gnu/store/<hash>-. is not, I think, itself an invalid name. It's just… no. But this should indeed be documented as you say.
<nckx>Or Guix could look up the dirname of cwd if that's safe.
<nckx>I'm not confident it is because unix.
<KarlJoad>Ok. At least I am not crazy then.
<nckx>Oh thank god I can finally go to bed.
<KarlJoad>Ok. I'm off! Time for some sleep! Thanks for the help everyone!
<vagrantc> \o
<apteryx>nckx: ah, the gexp. I see. Gexp acts as a quote too, I tend to forget somehow.
<apteryx>you are right, there's no double quoting there
<the_tubular>Is there an emacs module for guix home ?
<opalvaults[m]>has anyone tried to use password store extensions (`pass`) and gotten around the need to use sudo make install?
<opalvaults[m]>i'm getting read-only filesystem :(
<opalvaults[m]>nvm, ignore me. the Makefile had a 'local' parameter so i just ran make local and that seemed to work
<bricewge>Hello Guix!
<bricewge>I'm working on adding test to the `lchown` patch
*mala[m] uploaded an audio file: (303KiB) <! >
<bricewge>But I can't manage to test it in /tmp because the permissions on that directory are drwxrwxrwt. Do you have an idea how to work around it?
<efraim>hello guix!
<g_bor>hello guix!
<pukkamustard>hello guix!
<g_bor>pukkamustard: hello
<attila_lendvai>Ludovic's public key has expired. or am i looking for an update at the wrong place? (
<g_bor>attila_lendvai: hello!
<g_bor>I don't know where we are uploading it nowadays.
<g_bor>you can check maybe also on
<attila_lendvai>g_bor, "This service is deprecated."
<g_bor>ok, let's wait for ludo
<g_bor>I think he will be around for the infra hackathon
<rekado_>attila_lendvai: have you checked Savannah?
*attila_lendvai has left a note with sneek
<g_bor>heya rekado_!
<g_bor>not seen for a while, but that is my fault :)
<dportnrj[m]><attila_lendvai> "Ludovic's public key has expired..." <- there was such problem too:... (full message at
<awb99>can someone recommend me some usb wifi adapters that work with linux libre?
<attila_lendvai>rekado_, i don't know what that is. i tried --keyserver but it just hangs
<rekado_>there you can download the key manually:
<rekado_>g_bor: oh hi!
<attila_lendvai>rekado_, that has the updated version, thank you!
<g_bor>I thought I'd join in for the infrastructure thing, maybe I can help out here and there
<g_bor>Do you know how this will go?
<rekado_>g_bor: I don’t know. I’ll also be out soon to get my booster.
<rekado_>g_bor: but we’ve had a few emails from Mathieu that describe some of our problems with the build farm.
<rekado_>I guess we’ll just hack at them.
<g_bor>ok, sounds nice
<g_bor>I will have to go sometime in the afternoon.
<rekado_>in the meantime I’m trying to “guix deploy” to two of the aarch64 machines; and I’m preparing a new x86_64 build node (currently applying firmware upgrades).
<g_bor>Good luck :)
<g_bor>all of the infra is now managed by guix deploy, or do we have something that is not migrated yet?
<mothacehe>hey guix!
<g_bor>mothacehe: hello
<pukkamustard>mothacehe: hello!
<rekado_>something’s wrong with mumi
<rekado_>since the fsf/gnu outage I’m missing messages on
<rekado_>I restarted the mumi worker and mumi; and for good measure the rsync process.
<rekado_> still only has messages from 4 days ago.
<rekado_>I checked the copy of debbugs data; that seems up-to-date now.
<rekado_>…and now mumi has caught up with the changes
<rekado_>all good
<rekado_>g_bor: all of the nodes that are hosted at the MDC are managed with guix deploy
<rekado_>we run “guix deploy -L modules berlin-nodes.scm” from the “hydra” directory of maintenance.git.
<rekado_>mothacehe: I’ve started the copying of /gnu/store/trash to /mnt_test/gnu/store/trash yesterday. Only 127G have been transferred so far.
<rekado_>it took 3min 33sec to count the data on the target drive.
<rekado_>(it would take hours to count on /gnu/store/trash directly)
<mothacehe>rekado_: OK. There are probably around 6TB in this directory
<mothacehe>how long did it take for 127G ?
<rekado_>I started this yesterday, so it’s been more than 10 hours.
<rekado_>no idea why the performance is so terribly degraded
<rekado_>any luck with defragmentation?
<mothacehe>rekado_: i need to haul my nets :)
<civodul>Hello Guix!
<sneek>civodul, you have 1 message!
<sneek>civodul, attila_lendvai says: your pgp key has expired, at least the one on, and i don't know where else to get it from.
<mothacehe>here is e2freefrag report:
<mothacehe>e4defrag is still running
<attila_lendvai>civodul, never mind, i found it on your savannah page
<g_bor>civodul: hello
*rekado_ reads
<AIM[m]>Can someone tell me why specification->package is needed in packages list?
<AIM[m]>I think I've seen configs without it
<AIM[m]>In config.scm
<rekado_>it’s not needed
<rekado_>but it’s convenient because it makes the config robust against changes in the location of package definitions.
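[Editor's note: a sketch of what that looks like in a config.scm; package names are illustrative.]

```scheme
(use-modules (gnu))

(operating-system
  ;; …other fields omitted…
  (packages
   ;; specification->package looks each name up at evaluation time, so the
   ;; config keeps working even if a package moves to a different module:
   (append (map specification->package
                '("bluez" "blueman"))
           %base-packages)))
```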
<AIM[m]>Can you give me the packages required to have bluetooth stuff in Xfce Guix SD?
<AIM[m]>list of packages*
<AIM[m]>Is it just bluez and blueman?
<rekado_>mothacehe: uhm, I just noticed that /gnu is mounted with stripe=320; does this even make sense for this external storage?
<phf-1>apteryx: Hi ! This is how to reproduce the unexpected behaviour we
<phf-1> discussed yesterday :
<phf-1>Any help would be greatly appreciated: apparently, Guix cannot install
<phf-1> a package from an archive without network access but should according
<phf-1> to the documentation.
<phf-1>Sorry for the text formatting, here it is again but nicely formatted.
<mothacehe>rekado_: according to, for raid 10: "the picture seems rather more complicated".
<phf-1>apteryx: Hi ! This is how to reproduce the unexpected behaviour we discussed yesterday :
<phf-1>Any help would be greatly appreciated: apparently, Guix cannot install a package from an archive without network access but should according to the documentation.
<AIM[m]>How do I enable the bluetooth service in guix?
<AIM[m]>I use Guix SD with Xfce
<AIM[m]>I've seen some issues being posted online that the guix config did not work as intended for bluetooth
<skn38>how to patch gnu/packages/bootstrap.scm **correctly**? (i need to add a custom target to glibc-dynamic-linker).
<skn38>if i copy this file into $GUI_PACKAGE_PATH/ and make changes there, then "guix package --target=<my-target> ..." is executed successfully.
<skn38>but at the same time, the output of each "guix ..." command begins with a warning:
<skn38>> guix build: warning: failed to load '(bootstrap)':
<skn38>> no code for module (bootstrap)
<skn38>> $GUI_PACKAGE_PATH/bootstrap.scm:26:0: warning: module name (gnu packages bootstrap) does not match file name 'bootstrap.scm'
<skn38>maybe i'm doing something wrong?
<mroh>AIM: add (bluetooth-service) to your system config. Users need to be in the lp group to access the D-Bus service, see
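[Editor's note: mroh's suggestion, sketched. The #:auto-enable? flag and the account fields are illustrative assumptions.]

```scheme
(use-modules (gnu) (gnu services desktop))

;; In the operating-system declaration:
(services
 (append (list (bluetooth-service #:auto-enable? #t))
         %desktop-services))

;; …and the user account needs the "lp" group to talk to bluetoothd
;; over D-Bus:
(user-account
 (name "alice")                     ; illustrative
 (group "users")
 (supplementary-groups '("wheel" "netdev" "audio" "video" "lp")))
```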
<mroh>jackhill: What's wrong with adding gst-plugin-xxx to leaf applications which use webkitgtk? Too many maybe?
<florhizome[m]>Morning guix!
<florhizome[m]>I have a question: when I enter guix shell -D to test some guix packages, it seems like my local package descriptions under GUIX_PACKAGE_PATH are still being loaded. i guess i have to unset that env var. Is there some other way to ensure a mostly pure environment?
<florhizome[m]>in this case, it's for a channel, so i don't have ./pre-inst-env around
<pukkamustard>florhizome[m]: you can use `guix shell --pure`. That will unset existing env variables.
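[Editor's note: pukkamustard's suggestion as a sketch; the guix.scm file name is illustrative.]

```shell
# --pure clears inherited environment variables, so stray settings such
# as GUIX_PACKAGE_PATH from the parent shell don't leak into the
# spawned environment:
guix shell --pure -D -f guix.scm

# For stronger isolation there is also --container (but then things
# like your git config are not visible inside):
guix shell --container -D -f guix.scm
```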
<Kolev>Are Go apps tricky to package?
<florhizome[m]>ok, I’ll try that. on the other hand, I would want my git config to carry over...
<florhizome[m]>well I can exit the env to send the mail
<florhizome[m]>Kolev: They are pretty much work, from what everyone says ;)
<Kolev>If it wasn't too hard, I'd package TMSU.
<rekado_>Kolev: I think I packaged tmsu already
<rekado_>mothacehe: my colleague suggests 1) switch to the second path (in case it’s the HBA port), and 2) reboot the storage array
<g_bor>Kolev: most of them are not too hard.
<phf-1>apteryx: Well, using the `hello' package, it does not seem to work either :
<Kolev>rekado_: Could not find it under T.
<g_bor>Usually they work out of the box, but there might be lots of dependencies
<Kolev>Oh... Mult. pages...
<g_bor>Kolev: go packages are namespaced, have a look at g
<rekado_>no, it’s called tmsu
<rekado_>I use “guix show tmsu”
<rekado_>certainly faster than clicking around :)
<Kolev>Are browser extensions packaged?
<jonsger>and faster then googling :)
<Kolev>I'm on mobile right now.
<rekado_>ublock-origin is.
<Kolev>rekado_: What namespace? Noy icecat-
<rekado_>other than that I can’t think of any other extensions that have been packaged.
<Kolev>rekado_: All I care about is that they CAN be. That's awesome! So tired of software creeping in from random places
<Kolev>Everything has a package manager these days. It's becoming unwieldy
<g_bor>Kolev: packaged as tmsu in file-system.scm
<Kolev>So if I wanted to, i could distribute my site's JS as an extension, and package it :D
<raghavgururajan>Hello Guix!
<ArneBab>Kolev: that would be pretty cool!
<mothacehe>rekado_: rebooting the storage array looks like a good idea.
<ArneBab>Kolev: it would be a good reason to finally create a Freenet browser extension
<raghavgururajan>sneek, later tell nckx: Did you mean device upgrade to something else or libreboot upgrade?
<mothacehe>but maybe we need to announce it first
<Kolev>Hm. It's "ublock-origin-chromium". Shouldn't it be chromium- like emacs-prelude?
<Kolev>ArneBab: freenet browser extension? I thought freenet died and was insecure
<g_bor>mothacehe: what is the impact? are we having a cache in front? how long does it take to reboot?
<Kolev>And what is a freenet browser extension?
<rekado_>g_bor: no idea
<ArneBab>Kolev: Freenet is alive and well, its friend-to-friend mode is secure (opennet has known weaknesses), and I am release-manager these days :-)
<rekado_>g_bor: I mean: how long does it take: no idea. We don’t have a cache in front, so this means downtime for
<ArneBab>Kolev: a freenet browser extension would be a freenet:// scheme handler that opens the sites with a secure launcher.
<ArneBab>making clickable freenet-links
<mothacehe>g_bor: the website and publish server will be down while rebooting
<rekado_>we’d shut down berlin, then reboot the storage array
<Kolev>Am I crazy for suggesting that site JS should be distributed as browser extensions?
<rekado_>…and hope that the storage array comes back up fine, and berlin boots well :)
<florhizome[m]>Would guix want to maintain all these extensions though?
<florhizome[m]>I mean it would be cool for stuff like tor or icecat if they could refer to Guix as a store
<ArneBab>Kolev: I like the idea — I think icecat is going into that direction.
<g_bor>rekado_: ok, that is clear
<ArneBab>Kolev: but it might not need to be a guix-exclusive project, rather project many distros can use
<Kolev>ArneBab: yes, just regular browser extensions.
<florhizome[m]>bit different question: does someone have a home service for dealing with their user.js?
<g_bor>from my experience if the array is in any good shape it should be back in like 20 minutes, and not more, if this is anything similar to the things I have dealt with
<ArneBab>Kolev: creating packages for distros for that would add reliability (just a browser extension wouldn’t be more secure than just shipping JS from your site)
<rekado_>g_bor: yeah, I just worry about it being in a bad shape, unexpectedly :)
<Kolev>ArneBab: the issue is that JS can be changed at any time. If installed, changes are tracked
<rekado_>also: I’ll be out in 2 hours
<rekado_>so if things go terribly wrong we’re looking at a long recovery
<g_bor>rekado_: if it is in a bad shape, then we really have nothing to do to avoid downtime in the next reboot, so it might be wise to do this controlled
<mothacehe>might be nice to set up a redundant website server first; turns out it is today's hackathon topic :)
<g_bor>can we migrate the workloads of this storage while we reboot it?
<rekado_>yes, let’s do that
<rekado_>setting up the website on bayfront or bordeaux sounds like a manageable task
<rekado_>g_bor: the berlin server is booted off of /gnu, which sits on the external storage
<g_bor>ok, then we can flip over dns and wait to propagate?
<rekado_>I’m trying to build aiscm, which need LLVM and clang. I added llvm-13 and clang-13, but the configure test fails with “undefined reference to `LLVMLinkInMCJIT'”
<attila_lendvai>(specification->package "glibc:debug") kills/exits my guix repl. is that expected?
<attila_lendvai>i mean, any missing package error kills it
<g_bor>I see berlin and bayfront in maintanence.git, but where is bordeaux managed from?
<rekado_>attila_lendvai: it’s surprising. Don’t know if that’s intentional. (guix ui) does some error handling and will exit with a prettier error message. But in the REPL I would have expected a backtrace and a way to recover.
<g_bor>Am I confusing something?
<rekado_>civodul has the answers for bordeaux, I think. I haven’t seen it in the maintenance repo.
<cbaines>g_bor, is mostly a domain, it currently points at bayfront
<futurile>morning guixers
<g_bor>cbaines: ok, noted
<rekado_>attila_lendvai: (gnu packages) uses “leave” with the “unknown package” message; “leave” reports the error and exits.
<rekado_>attila_lendvai: this doesn’t say whether that’s desirable for the REPL, but it explains why it happens.
<g_bor>so basically the website backup should be configured in bayfront.scm
<rekado_>g_bor: correct
<rekado_>another option would be to host it on one of the build nodes and give that the public IP.
<mothacehe>yes the static-web-site services in berlin.scm needs to be copied to bayfront.scm
<rekado_>but i think for redundancy we’d like to have it actually at a different site
<rekado_>(e.g. in case of network outages, etc)
<cbaines>I haven't been paying full attention, but I would think that the TLS cert is the trickiest part about getting two machines to serve (assuming it's a requirement to make it available over HTTPS)
<mothacehe>in fact we should probably factorize the website stuff in a (website) module and use it both in berlin.scm and bayfront.scm
<rekado_>cbaines: IIRC it was TLS cert *renewal*
<rekado_>not merely the cert itself
<cbaines>well, I guess you could copy it from berlin initially
<rekado_>we just wouldn’t be able to renew things from the fallback site
<rekado_>mothacehe: we have two firmware updates for the disks in the storage array.
<rekado_>the management server tells me that these are “urgent” (but it’s known to be overly dramatic)
<rekado_>we’ve got two different makes of disk in that array (because some disks had failed and needed replacing); for the replaced disks we have “recommended” fw updates
<rekado_>new firmware requires at least a disk restart, so I’d apply these when the storage is rebooted anyway.
<rekado_>shouldn’t extend the downtime too much
<g_bor>rekado_: I agree, this should be done
<g_bor>Also, how is certbot currently configured?
<g_bor>What is the renewal mechanism?
<rekado_>looks like “webroot”.
<g_bor>We should be able to just spin up an ad-hoc renewer on demand, do DNS validation, and push the renewed certs onto the servers. This would allow us to decouple this whole renewal stuff
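[Editor's note: a sketch of the decoupled renewal g_bor describes. The hook script, hostnames, and target paths are assumptions, not from the log.]

```shell
# Renew via the DNS-01 challenge from any machine — no web server needed.
# ./add-txt-record would inject the _acme-challenge TXT record into the
# (redundant) DNS servers; certbot's --manual-auth-hook calls it.
certbot certonly --manual \
  --preferred-challenges dns \
  --manual-auth-hook ./add-txt-record \
  -d guix.gnu.org

# Then push the renewed certificate to whichever hosts serve the site:
rsync -a /etc/letsencrypt/live/guix.gnu.org/ web-host:/etc/certs/
```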
<skn38>how to patch gnu/packages/bootstrap.scm **correctly**? (i need to add a custom target to glibc-dynamic-linker).
<skn38>if i copy this file into $GUI_PACKAGE_PATH/ and make changes there, then "guix package --target=<my-target> ..." is executed successfully.
<skn38>but at the same time, the output of each "guix ..." command begins with a warning:
<skn38>> guix build: warning: failed to load '(bootstrap)':
<skn38>> no code for module (bootstrap)
<skn38>> $GUI_PACKAGE_PATH/bootstrap.scm:26:0: warning: module name (gnu packages bootstrap) does not match file name 'bootstrap.scm'
<skn38>maybe i'm doing something wrong?
<rekado_>the hint is correct
<rekado_>move the file to $GUIX_PACKAGE_PATH/gnu/packages/bootstrap.scm
<rekado_>though I would not mess with GUIX_PACKAGE_PATH at all
<rekado_>instead just take the git repository, make your changes there, and then use ./pre-inst-env guix
<rekado_>(assuming you want to contribute your changes eventually)
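[Editor's note: the workflow rekado_ suggests, roughly; exact configure flags and the final command are an illustrative sketch.]

```shell
git clone https://git.savannah.gnu.org/git/guix.git
cd guix
guix shell -D guix               # environment with Guix's build dependencies
./bootstrap && ./configure --localstatedir=/var && make
# edit gnu/packages/bootstrap.scm in place, then run the modified Guix:
./pre-inst-env guix build --target=<my-target> hello
```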
<rekado_>mothacehe: running “du” on /mnt_test/gnu is faster on subsequent runs. From 3m33sec for 127G down to 40secs for 143G.
<rekado_>another silly idea: we could already relocate /var/cache to the SAN. Serve all substitutes from the fast storage.
<mothacehe>rekado_: oh, /mnt_test is the SAN mount point, right?
<mothacehe>why stripe=512 specifically?
<rekado_>I didn’t pick it
<rekado_>mkfs.ext4 does it
<mothacehe>regarding /var/cache on the SAN, that would be great
<rekado_>I choose to just trust it, because I don’t understand how I could do any better.
<mothacehe>that's what the raid wiki is suggesting
<g_bor>so, for certbot dns validation we would need the following: a dns server that can be controlled by the renewer to be able to inject the dns record, and port 80 to be free to bind on the renewer; otherwise we can select the actual renewer from a pool. This makes the spof go away, as the renewer can be basically anywhere and the dns server should be redundant anyway.
<rekado_>okay, I’ll do a first pass copy with “cp -ar”; when that’s done I’d stop nginx and guix-publish; rsync; remount; restart nginx and guix-publish.
<rekado_>the initial copy is going to take hours, so there’s plenty of time to get the website fallback set up ;)
<g_bor>I don't think I will be able to implement this like today, but this looks like a good idea
<mothacehe>hehe, you rock! then we can set up a backup node to serve the /var/cache content
<mothacehe>this way people can have substitutes even if berlin head node is down
<g_bor>It would also not depend on any webserver being up in the first place, so it would make it easier to bootstrap new infra. You could spin up the webserver with the cert already available. I assume right now we do not have this capability, but I might be mistaken here.
<skn38>@rekado_: thank you, moving the file helped!
<civodul>rekado_: so we'll be setting up the web server fallback on bordeaux, right?
<rekado_>civodul: yes, I think that makes the most sense.
<civodul>agreed, cool!
<rekado_>mothacehe: I don’t know exactly how we would serve /var/cache from the SAN from another node.
<mothacehe>i'm taking care of adding the website configuration to bayfront for the record
<mothacehe>i'll send a maintenance patch
<rekado_>I suppose we could do that only if another node had a fiber connection to the SAN.
<rekado_>currently it’s only the head node with a connection
<AIM[m]>Unbound variable bluez???
<rekado_>AIM[m]: hence the need for specification->package. Without it you’ll have to import the module that defines it.
<mothacehe>rekado_: yes connecting the SAN to another node would be required. Is it realistic?
<rekado_>mothacehe: I don’t know.
<rekado_>maybe :)
<rekado_>not in the short term, though
<mothacehe>something to keep in mind :)
<rekado_>I don’t think we have any spare IO card for fiber connections
<rekado_>I was in the data centre yesterday and only saw one card in the whole rack.
<civodul>mothacehe: the web server has stateful bits that are not backed up: /srv/{audio,videos,guix-pdfs}
<civodul>we should take care of those
<rekado_>I’ll ask my colleagues if we have another one (and extra ports on the SAN)
<civodul>rekado_: what was the situation of rsync support that you set up for the .cn support?
<mothacehe>civodul: would rsyncing them to bayfront be enough?
<mothacehe>ok i'll take care of it too
<civodul>i guess we don't want to use rsync over ssh?
<civodul>or do we/
<mothacehe>why not?
<g_bor>it also looks fine to me
<AIM[m]>I think the unbound variable comes from the line
<AIM[m]>blutooth-service #:bluez bluez
<civodul>if the syncing processes are automated, what ssh key/user account would they run under?
<AIM[m]>> <> I think the unbound variable comes from the line
<AIM[m]>> blutooth-service #:bluez bluez
<AIM[m]>Is this wrong?
<rekado_>civodul: two open issues: rsyncd needs to be started (I did this manually last time, but it’s gone since the reboot); we have an rsyncd config in /etc. The second issue is permissions of the files generated by guix publish. Not sure if that’s still a problem.
<AIM[m]>I'm trying to enable bluetooth
<rekado_>AIM[m]: if you want to pass “bluez” you’ll have to import the module that defines it.
<civodul>rekado_: ah good; the second issue is gone
<rekado_>AIM[m]: why pass the #:bluez argument at all?
<rekado_>you probably all know this already, but I only just learned that “progress -m” exists
<AIM[m]>rekado_: Which module has bluez again?
<cbaines>I'm currently committing some of the uncommitted bayfront.scm configuration changes
<rekado_>you have some copy processes running? Some cp here and some rsync there? Run “progress -m” to monitor progress for all of them.
<rekado_>AIM[m]: why pass the argument at all? Is the default not okay?
<AIM[m]>I'll then remove it and see if it works
<cbaines>Currently the configuration includes a nar-herder package definition, so I've sent a patch to add this properly if anyone wants to take a quick look
<civodul>rekado_: oh, nice tip!
<rekado_>interestingly, copying /var/cache seems faster
<rekado_>we’re already at 56G
<mothacehe>maybe the content is less fragmented
<rekado_>compare that to the pitiful 170ish G after 10 hours of copying /gnu/store/trash
<rekado_>I’m trying to “guix deploy” to the honeycombs.
<rekado_>building a kernel.
<rekado_>but I keep getting “guix deploy: error: unexpected EOF reading a line” when offloading to other machines
<rekado_>not to all of them.
<rekado_>but it keeps happening
<civodul>grr that's annoying
<civodul>IWBN to figure out if it's the target machine hanging up too early or what
<civodul>mothacehe: rsync'ing /srv should probably go through the standard rsync protocol, but connecting over the wireguard VPN so we can be sure we're talking to the right host
<civodul>for that we'd need to run WG on bordeaux
<mothacehe>sure I can do that
<cbaines>might be good to try and separate bayfront and bordeaux
<cbaines>bordeaux isn't a machine, it's mostly a domain name used to serve substitutes
<cbaines>bayfront is the machine that currently manages that
<civodul>ah yes, i meant "bayfront"
<civodul>mothacehe: are you preparing the rsyncd config for berlin?
<civodul>if not i can give it a try and post a patch here
<mothacehe>no, i'm working on reconfiguring berlin atm!
<mothacehe>would be great :)
<mothacehe>i removed my zabbix workaround from /root/maintenance now that php is fixed
<civodul>alright, giving rsyncd a spin :-)
<mothacehe>would be great to commit/discard everything unstaged in the /root/maintenance/ git repository while we are at it
<notmaximed>nckx: The problem is that gnu-build-system & friends use (sexp->gexp ...) on the 'phases' argument if it is a pair, so it forgets that any G-exp inside is a G-exp
<notmaximed>It's an optimisation, I presume
<notmaximed>Also, in your example, I think #$(file-append the-package "/bin") would be slightly preferable to (string-append the-package "/bin")
<notmaximed>Though it probably doesn't matter much in practice
<notmaximed>And to make package rewriting work, (this-package-input "coreutils-minimal") and friends are required
<notmaximed>I guess it could be useful to write a linter for the latter
<mothacehe>mmh hpc website is hosted on bayfront
<mothacehe>can someone please initialize my password on bayfront so that i can use sudo?
<cbaines>I can, if no one else beats me to it...
<rekado_>re file systems: XFS may be a better fit for /gnu/store than ext4:
<cbaines>mothacehe, see ~/password
<mothacehe>rekado_: the article i sent yesterday also suggests that btrfs deals with fragmentation much better than ext4 on HDDs.
<mothacehe>thanks cbaines!
<cehteh>eww :D
<cehteh>well the store can be volatile and rebuilt :)
<civodul>mothacehe: would you prefer for to be hosted elsewhere?
<cbaines>Now that there's a nar-herder package, I'm going to pull as root on bayfront, commit the uncommitted changes (without the nar-herder package definition) and then reconfigure
<rekado_>mothacehe: but also: performance according to fio is significantly worse with btrfs
<mothacehe>civodul: wouldn't it make more sense to build/host all websites on both machines?
<cehteh>anyway your deletion troubles gave me some motivation to implement the rmrfd now, should become useful for guix eventually
<mothacehe>but use berlin as the default host
<mothacehe>and bayfront as fallback only
<rekado_>already copied 227G of /var/cache to /mnt_test/var/cache
<rekado_>I bet the fact that /gnu is an absolutely humongous directory is a real problem.
<mothacehe>rekado_: the SAN is using SSD drives right?
<rekado_>yes, some for the top tier
<rekado_>interestingly: du -sh /mnt_test/var takes 0.5 seconds for 227G
<rekado_>but for /mnt_test/gnu it hasn’t even completed yet
<rekado_>and it’s less than 200G, I’m sure.
<rekado_>at least for /gnu/store/trash we could use prefix directories
<cehteh>eventually introduce one directory level more like /gnu/store/aa/aa021gy9bryfwjbgj... could be worthwhile for better performance on many filesystems
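The sharding scheme cehteh sketches can be illustrated with plain shell; the item name below is made up, and this is only a toy demonstration of the layout, not anything the daemon actually does:

```shell
#!/bin/sh
set -eu
store=$(mktemp -d)

# A made-up store item name, standing in for a real /gnu/store entry.
item="aa021gy9bryfwjbgjgh3xyz-hello-2.12"

# Shard on the first two characters of the hash: .../aa/aa021gy9...
prefix=$(printf '%.2s' "$item")
mkdir -p "$store/$prefix"
touch "$store/$prefix/$item"

ls "$store/$prefix"
```

With a flat directory of millions of entries, each lookup scans one huge directory index; sharding bounds each directory to roughly 1/1024 of the items.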
<AIM[m]>Finally installed icecat on guix... I'm crying of joy... Finally I'm not a total noob of guix anymore... I have conquered yet another linux distro...
<mothacehe>cbaines: i have a conflict on bayfront unstaged changes in /root/maintenance/, do you think you could have a look?
<AIM[m]>AIM[m]: With config.scm of course... I consider browser to be part of the system....
<cehteh>actually that could be introduced easily with just some fallback
<cbaines>mothacehe, I'm working on committing the changes, just need guix pull to finish first
<mothacehe>great! :)
<rekado_>cehteh: note that binaries embed store file names.
<cehteh>yeah but the non prefix and prefix directories can coexist and slowly be upgraded/phased out
<rekado_>very slowly, yes
<rekado_>152G copied to /mnt_test/gnu
<rekado_>268G to /mnt_test/var
<rekado_>I think this is pretty good evidence that the storage is fine but the file system isn’t.
<mothacehe>rekado_: that's also what i'm thinking
<mothacehe>at least that's where the regression probably comes from
<cehteh>who cares? .. i mean even if that takes years it won't be so bad; it could actually be introduced soon, the old way deprecated after the next major release, and 1-2 releases later the transition is done
<rekado_>cehteh: complexity
<rekado_>new binaries need to embed the new store locations, and they will take up more space in the binaries
<cehteh>also note i read that deeper prefix directories dont make sense
<rekado_>this means that reference scanning and grafts need to be changed.
<cehteh>3 chars more space?
<rekado_>I encourage you to look at the way reference scanning and grafting works
<rekado_>supporting two different schemes at the same time is not optimal.
<cehteh>haha no thanks :D then
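The reference scanning rekado_ mentions boils down to searching output bytes for embedded store file names; a toy illustration follows (the fake 32-character hash and the use of grep are mine, nothing like the daemon's actual byte-level scanner):

```shell
#!/bin/sh
set -eu
tmp=$(mktemp -d)

# Fake "binary" that embeds a store path, as compiled outputs do.
printf 'prefix /gnu/store/zzzz1111222233334444555566667777-glibc-2.33/lib junk' \
  > "$tmp/binary"

# Conceptually, scanning looks for /gnu/store/<32-char-hash>-... references.
grep -oE '/gnu/store/[0-9a-z]{32}-[^/ ]+' "$tmp/binary"
```

This is why changing the store layout is invasive: every embedded path in every binary encodes the current scheme.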
<mothacehe>so array reboot + upgrade won't help much probably. What we may need is rather to switch to a different fs or re-create the ext4 fs to reduce fragmentation, and/or move to the SAN that is SSD-backed.
<cehteh>i am bad/nil at scheme
<rekado_>mothacehe: yes.
<cehteh>but my directory iteration code in rust got very fast now, with some threads in parallel and message queue
<rekado_>we don’t have enough space to move all of /gnu/store now.
<rekado_>we also don’t have enough time to copy all of /gnu/store
<rekado_>the copy will be outdated by a week or two at this rate
<mothacehe>would reinstalling berlin be an option?
<mothacehe>if we backup only /var/cache
<cehteh>time find bigdir/ took 15secs here, my code can do that in 2.5 sec :)
<rekado_>mothacehe: and effectively discard /gnu/store?
<rekado_>I’d be happier if we baked substitutes for everything in /gnu/store
<rekado_>not sure what our retention policy is atm
<cehteh>is /gnu/store its own filesystem?
<rekado_>I just don’t want to lose everything
<rekado_> /gnu/store and /var/cache sit on the same file system
<mothacehe>the policy is that, on demand, nars are baked and stored in the /var/cache directory. If a nar isn't accessed for 180 days it is removed.
<mothacehe>otherwise the 180 days counter is reset.
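The 180-day policy maps naturally onto access times; a hedged sketch of how such an expiry could be expressed with GNU find/touch (not the actual cache implementation):

```shell
#!/bin/sh
set -eu
cache=$(mktemp -d)

# Simulate one fresh nar and one that was last accessed long ago.
touch "$cache/fresh.nar"
touch -a -d '2019-01-01' "$cache/stale.nar"

# Remove nars not accessed within the last 180 days.
find "$cache" -name '*.nar' -atime +180 -delete

ls "$cache"
```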
<rekado_>can we bake them all and keep them all?
<mothacehe>yes we can
<rekado_>or would that be silly?
<mothacehe>not sure how long it would take
<mothacehe>civodul: would it be silly :) ?
<rekado_>I just feel uncomfortable with erasing /gnu/store :)
<cbaines>that's basically what happens for
<cehteh>rekado_: you can move it aside, or maybe use an overlay (no idea how that performs)
<rekado_>we’re making good progress copying /var/cache
<rekado_>318G now on the SAN
<cehteh>did you figure out why the array is so slow?
<rekado_>cehteh: it’s not the array itself
<rekado_>as you can see, copying from a different directory on the same fs is totally fine
<rekado_>not *great*, but fine.
<rekado_>it’s just /gnu/store that’s glacial.
<rekado_>gotta go soon
<cehteh>well its fat
<rekado_>mothacehe: shall we stop the copy of /gnu/store/trash?
<rekado_>i don’t see much value in copying it.
<mothacehe>yes agreed
<mothacehe>i also stopped the e4defrag process
<rekado_>real 862m43.787s
<rekado_>user 1m44.280s
<rekado_>sys 22m53.058s
<rekado_>for a measly ~160G
<cehteh>e4defrag is extremely slow, when you have performance problems in the first place, then running defrag would need some patience
<cehteh>as in weeks to months :)
<mothacehe>cehteh: yeah i was running it in diagnostic mode
<cehteh>on one tiny server i have a quarterly cronjob to run it with ionice -c3; i hardly notice it running, but when i see it, it stays there for days
<cehteh>its prolly a good idea to run it regularly (if at all!) on a new filesystem so it doesn't become much fragmented in the first place
<cehteh>but on a old huge scrambled filesystem i think its futile
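A quarterly low-priority defrag job like the one cehteh describes might look like this as a crontab entry (the target path is hypothetical, and as noted above, whether to run e4defrag at all is debatable):

```
# Run at 03:00 on the 1st of Jan/Apr/Jul/Oct, at idle I/O priority.
0 3 1 1,4,7,10 * ionice -c3 e4defrag /srv
```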
<mothacehe>rekado_: got to go too, see u later!
<rekado_>time rm -rf /mnt_test/gnu
<rekado_>real 3m7.510s
<rekado_>user 0m18.372s
<rekado_>sys 2m40.220s
<efraim>I think I have a fix for binutils-gold on non-x86_64
<phf-1>civodul: Hello! Here is a way to reproduce the problem that we discussed yesterday:
<phf-1>If by any chance you have a few minutes
<CCC>Hello. After a recent guix-pull, guix system reconfigure is failing to install the bootloader with this error: efibootmgr -v also fails with this error: Anyone seen this before?
<CCC>I saw some threads saying this could be a bug in efivar, but it was a couple years old and my system was working before the pull, so it didn't seem relevant
<vivien>CCC, that’s a common error
<vivien>I had it a few weeks ago
<vivien>There are temporary files that you need to remove but I don’t exactly remember where
<cbaines>I'm still working on reconfiguring bayfront, but I'm going to get lunch in a minute
<cbaines>I've translated the static networking configuration to the new style, but if someone could double check that, that would be good (it's changed on the machine)
<cbaines>I'm just doing guix system build at the moment, and it's busy building linux, so it's going to be a little while anyway
<vivien>I seem to remember that things needed to happen in /sys/firmware/efi/
<vivien>CCC, it seems to be that you should remove /sys/firmware/efi/efivars/dump-*
<CCC>Yup that did, thanks vivien. I'll keep that fix in my notes
<jlicht>hey guix!
<jlicht>jgart: you showed some kind of CLI csv-viewer; what was it called again?
<civodul>cbaines: the static-networking config on the uncommitted bayfront.scm LGTM; i'd suggest IPv6 on ens10 while you're at it
<efraim>I might need to drop python-pingouin to an earlier version so it matches with our packaged scipy and other libraries
<efraim>nope, fixed the wrong bug for binutils-gold, this one is qemu related. still need to work on the correct bug
<civodul>rekado_: those "rm -rf" figures are for what size?
<civodul>phf-1: "guix package -i" tries to get substitutes i guess, and it's failing gracelessly here
<civodul>the workaround is to pass --no-substitutes, though it shouldn't be necessary
<jlicht>I recently saw some hype around the 'mold' linker; has anyone worked with it, perhaps in guix as well?
<civodul>jlicht: i was wondering if it was going to be Rust or copyleft, and it's neither of those!
<civodul>didn't know something could be hype without that
<cbaines>I'm seeing: kernel module not found "simplefb" when reconfiguring bayfront, when it's trying to build the linux-modules.drv
<civodul>cbaines: vivien reported that recently
<civodul>haven't been able to look into it yet
<civodul>is that for the LTS kernel?
<cbaines>civodul, it's for linux-libre-5.10
<cbaines>so yes
<rovanion>clojure-tools-cli-0.4.2 fails to build for me, can anyone reproduce?
<phf-1>civodul: Thanks for the reply. I tried and it failed with this error:
<vivien>cbaines, civodul, lfam provided a branch that fixes it
<phf-1>civodul: failed to download "/gnu/store/f99fblkzb6ip268sg096shhs7wzjyp55-Python-3.5.9.tar.xz" from ""
<vivien>Sorry bad copy/paste
<notmaximed>Seems related.
<cbaines>civodul, thanks for checking the static networking config, I've added the IPv6 bits now
<apteryx>rekado_: another thing that could be used if your storage array is repartitioned as btrfs are subvolumes; they are very cheap to create and destroy; so the use would be: 'btrfs subvolume create /var/guix/.trash && trash things && btrfs subvolume delete /var/guix/.trash'
<apteryx>I don't recall exactly where this daemon trashDir is, but you get the idea
<cehteh>apteryx: you cant rename between subvolumes
<cehteh>moving things into trash relies on that
<cehteh>when you have to copy data then it becomes very io costly
<tissevert>hi guix
<cehteh>maybe rather rename in place, e.g. mv /gnu/store/f99fblkzb... /gnu/store/.trashed.f99fblkzb...
<apteryx>ah... so supposing /gnu/store is on default subvol id=5 (root), and you have another temporary subvolume, you'd trigger this cost
<cehteh>but that has a lot other issues
<notmaximed>Possibly relevant: <>
<notmaximed>Seems like sometimes, moving between subvolumes is cheap, but sometimes it is expensive.
<cehteh>i dont know btrfs well enough but i think renaming/moving between subvolumes is a copy
<notmaximed>cehteh: Why would it be a copy?
<notmaximed>Subvolumes are on the same file system and disk storage, no?
<cehteh>when you say so
<cehteh>i dont know
<notmaximed>Though according to that reddit thread, sometimes Linux doesn't recognise two directories are on the same file system ...
<apteryx>if both /gnu/store and the trashDir are on the same Btrfs file system and not mounted (you can mount subvolume at random places), it should be cheap
<cehteh>i dont know, anyway relying on btrfs is something *I* wont want :D
<apteryx>according to the above link
<cehteh>didnt read that all .. and i think deletion should just be done asynchronously in the background, that should solve the issues
<cehteh>(for all filesystem)
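The "rename first, delete in the background" idea can be sketched with plain POSIX tools (the item name is made up); a rename within one file system is atomic and cheap, so the item vanishes from view immediately while the slow recursive unlink proceeds asynchronously:

```shell
#!/bin/sh
set -eu
store=$(mktemp -d)
mkdir -p "$store/item-to-delete"
touch "$store/item-to-delete/file"

# Atomic rename within the same file system: instantly hides the item.
mv "$store/item-to-delete" "$store/.trashed.item-to-delete"

# The expensive recursive unlink happens in the background.
rm -rf "$store/.trashed.item-to-delete" &
wait
```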
<notmaximed>cehteh: (about BTRFS) Why not?
<cehteh>i havent tested it recently, but it lacks the features i would need and when i stress tested it i always managed to trash it
<cehteh>and development is pretty much stalled, i only see bugfixes
<cehteh>well lost track of it, havent looked recently
<notmaximed>Trash = low performance, or file system corruption?
<cehteh>total loss
<notmaximed>total loss of files or total loss of performance?
<cehteh>filesystem :D
<notmaximed>That has never happened to me but ok
<cehteh>under extreme loads, on normal loads its okish, but with some stress tests running long time i was always able to damage it beyond repair
<cehteh>maybe they fixed that meanwhile, but the lack of higher raid levels and no encryption makes it useless for me
<apteryx>when btrfs has failed for me, it was the disk
<cehteh>you use it on a single disk?
<apteryx>3 disks in a RAID1
<cehteh>then i would worry more about that as well :)
<cehteh>ah ok
<apteryx>with zstd compression, on top of subvolumes
<apteryx>I don't care about it (hard resets often), and it keeps going
<notmaximed>I assume the BTRFS developers would be interested in your ‘damage beyond repair’ test case, assuming it is a bug in the file system and not disk wear
<notmaximed>It's possible to use BTRFS on top of LUKS for encryption, though I don't know the performance implications.
<cehteh>thats quite some time ago that i tested it last, i talked with the devs back then on irc, but now i am not interested on it anymore
<apteryx>OK, but don't spread outdated FUD ;-)
<cehteh>will test it again when encryption and raid5/6/z whatever is stable
<cehteh>i clearly stated i didnt test recently
<cehteh>i will test again when it has the featureset i need
<cehteh>well .. i crashed zfs too :) .. back then found some bugs that got confirmed, i am good at destroying filesystems
<phf-1>apteryx: Hello! Here is the neat reproducer you asked for yesterday:
<cehteh>also some time ago .. when encryption on zfs where introduced
<cbaines>bayfront is now reconfigured, I've just restarted the guix-build-coordinator services though
<phf-1>apteryx: is it enough or should I change something? Adding `--no-substitutes' does not change anything about the problem.
<cbaines>I'm going to look at changing the knot configuration to add AAAA records, do I just need to restart knot on bayfront to apply the change?
<mroh>rovanion: I can reproduce it.
<efraim>'testsuite' or 'test suite' or 'test-suite'?
<phf-1>Well, I guess I will stop spamming the channel since a bug report has been filed. . Thanks for the replies! I will use `guix pack -RR -S /bin=bin ...' in the meantime. Thanks!
<civodul>hey! i'm soliciting fast-track review for the rsync service: :-)
<civodul>phf-1: thanks for reporting it!
<yewscion>Good Morning, Guix!
<rovanion>I really wish recompiling guix wasn't so slow. Each time I want to fix or add something I need to spend half an hour or something waiting for make -j :/
<tissevert>rovanion: have you enabled substitutes ?
<rovanion>tissevert: For my normal system, yes. Or are you saying it can do something for me when hacking on the guix source code itself?
<notmaximed>rovanion: It depends on how much and what you modify, but normally it should be less than a minute.
<cbaines>civodul, I've had a quick look through the rsync changes, and it looks good to me
<notmaximed>Assuming you don't run "make clean" and aren't rebasing or running "checkout this-branch" "checkout that-branch".
<notmaximed>(Switching between branches often leads to rebuilding many .go)
<rovanion>I'm on master but did have to run make clean because of an ABI mismatch, make told me.
<rovanion>(a clean master).
<notmaximed>rovanion: ok.
<notmaximed>These ABI mismatch things only happen irregularly, when the definition of the package record or something like that is changed
<civodul>cbaines: thanks!
<civodul>mothacehe, rekado_: i'll be going with these rsync-service-type changes if that's fine with you:
<civodul>then i'll set it up on berlin
<civodul>cbaines: for the Disarchive database, should i rsync it from berlin to bordeaux, or can we build it on bordeaux?
<civodul>with something like "guix build -m etc/disarchive-manifest.scm"
<rekado_>civodul: the rm times are for the tiny copy. It took more than 860 mins to copy 170G from /gnu/store/trash (storage array) to /mnt_test/gnu/store/trash (SAN); and only over 3 mins to delete it all again.
*rekado_ is back
<yewscion>Hello everyone, I've just cloned guix and I'm trying to set up my first development environment to contribute to the project. The manual says to run `make check`, and the repo I've checked out has two tests fail when that's run. Is this expected, or have I set something up wrong? (guix-pack-relocatable and guix-git-authenticate are the tests that
<civodul>yewscion: hi! these may be benign failures, but it'd be ideal if you could send the details to
<mothacehe>civodul: i had a quick look, nice upgrade :)
<notmaximed>Tests shouldn't fail, but if they fail, that's probably a bug in guix (or the test suite), not a problem with your setup
<mothacehe>cbaines: did you manage to reconfigure bayfront?
<yewscion>civodul and notmaximed: Copy that, sending bug report now.
<rekado_>civodul: looks fine
<civodul>yes, really
<rekado_>read-only? #f was the default, eh?
<civodul>the system test specifically checked that
<civodul>i didn't even know this was possible
<cbaines>civodul, I don't know enough about disarchive really, rsyncing it across sounds simple enough.
<civodul>what would it take to build it locally?
<civodul>at worst an mcron job that does "guix time-machine -- build -m ..." would do
<civodul>but anyway, we'll need to set up rsyncing at least for the web site static bits
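The mcron approach civodul mentions might look roughly like this as an operating-system service fragment (the service name, schedule, and use of simple-service are illustrative assumptions, not the actual berlin configuration):

```scheme
;; Hypothetical sketch: nightly Disarchive database build via mcron.
;; Assumes (gnu services mcron) and G-expressions are in scope.
(simple-service 'disarchive-database mcron-service-type
  (list #~(job "0 3 * * *"   ;Vixie-style schedule: daily at 03:00
               "guix time-machine -- build -m etc/disarchive-manifest.scm")))
```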
<AIM[m]>The Chinese brand TP-Link Bluetooth USB adapter seems to work fine with libre kernel
<AIM[m]>The only issue I have now is the sound driver
<cbaines>civodul, I'm not quite sure what building it locally would involve. If it takes some work that could happen across multiple machines, it might be worth building it through the coordinator. It sort of looks like it'll fail though if any single origin can't be "built" which I find a bit confusing?
<AIM[m]>TP stands for Tapo I think (Some chinese word?)
<AIM[m]>I guess their WiFi adapter should also work
<AIM[m]>But I don't have it right now...
<rekado_>wikipedia says: The company name was based on the concept of "twisted pair link" invented by Alexander Graham Bell, a kind of cabling that reduces electromagnetic interference, hence the "TP" in the company name.
<civodul>cbaines: exactly, it succeeds iff all the tar.gz origins succeed
<rekado_>“ta po” could be translated as “it’s broken” :)
<cbaines>civodul, I would guess that you'd want the database with most of the data if a few origins fail though?
<civodul>(see <>)
<rekado_> /var/cache on the SAN is now at 928G
<civodul>cbaines: no, we want zero failures on origins; that's also the motivation behind the "source" jobset
<AIM[m]>rekado_: Ohhh
<civodul>but again, that's zero failures for origins at a given point in time
<AIM[m]>rekado_: Damn....
<civodul>it doesn't mean we can still build origins from past revisions
<AIM[m]>Btw, can you restrict a channel to be used only for the packages I specify? Like, say, if I want only a custom emacs and its dependencies from the channel, can I do that?
<cbaines>civodul, I guess this sort of relates to zimon talking about the etc/source-manifest.scm
<notmaximed>I see a few ‘ERROR: In procedure lstat’ in the log of the disarchive build
<AIM[m]>Can I package lock a channel
<AIM[m]>AIM[m]: Guix install will scan all channels right?
<notmaximed>Seems like a file name encoding issue, maybe?
<civodul>cbaines: right
<tissevert>rovanion: oh, sorry I misread your remark and thought you were compiling the system, not guix itself; yeah, an ABI mismatch makes the following recompile longer
<notmaximed>AIM[m]: You could avoid specification->package
<cybersyn>hiya Guix, here to pitch in for the hackathon if anyone has some tasks I can work on
<notmaximed>E.g. (@ (your channel module) a-custom-emacs)
<tissevert>half an hour sounds long though, but I suppose it depends on the resources you have
<notmaximed>civodul,cbaines etc.: E.g.: In procedure lstat: No such file or directory: "/tmp/guix-build-Django-4.0.tar.gz.dis.drv-0/disarchive-directory.qmANcD/Django-4.0/tests/staticfiles_tests/apps/test/static/test/???.txt"
<AIM[m]>So, no way to lock the packages in channel file? I mean specify only to use that particular packages....
<notmaximed>AIM[m]: What are you trying to do, exactly?
<notmaximed>If you are using manifests, you could do (@ (the channel module) custom-emacs) from the manifest, to choose the channel to use modules from.
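notmaximed's suggestion, as a minimal manifest sketch (the module name (my-channel packages editors) and the variable custom-emacs are made up):

```scheme
;; manifest.scm -- reference the package variable directly with (@ module var),
;; so only your channel's definition is used, never a same-named package
;; found via specification->package's search across all channels.
(use-modules (guix profiles))

(packages->manifest
 (list (@ (my-channel packages editors) custom-emacs)))
```

Then install with `guix package -m manifest.scm`.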
<eyJhb>Using the guix system install image, I am trying to use the graphical cli installer, and each time I get to the final step, after selecting my disk and having chosen to use a single partition, I am taken back to the main menu, where I can choose the language during the installation process (I have done this 4 times now). Not getting any information when it happens.
<cbaines>notmaximed, that looks suspiciously like a locale related problem
<AIM[m]>Like I'm thinking like adding channels where only the packages that I want will be listed in guix install and specification->package....
<mothacehe>eyJhb: are you using the 1.3.0 installer?
<eyJhb>mothacehe: Indeed I am
<notmaximed>AIM[m]:That's not implemented, I think.
<apteryx>phf-1: thanks for the reproducer and bug report!
<mothacehe>eyJhb: that release has sadly some installer bugs such as the one you are describing
<mothacehe>you may want to use the latest installer image available here:
<eyJhb>Damn, I was so impressed by the installation process so far :D
<eyJhb>I will try to dd the new one to my USB, and see if it works
<civodul>notmaximed: what's this error message?
<mothacehe>eyJhb: good luck with that, hope it will work better :)
<notmaximed>civodul: It's from
<mothacehe>otherwise don't hesitate to report it here
<notmaximed>nevermind, wrong URL
<cbaines>I'm going to clear up from lunch, but I've finished messing with the bayfront config. Assuming I've done the DNS stuff right, should work with IPv6 now, and the other hosted sites should work too once there are DNS records put in place
<rekado_>I used the latest installer yesterday and wasn’t able to install Guix System due to a test suite failure in some package.
<rekado_>(it may have been the “guix” package)
<rekado_>eventually I installed with 1.3.0
<mothacehe>yes we need to upgrade the guix package
<mothacehe>i think nckx upgraded it, fixed the test issue
<mothacehe>but did not upgrade again
<mothacehe>unless i'm mistaken
<nckx>This is poss.
<sneek>Welcome back nckx, you have 1 message!
<sneek>nckx, raghavgururajan says: Did you mean device upgrade to something else or libreboot upgrade?
<notmaximed>civodul: Search for ‘ERROR: In procedure lstat’ in
<nckx>raghavgururajan: Neither, I meant guix upgrade (pull+reconfigure).
<nckx>I briefly but relatively deeply dove into gexps this morning and I think I understand the problems I was having now.
<nckx>There are other gotchas:
<mothacehe>cbaines: thanks a lot! did you start a reconfiguration or can i do it?
<nckx>Oh, notmaximed basically explained the issue in my absence, typical. Thanks Maxime 😉
*nckx → work.
<AIM[m]><notmaximed> "AIM:That's not implemented, I..." <- notmaximed: ah.. cool... It'll be a cool feature tho....
*mothacehe has reconfigured bayfront
<mothacehe>it is now building websites
<mothacehe>and connecting to berlin using wireguard
<mothacehe>i'm now reconfiguring berlin
<eyJhb>mothacehe: It came much further
<mothacehe>eyJhb: but ... ?
<mothacehe>civodul: bayfront is accessible as from berlin
<eyJhb>Oh, sorry I meant. Thanks it came further, it's currently downloading. So I think everything works :D
<mothacehe>eyJhb: oh that's terrific :)
<civodul>mothacehe: noted!
<notmaximed>AIM[m]: You can work-around things by using package manifests instead of "guix install" & using (@ ... ...)  instead of specification->package in the manifest
<mothacehe>the issue with bayfront is that it doesn't use berlin substitutes
<mothacehe>so building websites rebuilds the world
<civodul>not sharing substitutes is a good thing if we are to use it to test for reproducibility
<roptat>hi guix!
<civodul>but yeah, for things like the web site, it doesn't help
<civodul>hey roptat!
<mothacehe>not sure what to do here
<mothacehe>hey roptat!
<civodul>i guess it's taking substitutes from bordeaux though?
<civodul>so eventually that should be fine
<mothacehe>yes it does
<cbaines>when I reconfigured, I added in to the substitute URLs
<mothacehe>would it be acceptable to add it to bayfront.scm at least temporarily
<mothacehe>to setup the website mirror
<mothacehe>and test it
<cbaines>I think that's fine, it won't really affect things being built for
<cbaines>The signing key needs adding as well
<civodul>the problem of temporarily getting substitutes from the other build farms is that it breaks isolation
<civodul>but i guess it's too late already
<cbaines>civodul, the only place where they would leak in is when the agent on bayfront builds things, and it explicitly tries to just use substitutes from, so it's only in cases where things are already in the store that isolation is broken
<civodul>yes, but that's the case now?
<civodul>or am i overlooking something?
<cbaines>I'm more getting at that it's not that big of a thing in my mind
<cbaines>I think the rigorous approach here would be to check the nar hashes of store items before builds start, to check they're as you expect
<cbaines>that way the contents of the inputs used in a build would never be in question
<mothacehe>ok so websites are now built in the /srv directory of bayfront. Any volunteer to update bayfront nginx configuration, that's not my cup of tea.
<jackhill>mroh: maybe adding to the leaf applications is the right thing to do. I'm still trying to think it through. It was just feeling to me like the wrong place; if webkitgtk expects the plugins to work properly, it should bring them in without leaf apps needing to worry about it. Also webkitgtk currently has gst-plugins-base as an input, and if we know we're going to want gst-plugins-bad in the end, we can go
<jackhill>ahead and enable GSTREAMER_GL (which upstream recommends that we do)
<civodul>rekado_, mothacehe: i'll reconfigure berlin with the rsync service config i've just pushed to maintenance.git
<nckx>Hi! I noticed and was just wondering what this means.
<apteryx>jackhill: upstream recommends that?
<jackhill>apteryx: yep
<apteryx>the problem was that gst-plugins-bad are, well, bad (lower code quality, potential issues (more CVE-prone))
<jackhill>I asked on IRC yesterday
<apteryx>Is this the reason the browsers using webkitgtk were crashing, or that's an entirely seperate issue?
<jackhill>I wonder if they're all consistently bad though, and if webkit uses the less troublesome ones. Also, do the plugins run in the bubblewrap container?
<apteryx>no idea
<apteryx>but, funny thing, gtk4 is already using gst-plugins-bad, IIRC
*apteryx checks
<civodul>nckx: that's what we were just discussing with mothacehe and cbaines; it was done out of convenience but i think we should revert it soon
<civodul>mothacehe: on berlin, there's a "wip-website" branch; but can i reconfigure from master?
<mothacehe>civodul: yes i removed it upstream
<jackhill>apteryx: the crashing goes away with just adding gst-plugins-bad to the environment, and not adding GSTREAMER_GL, but if we need to pull in -bad anyway…
<mothacehe>civodul: you are also syncing substitutes to bayfront?
<mothacehe>cbaines: will bayfront be able to serve them?
<nckx>I don't have scrollback for some reason 🤷 No time to investigate.
<apteryx>yeah, it seems ugly that the browser *crash* instead of telling you what's missing
<civodul>mothacehe: right now i'm just running rsync; then we'll see :-)
<mothacehe>hehe nice :)
<nckx>civodul: Without any killer new arguments, I agree.
<nckx>Otherwise we might as well pool the resources for real.
<jackhill>apteryx: indeed, it looks like gtk4 does depend on -bad. Interesting. I guess upstream views these things differently than we do.
<apteryx>I suppose "they don't care" ^^
<jackhill>apteryx: there's a bug for that:
<jackhill>and it's not the whole browser crash, just the tab, so the UI still works to close it, or change the URL, and it doesn't affect the rest of the application so that's nice at least
<apteryx>rekado_: I successfully built /gnu/store/j416dfcl1qkcd9jhzcgghk0d3ppmdad1-python-matplotlib-3.5.1; anything to check to avoid biopocalypse?
<jackhill>apteryx: interestingly, I couldn't easily find on the gstreamer website guidence on when to use which batch of plugins. I didn't check readme's in the source though.
<jackhill>apteryx: I did ask if there is some official statement from webkitgtk that we could reference, but there isn't. They just say that they expect to have a working gstreamer installation. Sounds like the divisions (except for maybe -ugly) are more about managing differences upstream.
<apteryx>I think gtk shouldn't depend on -ugly ideally, so I wouldn't want to use this as a precedent to have webkitgtk depend on it
<civodul>also, the whole point of plugins is that they can be dynamically loaded
<civodul>so perhaps we can avoid the hard dependency?
<jackhill>apteryx: sorry, I'm not quoting, but I'm not sure what the logging policy is on. I did talk with Michael Catanzaro, a lead developer of Epiphany, whom I consider to be authoritative.
<apteryx>it can already be avoided (at the cost of a confusing ux experience -- tab crash without informative message)
<jackhill>apteryx: I don't think either of them depend on -ugly, just bad
<apteryx>I meant bad, sorry
<jackhill>yeah, just want to make sure we're on the same page
<apteryx>I wonder how it's handled on fedora (do bad plugins contain potentially patent-encumbered codecs? if so they wouldn't be installed out of the box there)
<cybersyn>jackhill: I would just note that the introductory material for gst /does/ advise installing -bad. I think -bad is a bit like racket's unsafe/ffi
<jackhill>apteryx: so, on the flip side: if webkitgtk or browsers didn't wrap -bad then folks would have to install -bad into their profile, potentially polluting the rest of their environment when they only needed bad for the browser.
<jackhill>cybersyn: oh, I missed that! Which introductory material is this?
<jackhill>I wonder if we just need finer-grained packages. Like a package for each plugin, and then we could really only depend on the ones that were needed.
<cybersyn>with the difference being that there -bad can and should eventually become -good, because you can't have complete memory safety with realtime media (or can you?), whereas with racket it's a strict matter
<apteryx>jackhill: that's not really guix's problem; it should be addressed upstream
<jackhill>apteryx: adding the message or remoing the dependency?
<apteryx>the message clearly, but I was answering about the granularity of their plugins system
<apteryx>(if there's something to be improved there)
<cybersyn>jack it's in the "getting gstreamer" (or maybe "installing", I can't remember which they mention) in the introductory tutorials. if you look at the debian requirements, it instructs you to install all of the -good -bad -ugly etc plugins
<jackhill>ah, yes, of course. But we might be the only ones who don't want to depend on all of -bad
<eyJhb>mothacehe: the "building" part however takes forever. Doesn't seem like everything is in the cache. Has been going since I wrote last
<jackhill>cybersyn: that makes it sound like we should allow things to depend on -bad when needed then, or am I confused?
<mothacehe>eyJhb: we are experiencing some CI issues, so some package substitutes might not be available
<mothacehe>and as you are using the "latest" installer you need them :)
<apteryx>jackhill: do I get that fedora isn't even making webkitgtk available anymore?
<civodul>rsync daemon on berlin is working and i've been able to manually copy the Disarchive database to bayfront
<mothacehe>civodul: good job!
<civodul>now to see if i can write mcron jobs for bayfront
<jackhill>apteryx: I guess so, I'm not really that familiar with fedora. Perhaps just no one was taking care of their package. IIRC fedora's really invested in flatpak, so all their webkit needs probably come via that channel.
<apteryx>jackhill: OK, no it's now called webkit2gtk3
<jackhill>ah, clever :)
<apteryx>they have: Recommends: gstreamer1-plugins-bad-freeworld
<eyJhb>Let's see if my computer has the power to build it :p It has been at the `check` phase for _something_ for some time
<apteryx>not sure what '-freeworld' means
<apteryx>since we do not have a recommend system, perhaps the closest we could do is add some information to the description of our 'webkitgtk' package.
<apteryx>that said having USE_GSTREAMER_GL=OFF doesn't seem ideal...
<jackhill>and what would the information state, that browsers be wrapped to include the necessary plugins? Maybe a package variant for enabling GSTREAMER_GL?
<eyJhb>Argh. it just failed and then I am back at the beginning of the installer again.
<apteryx>perhaps it's workable (the comment hints at "more investigation needed")
<eyJhb>mothacehe: is there a version of the installer, that uses the cache from before the CI issues?
<mothacehe>eyJhb: was generated 5 days ago and should have more substitutes available I guess
*apteryx tries a webkitgtk build with "-DUSE_GSTREAMER_GL=OFF" commented out
<mothacehe>eyJhb: but i think you should wait a bit longer before jumping to an older installer
<jackhill>apteryx: I'm inclined to trust upstream in this case. They do seem to be cognizant of and sympathetic to the security nightmare that are browsers
<eyJhb>mothacehe: I waited until it failed installation by itself :p
<mothacehe>what's the error message?
<eyJhb>I didn't touch anything. So I assume that either 1. Something timed out (unlikely) 2. It failed to build something
<eyJhb>It just said it failed to install
<raghavgururajan>nckx: I see.
<mothacehe>eyJhb: with no context at all?
<AIM[m]>How do I use "append" for the channels instead of "cons"?
<eyJhb>Not as far as I can tell. Is there a way I can see it now that I am back at the start?
<vldn[m]>mhh i have 25GB free space on my ext4 partition with the /gnu/store
<vldn[m]>but df -i shows that 100% of all inodes are used..
<vldn[m]>and reconfigure complains about not enough space on hdd
<vldn[m]>someone had this too?
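A quick diagnostic sketch for the situation vldn[m] describes (standard coreutils only; the `guix gc` hint is the usual remedy, since deleting dead store items releases inodes):

```shell
# Show inode usage for the filesystem holding /gnu/store (adjust the path).
# An IUse% of 100% with free blocks remaining means the disk ran out of
# inodes, not bytes -- which matches "25GB free but reconfigure fails".
df -i / | awk 'NR==2 {print $5, "of inodes in use"}'

# Freeing dead store items usually releases a large number of inodes:
#   guix gc
```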
<notmaximed>AIM[m]: The Guile manual documents how to use 'append' and 'cons'
<AIM[m]>cons* star can handle only two elements/variables right?
<mothacehe>eyJhb: maybe jumping to a TTY and having a look to the /var/log/messages file
<notmaximed>Presumably there isn't anything channel-specific there
<notmaximed>AIM[m]: It's cons (without start) that only accepts one element and one list
<notmaximed>cons* accepts multiple elements and one list
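The distinction can be sketched in a few lines of Guile (illustrative REPL results):

```scheme
;; cons takes exactly one element and one list:
(cons 'a '(b c))         ; => (a b c)
;; cons* takes any number of elements followed by a final list:
(cons* 'a 'b '(c d))     ; => (a b c d)
;; append concatenates whole lists instead:
(append '(a b) '(c d))   ; => (a b c d)
```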
<apteryx>jackhill: they simply *recommend* to install plugins, though, not to force it on users, right?
<rekado_>strace repeatedly failed to build for aarch64, so I can’t deploy to the aarch64 nodes.
<tissevert>what service on the installer is responsible for respawning the installer ?
<rekado_>trying again
<tissevert>it won't start but seems to keep trying to
<jackhill>apteryx: you mean the build doesn't fail without them? Yes that's true, and I can imagine using webkitgtk with something that's not a browser where the plugins wouldn't be needed
<rekado_>apteryx: building pigx-rnaseq is a good test :)
<apteryx>rekado_: OK! thank you
<eyJhb>mothacehe: `crashing due to uncaught exception: system-error ("mount" "mount "S...` something something cryptroot on /mnt no such file or directory
<AIM[m]>How do I add multiple channels to channels.scm?
<AIM[m]>I'm very confused
<eyJhb>So, guessing the FDE didn't work as expected.
<eyJhb>Don't think running it again would help much :/
<apteryx>jackhill: "guix size webkitgtk gst-plugins-good gst-plugins-bad" is 1 GiB bigger than webkitgtk alone
<notmaximed>AIM[m]: did you read ‘6.1 Specifying Additional Channels?
<mothacehe>eyJhb: sorry it didn't work. We recently merged a big update that possibly caused some regressions. Any chance you could open a bug report describing the situation in an email sent to You could take a picture of the /var/log/messages file with your smartphone too.
<apteryx>jackhill: how about this:
<notmaximed>If so, just add the channel before the (channel (name ...) ....), and replace cons by cons*
<jackhill>apteryx: yikes, that's a lot
<notmaximed>Using `append` is also possible, it's just a matter of what you find most convenient
<AIM[m]>notmaximed: that only shows adding a single additional channel, right?
<notmaximed>AIM[m]: Yes, but it can be extended to multiple channels
<AIM[m]>At least the example has only one channel
<eyJhb>mothacehe: That's quite alright. I'll see what I can do. Would the previous link you sent me _maybe_ work?
<eyJhb>The 5 day old one
<jackhill>apteryx: that sounds reasonable to me.
<AIM[m]>How would it be?... (full message at
<cybersyn>jackhill: I'm not sure of other distros' policies, and I could be over-generalizing with the comparison of -bad to unsafe/ffi. I've only been getting into gstreamer over the last month, but it seems that -bad is widely used, while not acceptable for mission-critical applications. gstreamer is like the linux of media programming, it's used in everything from telescopes in space to medical equipment, so some applications have a really high
<cybersyn>bar of safety they must meet.
<mothacehe>eyJhb: thanks a lot. That's not likely :(
<jackhill>apteryx: I'm also waiting on a build with GSTREAMER_GL=ON
<notmaximed>AIM[m]: No, cons only accepts one element and one list. Use cons* instead
<AIM[m]>It accept more elements?
<apteryx>jackhill: I'm curious though, why did you notice the issue only after the big merge; did you reinstall your system and forget to install the plugins?
<jackhill>cybersyn: ah, cool. Well you know far more about gstreamer than I do :)
<apteryx>it probably existed the same before, no?
<AIM[m]>So like:... (full message at
<notmaximed>yes, that should work
<AIM[m]>notmaximed: Thank you so so much....
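For the archive, a minimal channels.scm along the lines discussed above (the channel names and URLs are placeholders, not real channels):

```scheme
;; ~/.config/guix/channels.scm -- prepend two extra channels to the defaults.
(cons* (channel
        (name 'example-channel-one)
        (url "https://example.org/one.git"))
       (channel
        (name 'example-channel-two)
        (url "https://example.org/two.git"))
       %default-channels)
```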
<jackhill>apteryx: yep, it looks like I left out the plugins while cleaning up my manifests while testing c-u-f
<jackhill>and it was easier to check how it was on c-u-f v. master on different machines
<cybersyn>i should clarify: i've /used/ gstreamer for years as I program media installations for a living, but always interfaced through a higher level library. but only recently have I started digging into gstreamer itself
<jackhill>I think I'm still inclined to advocate for having epiphany, vimb, etc. wrapped to be able to find the plugins without folks needing to install them into profiles
<jackhill>apteryx, cybersyn: I got permission to quote from the IRC conversation, so I'm going to post that to the ticket for some additional context.
<jackhill>I guess I'll also re-title the ticket since it's not a c-u-f problem
<apteryx>good, I'm testing enabling GSTREAMER_GL
<apteryx>seems to build so far
<apteryx>I didn't add any input
<jackhill>apteryx: "didn't add any input" oh, interesting. I wonder how to test if it actually works
<jackhill>in other news, mumi has been slow for me today (and maybe last night). Is this related to the infrastructure work?
<apteryx>jackhill: interesting; this is how fedora 'cleans' their gst-plugins-bad package:
<cybersyn>jackhill: just know that I'm in no way an authority; I'm a newcomer to gstreamer with a vague understanding of these things
<jackhill>cybersyn: fair enough. It still helps to have a conversation partner to talk things through!
<cybersyn>no doubt
<apteryx>jackhill: also, 'bad' is worse than 'ugly', according to
<GNUtoo>Hi, when you run Guix system, how updating guix works under the hood?
<apteryx>the same as on other systems :-) per user profile
<apteryx>per user *guix* profile
<GNUtoo>(1) If I do guix pull and guix describe, it updates guix, if I do sudo guix describe, it has the same revision
<GNUtoo>(2) if I do sudo su and guix describe it has a different revision
<lfam>Guix is installed per-user GNUtoo
<lfam>With each of those commands, you are acting as a different user
<lfam>So, make sure you are working as the user you intend to be working as
<GNUtoo>With that, does guix package -i guix have an influence somehow?
<lfam>Don't do that, but it shouldn't have an impact in that case. That command is also per-user
<GNUtoo>When doing guix system reconfigure or sudo guix system reconfigure or sudo su and guix system reconfigure, it uses the user guix, right?
<lfam>Specifically, I recommend reading the manuals of `sudo` and `su`, learning about how they work related to "login shells"
<lfam>Guix works per-user by using login shells
<jackhill>apteryx: ugh :/ There's also gst-libav for providing the ffmpeg code.
<GNUtoo>Yes, I see that when doing guix package -i or guix describe
<GNUtoo>but I was wondering if there was something special with guix system reconfigure
<jackhill>but it seems like some web content really needs one particular plugin.
<lfam>No, it should be the same way
<lfam>That was for GNUtoo
<GNUtoo>So it uses the login shell / user's guix?
<jackhill>I think it would be nice if webkitgtk in addition to a message was able to load the page without the offending content when a plugin is missing
<GNUtoo>If I do sudo su + guix system reconfigure or sudo guix system reconfigure, would I get the same thing?
<lfam>GNUtoo: What you see from `sudo guix describe` will be used for `sudo guix system`. And similar for `sudo --login` and `sudo --preserve-env`
<GNUtoo>ok, thanks a lot
<lfam>I recommend never using `su` on Guix
<lfam>`su` will put you into weird environments that don't work as expected
<lfam>Check the su manual for ENV_PATH
<lfam>That's not something that can work in Guix
<lfam>Oh, I see we fixed it in Guix
<lfam>In gnu/system.scm
<lfam>Anyways, what you see from `$(elevate) guix describe` is what you will use for `$(elevate) $(guix-command)`
<lfam>Even though we made `su` work on Guix, it still lacks the user's PATH. It will only give you a system PATH that doesn't work as most users will expect
<lfam>I always recommend using sudo
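A quick illustrative session for checking which guix each invocation resolves to (paths are examples; exact behavior depends on your sudo configuration):

```
$ type guix                    # should show ~/.config/guix/current/bin/guix first
$ guix describe                # the user's pulled guix
$ sudo guix describe           # what `sudo guix system reconfigure` will use
$ sudo --login guix describe   # root's own environment instead
```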
<GNUtoo>I was assuming it used /root/ 's paths
<GNUtoo>and not the system's
*jackhill → afk
<lfam>GNUtoo: The su workaround:
<lfam>You won't get anything from any user if you use su
<lfam>We "fixed" it enough that you can actually use the shell, but it's not something that will work for Guix itself
<lfam>The distinctions between login and non-login shells don't matter on other distros. I guess they spend 20 years making the distinction irrelevant
<lfam>So it's tricky to come to Guix and have to learn about it
<GNUtoo>So if I guix pull && sudo guix reconfigure system.scm and then do sudo su, I should then get the latest git revision when running guix describe after using sudo so
<GNUtoo>*sudo su
<GNUtoo>I start to understand some weird behavior I got
<GNUtoo>as user, GUILE_LOAD_COMPILED_PATH is set to /run/current-system/profile/lib/guile/3.0/site-ccache:/run/current-system/profile/share/guile/site/3.0
<f1refly>I just discovered that, for a brief moment after logging in, I can use at least the primary selection with selecting text + middle click. it stops working after a few seconds, after which the clipboard does not work anymore
<f1refly>I have no idea what could cause this behaviour
<GNUtoo>But the user Guix is not the system Guix
<GNUtoo>So there can be some mismatch in some cases
<GNUtoo>which can cause the "Throw to key `record-abi-mismatch-error' with args `(abi-check "~a: record ABI mismatch; recompilation needed" (#<record-type <origin>>) ())'."
<GNUtoo>If for instance I've an old system (because I didn't do a guix reconfigure) but a recent user guix (because I did guix pull as user) then this is meant to happen as I understand
<GNUtoo>So I've to always keep the user guix and the system guix in sync as I understand, right?
<GNUtoo>So: to maintain a good state I need to (1) not install guix with guix package -i guix via sudo su, or as user (2) keep the user guix and the system guix in sync with guix pull and sudo guix reconfigure
<GNUtoo>My next question would be (C) what happens if I have 'guix' as package in my system.scm
<GNUtoo>and what happen if I don't
<GNUtoo>As I understood not having guix there would keep the last guix release for the daemon
<GNUtoo>like I'd have guix 1.3.0
<rekado_>or use "su -"
<GNUtoo>And if I do I'd have a rolling release guix (with guix home and so on), right?
<lfam>GNUtoo: I'm getting distracted by offline stuff so I can't answer in detail. Maybe somebody else can. I would say that you don't need to keep your user's Guix in sync with the system Guix. That's a nice thing about Guix System
<vagrantc>GNUtoo: you need to make sure the guix that "guix pull" builds is the first guix in your path
<vagrantc>GNUtoo: e.g. .config/guix/current/bin should come before anything else...
<GNUtoo>I've that
<GNUtoo>/home/gnutoo/.config/guix/current/bin/guix is the first
<GNUtoo>guix package -i hello fails with record-abi-mismatch-error, but GUILE_LOAD_COMPILED_PATH="" guix package -i hello doesn't
<GNUtoo>GUILE_LOAD_COMPILED_PATH=/run/current-system/profile/share/guile/site/3.0 guix package -i hello works
<GNUtoo>and GUILE_LOAD_COMPILED_PATH=/run/current-system/profile/lib/guile/3.0/site-ccache guix package -r hello doesn't
<GNUtoo>so /run/current-system/profile/lib/guile/3.0/site-ccache has an issue, and that path seems to be related to the system guix
<GNUtoo>maybe it should point to /home/gnutoo/.config/guix/current/lib/guile/3.0/site-ccache/ instead when run as user ?
<GNUtoo>Replacing /run/current-system/profile/ by ~/.config/guix/current/ in GUILE_LOAD_COMPILED_PATH makes it work fine
<GNUtoo>So maybe we need to add export GUILE_LOAD_COMPILED_PATH=[...] to ~/.guix-profile/etc/profile
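A sketch of the workaround GNUtoo describes, as a profile snippet (untested; whether this belongs in ~/.guix-profile/etc/profile is exactly the open question here):

```shell
# Prefer the user's pulled guix modules over /run/current-system's,
# mirroring the substitution reported to work above.
export GUILE_LOAD_COMPILED_PATH="$HOME/.config/guix/current/lib/guile/3.0/site-ccache:$HOME/.config/guix/current/share/guile/site/3.0"
```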
<Kolev>Does Guix have the programs to put Guix on Android? I have Graphene OS.
<GNUtoo>I managed to run programs packed with Guix on Android
<GNUtoo>It requires a sufficiently recent kernel though
<GNUtoo>Someone also wrote a blog about installing Guix itself on top of Android
<GNUtoo> http://c25o7knygjm3m67jy27yuynvv4pkfi25naucscmh4ubq2ggiig3v57ad.onion/en/guix-on-android.html
<GNUtoo>I don't recall the beginning of the non onion version of that URL though
<GNUtoo>It probably has lepellier inside that address
<Kolev>Tor says to not use IceCat with Tor, to only use Tor Browser.
<Kolev>And Tor Browser does not run on Guix.
<GNUtoo>Ah there is that too:
<lfam>Kolev: Yes, they recommend that everybody use the same browser to reduce the chance of "fingerprinting" users by their browser
<Kolev>What i cannot find is the article about GNOME guix and rhel 7
<lfam>They also implore distros to not try building and distributing tor browser, for the same reason. That's why Debian only packages the launcher, not the browser itself
<GNUtoo>If we could manage to change the addons URL to something else (like don't point to an addon website) it might be OK for FSDG distros
<GNUtoo>And that would need to be done upstream
<lfam>It's a case where there's no simple answer for Guix
<Kolev>I always just ran TBB on Trisquel but it does not work here
<GNUtoo>I run a Parabola chroot
<Kolev>That works?
<GNUtoo>for the tor-browser yes
*GNUtoo needs to share his script for that
<lfam>Potential fix for #52051 (login timeout):
<lfam>Pull from the wip-fix-52051 branch to test
<GNUtoo>Basically I run that and copy-paste the 2 commands it produces
<GNUtoo>I didn't manage to make it run commands directly in the chroot yet
<GNUtoo>For pidgin (crash), gajim (crash) and Guix (it cannot access the daemon) it didn't work though, but for all other programs it did
<attila_lendvai>i'm not sure i'm happy about the guix codebase running wild and scanning all reachable modules for possible packages... i think this should be more explicit.
<notmaximed>attila_lendvai: Where is it ‘running wild’?
<attila_lendvai>notmaximed, i have some test code, much like etc/system-tests.scm, and i've put it in a module in another repo. it's in a half-baked state, and to my surprise calling specification->manifest ends up loading it and evaluating its toplevel forms (that print something).
<attila_lendvai>also, specification->manifest is... well. not very fast. i'll look into that.
<apteryx>how else could it work? it has to scan for the available packages before it can map labels to packages, no?
<apteryx>have you byte compiled the source tree?
<attila_lendvai>another surprise is that defining two packages with the same name and version doesn't yield an error
<notmaximed>attila_lendvai: That's one of the features of guix, you can do (define my-emacs (package (inherit emacs) modify-some-stuff)) (packages->manifest [...] my-emacs))
<attila_lendvai>apteryx, i can't propose a better solution right now, but this entire setup of using two independent namespaces to keep track of packages feels strange to me (the scheme modules, and the guix package repository based on name+version)
<attila_lendvai>for example, i'm working on the go importer, and it refers to packages by their scheme variables, but it also skips packages that are present in the guix package registry => broken package definitions. i'm experimenting with using specification->package in the inputs now.
<attila_lendvai>*already present, i.e. in some other module somewhere
<attila_lendvai>notmaximed, i think that could work with a primitive like define-public that would register in the package repository, as opposed to exporting a symbol from a scheme module. and then you could (define my-emacs ...) locally, and keep it isolated from the rest of the codebase.
<singpolyma>In general I think the "package specification" concept just exists to make the cli easier to use
<attila_lendvai>and in that setup the find-package primitive would only find packages that were already imported into scheme
<singpolyma>Normally you don't need package specification of course, and in an ideal world it would go away
<attila_lendvai>singpolyma, without specification->package i couldn't deal with the situation i'm facing (see importer description above)
<singpolyma>attila_lendvai: I'm not sure I understand your problem. The importer outputs code that you are expected to put somewhere that imports any needed modules
<attila_lendvai>well, not specification->package per se, but the ability to look up packages in the repository (not only through scheme global variables)
<Kolev>GNUtoo: how do you do the paste subdomain?
<attila_lendvai>singpolyma, that already assumes that the scheme variables of the dependencies were named in a disciplined way to match the algo that produces it in the importer.
<notmaximed>attila_lendvai: The problem with such a primitive, is that it would require loading the relevant package modules before you can specification->package. And then you might as well do specification->package directly
<singpolyma>attila_lendvai: yes. That is something that needs to be true with the importer
<attila_lendvai>...which is not always the case. e.g. the version is not included in the name, or the name is totally ad-hoc by a human.
<attila_lendvai>notmaximed, most probably some form of API would still be needed to load all reachable modules, but i think it should be an explicit call by the user, and never done implicitly
<singpolyma>I guess the importer could maybe find the "real" variable names in case there are differences. Or else the name in guix should be patched to match what the importer produces. Or both
<notmaximed>attila_lendvai: I'm not sure what you are referring to?
<notmaximed>Is this about (guix modules)?
<notmaximed>or specification->package?
<attila_lendvai>singpolyma, another issue: i'm importing two golang apps with --pin-versions into two modules. both have 100+ transitive closure of deps, and the two sets overlap. now, which one should include which? especially when the two apps have not much to do with each other...
<GNUtoo>Kolev: through apache and DNS configuration
<notmaximed>attila_lendvai: Do you mean two go packages with a different version?
<notmaximed>If so, then maybe ignore the version pinning and use the latest version (assuming backwards-compatibility)
<attila_lendvai>notmaximed, let's name things: there are two namespaces now that keep track of packages: 1) global variables in scheme modules, 2) a repository of (cached/loaded) packages maintained by guix.
<Kolev>GNUtoo: is a paste service actually running there?
<notmaximed>If there are API-incompatibilities, then I suppose it should be possible to include multiple major versions in guix.
<singpolyma>attila_lendvai: if the two apps aren't going into guix then I guess neither needs to import the other. If they are going into guix probably they can share a third module or both go in the same module
<Kolev>singpolyma: Is JMP currently running on foreign Guix?
<singpolyma>Kolev: parts of it. The migration is ongoing
<lfam>I recommend using the exact versions (per commit) specified in Go software
<Kolev>singpolyma: cool.
<Kolev>Can Guix do Dart? I want to write a Flutter app.
<attila_lendvai>notmaximed, what i was talking about in the last couple of comments is a hypothetical setup where 1) is never scanned, and only 2) is The Repository. loading a module would result in the defined packages getting registered into 2). and there could be an API to scan 1) and load every package reachable into 2), but it should be an explicit operation, and never called implicitly by the guix package lookup API functions.
<singpolyma>Kolev: Can do anything if you believe hard enough and type enough parens ;)
<attila_lendvai>lfam, that's what i'm doing, but there still are a lot of overlaps between the two apps
<lfam>There's no "right answer" yet
<lfam>Do what works for you
<Kolev>Is Conversations written in Java or Dart?
<singpolyma>attila_lendvai: that seems backward to me. 1 (guile vars) is more explicit than 2 (string names)
<notmaximed>The (2) is currently hypothetical: there is no cache of loaded packages. However, there is a database mapping package names+versions to the module.
<singpolyma>Kolev: Java
<attila_lendvai>yeah, i'm trying to cross the jungle, but i'm getting too deep too quickly... discussing the package registry architecture on IRC... :)
<notmaximed>attila_lendvai: The point of specification->package is to make the module resolving implicit.
<notmaximed>So if you don't like this implicitness, you could simply not use it.
<notmaximed>Or maybe add an optional #:use-these-modules '((my this) (my that)) keyword argument.
<notmaximed>Anyway, Guix doesn't use specification->package anywhere itself (with a few exceptions, e.g. in the guix pull code and in the CLI)
<bdju>the emails to bug-guix with [BUG] in the name don't seem to open in my mail client. very odd. two different issue #s and from different people or else I was gonna blame the sender
<attila_lendvai>notmaximed, but i cannot not use it if i want to produce an importer that is reliably producing package definitions that can be loaded without human editing. and i don't want to hand-edit 100+ packages at every new release of an app that i ultimately want to compile on guix so that it reproduces the officially signed binary release...
<lfam>bdju: They don't open?
<bdju>I see "mime: no media type". running aerc on an arch server (guix hasn't packaged aerc yet) so probably mostly off-topic, just felt like mentioning it
<bdju>lfam: yeah
<bdju>I imagine if I open webmail they'll load fine
<lfam>bdju: Can you give one of the bug numbers?
<attila_lendvai>notmaximed, i guess my short message is this: using the scheme module namespace for keeping track of packages brings with itself constraints that i'm hitting right now
<bdju>oh, I see there are a bunch of emails like that recently. I was still working through ones from earlier
<lfam>Works in Mutt bdju. I guess that aerc is not yet bloated enough to read these messages ;)
<bdju>#52684 has the [BUG] in it but opens. I wonder what's different
<lfam>Give aerc a few years to accumulate cruft... I mean, more features
<notmaximed>attila_lendvai: I suppose you can add a --recursive-even-import-go-packages-already-in-guix-and-import-multiple-versions-like-upstream-wants & dump the entire (go-part of the-) package  graph in a single file.
<bdju>haha alright
<rovanion>Bah, wanted to bisect the clojure-tools issue but the last known good commit I had was so far back there were too few substitutes for it to finish in reasonable time. Oh well.
<lfam>I think these messages were created via the bug reporting web interface, and have a bunch of attachments, bdju. Also, the bug reporter had some questions about the format of the emails too
<bdju>oh, interesting
<lfam>So maybe something is amiss and Mutt has learned to read weird emails after 25 years
<attila_lendvai>notmaximed, i'm not saying i can't solve it. what i'm saying is that i don't see a reasonably not-ugly solution.
<lfam>bdju: You might report it upstream to aerc
<notmaximed>With output like (define-public go-something-0.1.2 [...]) (define-public go-something-3.2.1 [...]) [...] (define-public go-something-else-0.1.2 [...]).
<lfam>I thought about packaging / using aerc and then came to my senses
<lfam>I already spent years learning how to use Mutt. Why would I ever change that
<bdju>I never spent too much time with mutt, so I don't know how it compares. I have enjoyed aerc (aside from a few bugs), though.
<lfam>Besides, we are talking through a conversation about how hard it is to package Go software ;)
<bdju>I'll probably report the issue upstream later. good idea.
<bdju>oh, ha, I didn't even notice
<attila_lendvai>my original plan was to smarten up the importer, and import the trans.closure of the deps and the app itself into an isolated scheme module. but... see the above headaches.
<lfam>But, I spent so many hours configuring mutt
<lfam>It's not amazing either
<bdju>I should probably try mutt sometime and have it as a backup
<singpolyma>I wish more true GUI apps focused on power users and minimalism and keyboard navigation
<lfam>singpolyma: I guess that few people are using native apps these days
<singpolyma>Being trapped in a terminal is not always helpful to mutt, but no GUI comes close right now
<lfam>There's not much development activity on native apps at all
<singpolyma>lfam: well, few people spend much time with PCs at all. Just smartphones and sometimes tablets
<lfam>And of course, with a mail GUI, you need to get feature parity with what Microsoft offers, or it's not competitive
<lfam>And that's not feasible
<lfam>Same with calendar apps
<singpolyma>lfam: no, I want feature parity with Mutt, not with outlook :P
<lfam>I know, but I'm talking about development. Not usage
<lfam>We have to consider how developers choose what to use
<lfam>I mean, what to work on
<notmaximed>attila_lendvai: Letting a package-searching procedure depend on what modules are already loaded seems ugly to me?
*attila_lendvai will consider notmaximed's suggested solution while going outside a little
<bdju>singpolyma: agreed about GUI apps, I think the best example of a good GUI is actually emacs since it basically works the same as the TUI version but with added benefits of mouse support and windows being separated by more than just an ascii line
<singpolyma>Mutt is great, I just wish it drew using gtk widgets instead of ncurses. But yeah, probably the value to the mutt devs is not there to do that work of course
<singpolyma>bdju: yes. I have mildly considered trying an emacs MUA so I get the GUI-ness for free
<singpolyma>It's sort of cheating but sort of not
<attila_lendvai>notmaximed, not to me. there would be a function like load-packages-from-modules-recursively... it's just who calls it: the user, or the system implicitly.
<singpolyma>gVim is similar of course
<notmaximed>attila_lendvai: the procedure you're suggesting doesn't seem to depend on what modules are already loaded, I presume that procedure takes a ‘#:use-modules-from-these-packages’ argument?
<jacereda>I've decided to go all-in with guix and install on a 2013 macbook pro, but the installer won't recognize the wifi card, even if it says it should (bcm43xx). It shows 'Missing Free firmware (non-free firmware loading is disabled)' and 'Direct firmware load for /*(DEBLOBBED)*/ failed with error -2'. Any idea?
<notmaximed>The eventual goal was to reproduce the official binary of go-ethereum, to prevent ‘financial loss due to miscompilation’, I think?
<notmaximed>That seems sound to me from a ‘diverse double-compiling’ perspective (though that's actually a slightly different thing).
<attila_lendvai>notmaximed, it would be the same thing that is currently in guix, based on fold-module-public-variables, but invoked by the user as needed, not by the system implicitly. i didn't think through what its API would look like.
<lfam>It's a bit silly that financial loss is a risk of miscompilation of a package. Although I always advocate for using the exact dependency graph specified upstream for Go software
<notmaximed>But I don't see the ‘invoked by the user as needed/by the system implicitly’ thing, because the former is currently the case.
<lfam>Like, what is really the point of that stuff
<lfam>I don't lose my money if my web browser has a bug on my bank's website
<lfam>And then, why not just use upstream's binary or bundled source code
<attila_lendvai>notmaximed, yes, because the binary has the digital signature, and the guix package could verify it at the final step of the build. it's a far cry at this point, though...
<GNUtoo>Kolev: not really, I paste through ssh
<lfam>What is the actual security boundary here? I'm guessing that it's Github's authentication system?
<attila_lendvai>notmaximed, ha! unless i can convince them to use guix to produce their releases... :)
<lfam>If you do that, maybe consider asking them to contribute to Guix in some way
<notmaximed>(about the financial loss thing) Why are go packages special here? Because if my browser has some security bug (caused by a compilation mistake or whatever), then when I log in to the e-banking system, an attacker could use that security bug to steal money or whatever.
<lfam>It's kind of sad for Guix to become a target of supply chain attacks from cryptocurrency thieves, but to not be supported by the cryptocurrency projects themselves?
<attila_lendvai>lfam, it's not probable, but i still don't want to be the guy who is reading the email account that is in the commit... :)
<vagrantc>jacereda: sounds like it won't be able to support your network card without non-free firmware, and guix does not support non-free firmwares
<lfam>notmaximed: They could do that, but at least in the US, you'd get your money back
<lfam>Consumers are protected from that type of fraud by law
<jacereda>vagrantc: it's one of the 2 cards mentioned in the supported devices
<vagrantc>oh, curious
<jacereda>it's supposed to use b43-open
<notmaximed>lfam: right (also in the EU, I presume, though it's still irritating if someone tries to pull something like that, and not everyone would notice ...)
<lfam>Yeah, it's definitely disruptive. But your money will be returned to you
<attila_lendvai>notmaximed, oh, i see. the 'invoked by the system' thing is in my head because i (wrongly) assumed that specification->package is more than just a random helper for CLI work.
<vagrantc>maybe a bug in linux-libre ... e.g. they removed support for loading firmware even though there is a free firmware available?
<lfam>And I agree, my objection to the entire enterprise is not specific to Go.
<jacereda>vagrantc: looks like that's the reason, I don't know how to proceed
<notmaximed>Then I presume the difference in financial loss between the go package and the browser is that the go package is ethereum, a cryptocurrency, where reverting fraudulent transactions is much more difficult
<lfam>In theory, it's actually impossible
<attila_lendvai>lfam, i have already created packages that use the upstream binary, but they won't be accepted into guix, and i want my swarm service in guix... => i need a source-based package (or i need to convince the maintainers, but i didn't even attempt that).
<notmaximed>Unless the person doing the fraud is caught.
<lfam>Even then, the money may not be accessible
<lfam>Or, the "cryptocurrency" or whatever is supposed to represent the value
<jacereda>should I start with an installer using plain kernel instead of linux-libre?
<jacereda>and how can I do that?
<lfam>Well, what I'm talking about is totally offtopic
<lfam>jacereda: It's easier said than done
<lfam>I'm surprised the installer doesn't include the b43 firmware
<jacereda>it supposedly does
<jacereda>the manual: One of the main areas where free drivers or firmware are lacking is WiFi devices. WiFi devices known to work include those using Atheros chips (AR9271 and AR7010), which corresponds to the ‘ath9k’ Linux-libre driver, and those using Broadcom/AirForce chips (BCM43xx with Wireless-Core Revision 5), which corresponds to the ‘b43-open’ Linux-libre driver. Free firmware exists for
<jacereda>both and is available out-of-the-box on Guix System, as part of ‘%base-firmware’ (*note‘firmware’: operating-system Reference.).
<lfam>Is there a reference for that?
<notmaximed>lfam: If they are caught, then even if the cryptomoney is inaccessible, the fraudster could be forced to pay back the victim out of their bank account (following the exchange rates), I presume? And ‘deurwaarders’ are a thing. (not sure about the translation). Anyway, let's stop because off-topic. (#guix-offtopic if you want to continue)
<rekado_>here’s the strace error I get when running “guix deploy” on berlin:
<rekado_>Throw to key `decoding-error' with args `("scm_from_utf8_stringn" "input locale conversion error" 0 #vu8(1 2 48 11 4 55 12 5 56 9 7 57 10 8 15 49 13 14 55 17 56 31 32 92 39 34 60 60 48 58 58 48 62 62 49 126 127 128 255))'
<rekado_>that’s building /gnu/store/35wj4qpy4vlq43x3m4bvjfn9y956f27a-strace-5.13.drv
<vagrantc>wow, did #linux-libre not move off of freenode?
<attila_lendvai>lfam, notmaximed, one possible way you can lose money is if you stake your crypto in a Proof of Stake setup (as opposed to the old Proof of Work blockchains): you must adhere to the consensus protocol, otherwise you are punished by losing part of your staked money. using a different dependency can result in your executable not behaving as your peers expect, and you may get punished.
<lfam>That's absurd
<lfam>Obviously I have made up my mind on the whole subject
<lfam>I'm not sure if %base-firmware is used in the installation image or not
<notmaximed>attila_lendvia: Is Proof of Stake & swarm & etc. roughly equivalent to bitcoin mining / merely participating in the consensus protocol?
<notmaximed>Or is it a ‘client’ thing?
<lfam>Anyways, to reiterate: if cryptocurrency projects are basing their development on Guix, I hope they will contribute to Guix somehow
<lfam>Otherwise, it's really not fair to make us a target
<vagrantc>ah, #gnu-linux-libre
<notmaximed>If it's daemon stuff like GNUnet's or IPFS' daemon (not a crypto, but has decentralised protocols), then I don't see how money could be lost.
<notmaximed>As long as you don't participate in the equivalent of mining.
<jacereda>lfam: if I'm reading correctly, it uses the default linux-libre kernel, and that one includes openfwwf-firmware (b43-open)
<notmaximed>go-ethereum depends on docker? (
<lfam>jacereda: Where do you see that?
<jacereda>lfam: in firmware.scm and install.scm
<notmaximed>attila_lendvai: Ignoring the DDC-style reproducibility test, is using the exact same version of _every_ package required? Because the dependency go-isatty doesn't seem relevant to the consensus protocol.
<lfam>jacereda: I don't see the strings 'firmware' or 'b43' in gnu/system/install.scm
<jacereda>lfam: installation-os doesn't have a kernel argument, so I guess that means "use the default kernel"
<lfam>Right, but is there something that leads you to think that the default kernel comes with this particular firmware package?
<lfam>The firmware you need is part of %base-firmware, but I'm not sure that %base-firmware is used by the installation image
<lfam>It's available by default on Guix System, but the installation image may not use the defaults
<vagrantc>... so the natural question is why not?
<lfam>And, if your wifi card isn't working, it's likely that the firmware is missing
<lfam>An oversight, I suppose, vagrantc
<vagrantc>but from the message, it suggests that linux-libre will not be able to load it
<jacereda>lfam: if you don't say otherwise, firmware defaults to %base-firmware
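For reference, this is roughly what that default looks like in a Guix System declaration. A minimal sketch only, with field names as documented in the manual; the extra firmware package is hypothetical:

```scheme
;; Sketch of an operating-system declaration.  When the firmware
;; field is omitted it defaults to %base-firmware, which carries
;; the ath9k and b43-open (openfwwf) firmware.
(operating-system
  ;; ... host-name, bootloader, file-systems, etc. elided ...
  (firmware (cons* some-extra-firmware   ; hypothetical package
                   %base-firmware)))
```

Whether the installation image actually keeps this default is the open question in the discussion above.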
<lfam>The whole point of these firmware packages is for linux-libre to load them
<vagrantc>i don't know how it could if the kernel is spitting out 'Direct firmware load for /*(DEBLOBBED)*/ failed with error -2'
<lfam>And that message is definitely related to loading of this firmware in question?
<lfam>And not some other component of the system?
<opalvaults[m]>new install on laptop, getting a huge backtrace when trying to guix install. `ERROR: In procedure getcwd: In procedure getcwd: No such file or directory
<jacereda>lfam: yes, it's this message, it should be attempting to load b43-open/ucode5.fw
<jacereda>but the deblobbing process probably messed that up
<opalvaults[m]>can't use any guix cli commands
<lfam>Well, that's a regression if I understand correctly, jacereda. We wouldn't have added this firmware if it didn't work with linux-libre, unless I am mistaken
<lfam>That's the point of including it in our repository
<lfam>We'll need to figure out what's going on
<opalvaults[m]>(anyone know of a good pastebin with little/no javascript?)
<apteryx>see the topic
<opalvaults[m]>oh, derp. thanks apteryx
<apteryx>you're welcome!
<vagrantc>anyone up for a campaign to fix typos and various synopsis,description issues that guix lint finds before the next guix release?
<cybersyn>lfam: I agree, if major cryptocurrency companies are building with guix, they should at the very least help the project out as it gets weighed down by the side effects of their industry. its all pro-FLOSS until it comes to paying free software developers
<vagrantc>it would be nice to at least tidy all that up before release... because if not then, when? :)
<opalvaults[m]>here's the backtrace
<opalvaults[m]>+ command used to produce it
<jacereda>I've mounted the iso and it contains the b43-open firmware inside the firmware package, so I guess it's a problem with the kernel sources; the deblobbing process must have messed up the filename
<vagrantc>seems like typos and other description/sysnopsis stuff are relatively low-hanging fruit
<notmaximed>Seems like I was disconnected and a few messages didn't come through:
<notmaximed>attila_lendvai: Ignoring the DDC-style reproducibility test, is using the exact same version of _every_ package required? Because the dependency go-isatty doesn't seem relevant to the consensus protocol.
<notmaximed>And go-libpcsclite doesn't seem very relevant either?
<notmaximed>Anyway, ethereum has lots of consensus tests ( and many dependencies don't seem very relevant to the consensus protocol, so I think it is more likely to lose money by filling in the tax forms suboptimally than due to a miscompilation.
*notmaximed is not a consensus expert
<notmaximed>Probably a good idea to use the same versions for the crypto dependencies and bloom map though ...
<opalvaults[m]>resolve'd it. no idea what caused it to happen tho. :(
<vagrantc>jacereda: are you sure it's a b43-open driver and not the b43 driver?
<vagrantc>jacereda: or b43legacy ?
<civodul>cbaines: i'm reconfiguring bayfront with the rsync mcron jobs
<jacereda>vagrantc: it's located in <myiso>/gnu/store/ldpk1mizggf6q31ry7yx647isd9pq9w5-firmware/lib/firmware/b43-open
<cbaines>civodul, ok, is there anything specific that I need to keep in mind?
<jacereda>and the message also has another message later on saying 'Firmware file "b43-open//*(DEBLOBBED)*/.fw not found
<jacereda>looks like it's looking for it in two locations and can't find it in either of them
<civodul>cbaines: no; it'll populate directories in /srv, it shouldn't take too much space (just web site bits)
<jacereda>s/and the message/and the log/
<vagrantc>jacereda: not all b43 devices are supported by that firmware...
<cbaines>civodul, cool, good to know :)
<jacereda>vagrantc: I know, but if it can't load the firmware none of them will be supported I guess
<vagrantc>jacereda: true
<civodul>cbaines: and you can list the mcron jobs with "herd schedule mcron 50" or similar
*civodul goes afk for a bit
<vagrantc>jacereda: looking at the source from guix build --source linux-libre ... it looks like the codepath that would load b43-open is not mangled with the DEBLOBBED patches, so i'm guessing you have an unsupported chipset
<vagrantc>jacereda: drivers/net/wireless/broadcom/b43/main.c
<vagrantc>jacereda: or it's somehow detecting it as a different device for some reason
<jacereda>vagrantc: good, thanks... I'll take a look at that file.
<apteryx>jpoiret: I've yet to try the fix for dbus auth_timeout, but it looks promising!
<jpoiret>did you figure out anything new?
<apteryx>I wonder why we trigger it now and not then; could it be that something is slower than it used to be?
<apteryx>jpoiret: no, I've been focusing on the version-1.4.0 branch, integrating bits from left and right before the big rebuild
<apteryx>I'll take the time to reboot with the fix tonight to test it :-)
<apteryx>thanks a lot for going to the length of swapping a drive into your X200 to reproduce the issue!
<atuin>hi, after an upgrade stumpwm is showing weird fonts, anyone having the same problem?
<apteryx>xorg fixed their default DPI
<apteryx>you'll want to set it with xrandr --dpi if you don't like the properly computed (I assume) DPI setting
<atuin>mmm interesting because when i try the config using Xephyr it seems to work fine
<atuin>that could make sense, the fonts are loaded but the size is weird
<atuin>i will investigate that then, thanks
<jpoiret>Oh, just read the thread, it's great that the root of that issue was figured out. Although the "elogind is already running" message still being present puzzles me a bit
<jpoiret>but I guess maybe the dbus registration process is asynchronous and takes a long time, so elogind writes its PID before having fully registered its name
<jpoiret>that could cause bugs down the line
<apteryx>jpoiret: I guess I could still leave the synchronization I added with the extra requirements
<apteryx>that should fix these
<apteryx>as SSDs become prevalent I guess more and more software will hit these issues
<apteryx>I also see timeouts on this machine when running WebDriver via Selenium things (chromedriver or geckodriver) -- they just take too long to start and something times out
<apteryx>I suspect that's also the reason I'm often seeing failures from 'guix offload' or substitutes (poor IO and timeouts).
<rekado_>2.7TB copied
<apteryx>jpoiret: was the root cause figured out? what was it?
<apteryx>oh, I think you meant the 'dbus timeout'. Yes, but that doesn't explain why nobody ever hit that, although the default timeout was last touched years ago
*attila_lendvai had to go AFK, sorry
<apteryx>nobody hit that prior to the recent merge. something has either become much heavier in the software stack or the kernel has become slower at IO
<jgart>Hi, does someone happen to know why python-build-system won't do "the correct thing" with this file?
<jgart>It's a one liner
<jgart>Here's the guix package for that library:
<KE0VVT>rekado_: apteryx: Could I clone my HDD disk to an SSD and see the login issue go away?
<civodul>cbaines: do you have an idea of the build backlog of bordeaux?
<civodul>i'd like to revert the commit that authorizes ci.guix substitutes
<rekado_>we now have two new build nodes
<rekado_>125 with 256G memory and 130 with 192G.
<drakonis>KE0VVT: what login issue?
<drakonis>is it actively preventing you from logging in?
<apteryx>KE0VVT: have you tried the fix yet?
<KE0VVT>apteryx I nuked that system, but I guess I'll get the issue again if I reinstall.
<apteryx>but yes, using an SSD would probably prevent the timeout in the first place
<rekado_>I just found something silly
<rekado_>mjpegtools is actually libmms
<rekado_>the source hash of mjpegtools is that of libmms
<rekado_>copy paste gone wrong
<apteryx>KE0VVT: you could try pulling guix from a fresh installer image to the wip branch
<apteryx>otherwise you can wait a couple days and it'll be in master
<apteryx>and hopefully a new iso image available with it
<KE0VVT>apteryx: I'll wait. It's the workweek, and I should not be fiddling with my computer when I need to use it.
<KE0VVT>I'm very happy to see a fix. I miss having my system under Guix.
<civodul>rekado_: now that's interesting: a real-world case!
<civodul>"GUIX_DOWNLOAD_FALLBACK_TEST=none guix build mjpegtools --check -S" FTW
<rekado_>bleh, node 125 is a weirdo
<rekado_>there seem to be *two* serial interfaces connected at the *same* time
<rekado_>so when I log in I’m both logged in and not logged in
<civodul>Schrödinger's login
<rekado_>my input appears to get split between two different gettys
<rekado_>quite frustrating
<civodul>weird indeed
<rekado_>and it’s one of the old management interfaces that don’t support backspace
<rekado_>mistype any character and you gotta start all over again
<jackhill>apteryx: did you come to a conclusion about GSTREAMER_GL? My build didn't retain a reference to gst-plugins-bad, but I'm not sure how to check if GSTREAMER_GL is working
<apteryx>it probably will if the needed plugins are installed
<apteryx>not sure how to test it either
<jackhill>ok, thanks! We could probably get help from upstream on testing.
<jackhill>apteryx: so it sounds like the path forward might be to expand on the description as you propose, enable GSTREAMER_GL, and then wrap the browsers with additional plugins, perhaps using lfam's suggestion of gst-plugins/selection. Does that sound right to you?
<apteryx>the first 2 will be on the version-1.4.0 branch (committed locally) -- about wrapping browsers, I'm on the fence about it
<nckx>I'm flailing around with possible solutions to <>. Is there any possible way (not even necessarily sane: I want to know my options) of capturing this-operating-system's kernel field when expanding %base-initrd-modules?
<nckx>Yes, ick, etc.
<jackhill>apteryx: cool, thank you! My thoughts are that browsers should work out of the box, especially if we can do it easily, and perhaps minimally with gst-plugins/selection. How do we come to a consensus?
<apteryx>you could send an email to start a discussion around it on guix-devel
<apteryx>it usually receives more attention than guix-patches or guix-bugs
<jackhill>apteryx: ok, sounds good to me, I'll do that. Well, first I guess I'll try to get a proof-of-concept working to discuss :)
<KE0VVT>Will Borg Backup follow symlinks all the way to /gnu/store/? O_O
<apteryx>jackhill: OK!
<KE0VVT>Man, I love how EVERYTHING is a package in Guix. Browser extensions, Emacs packages, everything!
<singpolyma>KE0VVT: is that not normal? Both of those are packages in Debian also
<vagrantc>the community isn't packaged yet...
<vagrantc>hard to bootstrap
<KE0VVT>singpolyma - I did not know. Wow, even in Fedora: mozilla-ublock-origin.noarch : An efficient blocker for Firefox
<vagrantc>reproducibility? eesh.
<apteryx>KE0VVT: I've used rsync once to migrate disks; worked great
<apteryx>if you had btrfs snapshots you could also 'btrfs send' them
<KE0VVT>Everyone likes Alacritty, but it does not work on my machine. I have to use Foot.
*jackhill happy foot user here
<jackhill>KE0VVT: I assume your graphics card doesn't support a new enough API for alacritty (or kitty)? I've found that they do work with LIBGL_ALWAYS_SOFTWARE=true
<KE0VVT>"LIBGL_ALWAYS_SOFTWARE=true alacritty" works, jackhill. :D
<jackhill>KE0VVT: you're welcome
<KE0VVT>jackhill: Sadly, "LIBGL_ALWAYS_SOFTWARE=true" in ".bashrc" does not let me run "alacritty".
<apteryx>KE0VVT: you'd want that in .bash_profile, and export it (then relogin)
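The distinction apteryx is pointing at can be sketched in plain shell: a bare assignment stays local to the current shell, while an exported one is inherited by child processes (such as alacritty launched after the next login):

```shell
# A plain assignment is NOT visible to child processes:
LIBGL_ALWAYS_SOFTWARE=true
sh -c 'echo "child sees: $LIBGL_ALWAYS_SOFTWARE"'   # child sees: (empty)

# Exporting it (as you would in ~/.bash_profile) makes children
# inherit it:
export LIBGL_ALWAYS_SOFTWARE=true
sh -c 'echo "child sees: $LIBGL_ALWAYS_SOFTWARE"'   # child sees: true
```

This is also why the setting belongs in ~/.bash_profile (read by login shells) rather than ~/.bashrc, and why a re-login is needed.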
<nckx>Normal back-up software doesn't follow symlinks at all.
<nckx>Borg is no different.
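nckx's point is easy to check with a throwaway demo under /tmp: standard archivers store the symlink itself rather than following it, which is also Borg's default behaviour.

```shell
# Create a symlink and archive its directory with tar: the archive
# entry is the link itself, not the /gnu/store-style target it
# points to.
mkdir -p /tmp/symlink-demo/src
echo data > /tmp/symlink-demo/target.txt
ln -sf ../target.txt /tmp/symlink-demo/src/link.txt
tar -C /tmp/symlink-demo -cf /tmp/symlink-demo/archive.tar src
tar -tvf /tmp/symlink-demo/archive.tar   # listing shows: link.txt -> ../target.txt
```

So a Borg backup of a Guix profile records the symlinks into /gnu/store, but not the store items themselves unless /gnu/store is included in the backup set.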
<roptat>uh, I suddenly have this error in my guix checkout: "guix build: error: gcry_md_hash_buffer: Function not implemented"
<roptat>I just entered a new guix shell
<nckx>rekado_: Did you restart mumi since ~yesterday, or did it decide to resume fetching new bug numbers on its own?
<roptat>guix describe says it's from dec 18