IRC channel logs

2023-04-17.log


<apteryx>wow, I could build jami on core-updates, and it weighs 1610.2 MiB. That's down about 200 MiB from current master. It's nearly half what it used to be ^^'.
<jackhill>apteryx: 🎉
<podiki[m]>if anyone can take a quick look: https://paste.debian.net/1277566/
<podiki[m]>style question, should the system check just be a let for the whole package? or even a rust? boolean directly?
<apteryx>podiki[m]: I think there's a new fancier way to check for rust support
<apteryx>package-transitive-supported-systems
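For reference, a minimal sketch of the suggested check (illustrative only; how it would be wired into podiki[m]'s package is an assumption):

  ;; At a Guix REPL or in a package module:
  (use-modules (guix packages)
               (gnu packages rust))

  ;; Systems on which `rust' and its whole dependency chain can be built:
  (package-transitive-supported-systems rust)
  ;; => a list of system strings such as "x86_64-linux"

  ;; A package needing Rust could reuse that list in its own
  ;; `supported-systems' field:
  ;; (supported-systems (package-transitive-supported-systems rust))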
<oriansj>does anyone know how to specify a kernel version in guix?
<oriansj>I have tried both: (kernel linux-libre "4.19.280") and (kernel linux-libre@4.19.280) with no success
<podiki[m]>I would think (kernel linux-libre-4.19) for instance, i.e. the variable name
<podiki[m]>which is the usual linux-libre-x.y form, I believe
<oriansj>yep, seems to be working
<oriansj>now, hopefully this version will work with intel-vaapi-driver-g45-h264 on my librebooted x200
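A minimal operating-system sketch of the kernel pinning discussed above (host name, devices, and the rest of the declaration are placeholders):

  (use-modules (gnu)
               (gnu packages linux))

  (operating-system
    ;; Name the versioned package variable rather than a version string.
    (kernel linux-libre-4.19)
    (host-name "example")                        ;placeholder
    (timezone "Etc/UTC")
    (bootloader (bootloader-configuration
                 (bootloader grub-bootloader)
                 (targets (list "/dev/sda"))))   ;placeholder
    (file-systems (cons (file-system
                          (mount-point "/")
                          (device "/dev/sda1")   ;placeholder
                          (type "ext4"))
                        %base-file-systems)))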
<podiki[m]>apteryx: did not know about that, but doesn't seem to show aarch64 for rust
<podiki[m]>which I guess is true, but I thought that e.g. librsvg on aarch64 can use the current version
<bjc>system built with core-updates \o/
<podiki[m]>woo!
<apteryx>cool! how many local patches do you have?
<bjc>well, it booted. i'm getting a bunch of “unable to set locale” messages, though
<bjc>sorry: “guile: warning: failed to install locale”
<apteryx>did you need to author a few patches?
<bjc>i'm still sitting on a couple in order to get this far
<bjc>i can't actually run a pre-inst-env guix atm. i'm hoping bootstrap+configure will fix it
<podiki[m]>with my local fix for samba wine builds on core-updates; need to check same patch applies on master
<podiki[m]>samba is also outdated and has CVEs too, let me see if latest version builds as is
<podiki[m]>submitted https://issues.guix.gnu.org/62894
<unmatched-paren>morning guix :)
<jlicht>hey guix
<abrenon>hi guix
<jlicht>o/
<civodul>Hello Guix!
<minima>\o/
<abrenon>o/ civodul
<andreas-e>Hello all!
<abrenon>: )
<cbaines>morning o/
<civodul>i was sunbathing while y'all were working on core-updates :-)
<civodul>how did it go? what's left to be done?
<andreas-e>See my last mail on guix-devel. Mainly Python I would say, but every little piece helps.
<patched[m]>How do you guys go about specifying the available home directories? Optimally, I'd like to be able to specify it from my home configuration, but don't know how I'd best do it.
<civodul>patched[m]: hi! what do you mean by "available home directories"? usually there's just one of them?
<civodul>andreas-e: awesome, i'll check it out and try upgrading my home
<andreas-e>The tests of ghc@9.0 currently fail, which is also a Python problem ;-) "testlib.py:AttributeError: module 'collections' has no attribute 'Iterable'". A matter of adding ".abc" to "collections"?
<andreas-e>But maybe we can drop it altogether, since ghc@9.2 is bootstrapped from ghc@8.10?
<civodul>maybe we should send a poll to see whether Haskellers might need 9.0 for development purposes?
<abrenon>I know I don't (my system is at 9.2.5 and I'm happy with that)
<Guest758>Hi. guix pull can be very slow sometimes. do *all* the commits get pulled all the time? if yes, would that be necessary or is --depth 1 possible?
<andreas-e>Is it the first time you pull? Only the new commits are pulled.
<andreas-e>The git checkout is cached in ~/.cache/guix/checkouts
<andreas-e>What takes most of the time for me is building the derivation.
<Guest758>not the first time, but i leave some time between pulling.. good to know that it's only the new commits
<Guest758>true, it's the derivation building that takes time. thanks for pointing me to the path
<patched[m]><civodul> "patched: hi! what do you mean by..." <- I mean the directories in home, like "documents", "temp", etc.
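One possible answer, assuming the XDG user-directories service from (gnu home services xdg) is what patched[m] is after (a sketch; the directory mappings are illustrative):

  (use-modules (gnu home services xdg)
               (gnu services))

  ;; In the `services' field of the home-environment:
  (service home-xdg-user-directories-service-type
           (home-xdg-user-directories-configuration
            (documents "$HOME/documents")
            (download "$HOME/temp")))            ;illustrative mappings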
<zimoun>andreas-e: ghc-* packages are currently building on core-updates x86_64. So it seems fine, no? For i686, well GHC is still an issue.
<andreas-e>Yes, it is advancing well. The excessive build time is more of a general problem - we need days on a powerful build farm to bootstrap to the latest version!
<andreas-e>OpenJDK is similar, but there one build takes about half an hour and not 6.
<andreas-e>For i686, I launched it on berlin by hand without offloading to see what will happen. Probably a one hour timeout...
<civodul>andreas-e: i'm going for lunch, but if that offloading bug happens, could you let me know the details so i can ssh in and see what's going on?
<andreas-e>Okay! I need to provoke it by building things by hand. I do not know whether it happens with cuirass.
<sughosha>Hi, how to add a patch from a local file into a package? I tried (patches (local-file "filename.patch")) but this gave an error.
<sughosha>I found the solution. I had to use (list).
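A sketch of the fix sughosha describes (the base package and patch file name are placeholders):

  (use-modules (guix packages)
               (guix gexp))

  (define my-patched-package                     ;hypothetical name
    (package
      (inherit some-package)                     ;hypothetical base package
      (source
       (origin
         (inherit (package-source some-package))
         ;; `patches' expects a list, hence the (list ...) wrapper.
         (patches (list (local-file "filename.patch")))))))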
<bjc>after reconfiguring my system w/ core-updates and rebooting, ‘./pre-inst-env -- guix ...’ is now failing with: git/bindings.scm:66:8: In procedure git_libgit2_init: Function not implemented
<bjc>system guix works, though. i'm not sure what the problem is
<zimoun>civodul: for GHC, you can see kind of timeout already on master for i686 https://ci.guix.gnu.org/build/478772/log/raw
<PurpleSym>Unfortunately there's no better bootstrap path for GHC other than building the entire chain.
<PurpleSym>Perhaps we can just disable the offending tests on i686? The staging merge unfortunately interfered with me testing that option.
<abrenon>why is the log on CI about 9.2.5 and people (at least Andreas in his email) mention 9.0 ?
<PurpleSym>Because multiple versions of GHC are currently broken on i686: 8.10, 9.0 and 9.0.
<PurpleSym>9.2, sorry.
<abrenon>I had assumed so much, don't worry : )
<abrenon>oh, that's only on i686 ? sorry, I hadn't followed this issue at all
<abrenon>ACTION will stop the useless local build trying to reproduce on x86_64ā€¦
<civodul>overdrive1 is back! apparently it was stuck since March 29th, oops
<PurpleSym>Annoyingly python-anyio is also broken now, which is needed by the Jupyter universe.
<zimoun>abrenon: just pointing to civodul another timeout failure about GHC, since they wanted one for investigations. :-)
<abrenon>but that's unrelated ?
<civodul>zimoun: i need one that's happening right now though, so i can debug it :-)
<zimoun>unrelated to the current troubles of ghc on core-updates? yes and no. ;-)
<zimoun>civodul: that’s impossible because now ghc has been built by hand. So the only way would be to relaunch the build.
<zimoun>While relaunching the build of 9.2.5 seems possible.
<zimoun>As I am suggesting. :-)
<zimoun>civodul: for debugging GHC and Cuirass, you could use this: https://ci.guix.gnu.org/build/478772/details by relaunching the build.
<civodul>good idea, lemme see
<zimoun>that’s not exactly the same, but it is also a timeout and current master. So I guess it would help similarly for the merge of core-updates.
<civodul>i've just launched it, let's hope it times out
<bjc>zimoun: for x86_64, i was able to compile everything up to the latest ghc, and a whole mess of haskell stuff for pandoc
<abrenon>I'm not familiar at all with the way branches are used in guix, what is the purpose of core-updates ? is the problem affecting only i686 on core-updates or am I misunderstanding yet something else ?
<zimoun>bjc: me too, and PurpleSym too. :-) It seems an issue with Berlin and/or Cuirass.
<zimoun>abrenon: for what core-updates is, maybe give a look at: https://lists.gnu.org/archive/html/guix-devel/2023-04/msg00179.html
<zimoun>another example is Julia: it builds fine on my machine for both x86_64 and i686 but fails for i686 on the CI.
<abrenon>thanks for the link ! (and sorry for such newb questions, I feel like I'm being dragged into something much too big for me, you clearly all overestimate me)
<zimoun>no worry :-) And Guix is much too big for most of us. ;-)
<efraim>zimoun: cpu optimization possibly?
<efraim>or too many cores. it always seems to be one of those two
<zimoun>efraim: about Julia, yeah maybe. andreas-e reported OutOfMemory for i686 so I am guessing a configuration of Berlin.
<abrenon>oh, this story rings a bell, it had just been too long since I heard of it and I had forgotten 'core-updates' was the name of this beast
<efraim>ACTION ran into that with openjdk on i686 on a 24-core machine
<zimoun>efraim: Well, I am building with 128 cores for i686 and it passes.
<efraim>OOM isn't hard with a few parallel julia packages
<PotentialUser-72>hi all, the link to the latest binary seems to be broken: https://ci.guix.gnu.org/search/latest/archive?query=spec:tarball+status:success+system:x86_64-linux+guix-binary.tar.xz
<PotentialUser-72>should i report an issue?
<abrenon>hmmm I get a 502 HTTP error, so I guess the problem is transient and with the web server ?
<zimoun>PurpleSym: well, I propose to cut the bootstrap chain of GHC a bit (I have just sent an email). Any objection?
<PotentialUser-72>i don't know. the link seems to have been broken for at least 4 days, that's when the guix-install-action failed for us on github
<PurpleSym>zimoun: Will look at it later, on mobile right now.
<zimoun>PurpleSym, ok. I am going to try a patch for core-updates.
<andreas-e>abrenon: ghc@9.0 is broken also on x86_64. I suggested to remove it because it is not needed for bootstrapping 9.2. It is used (but hopefully not needed) for 9.4.
<andreas-e>zimoun: It looks as if you still need 7.10; from your line: 7.8.4
<andreas-e>-> 8.0.2 (needs >= 7.10)
<abrenon>PotentialUser-72: ouch, 4 days ? ok it's more serious than I thought
<PotentialUser-72>abrenon perhaps, i just checked a few of the most recent nightly builds at https://ci.guix.gnu.org/build/640530/details, i can't download any of those
<PotentialUser-72>abrenon builds itself look fine, though
<abrenon>yeah, which is why I hoped it was just the web server failing a bit
<zimoun>yeah, typo. ;-) 8.0.1 (needs 7.8 at least) instead of 8.0.2 (needs 7.70 at least) https://www.haskell.org/ghc/download_ghc_8_0_1.html https://www.haskell.org/ghc/download_ghc_8_0_2.html
<zimoun>s/7.70/7.10 ;-)
<zimoun>andreas-e: I am trying. I will report on guix-devel.
<andreas-e>zimoun: Thanks!
<civodul>still no timeout for my ghc build on berlin :-/
<zimoun>civodul: manual build or build via Cuirass?
<andreas-e>I got a timeout after one hour silent time in 8.10 on i686 in a "galois" test, with a manual build without offloading.
<civodul>zimoun: manual
<civodul>andreas-e: is it still "running"?
<andreas-e>No, I killed it. I think I misunderstood your request. I thought you wanted an example where the ssh connection for sending inputs stalls.
<civodul>heh :-) i'd like live processes that are stuck (about to time out)
<andreas-e>Well, it killed itself after one hour.
<civodul>right, there's a 1h time window to debug it
<civodul>that makes it more fun
<andreas-e>Okay, I will start another one.
<tschilptschilp23>Hi guix! I just want to report that checks on python-pytest-trio seem to fail for guix 2d06dfc050114dba44e791d8decc8eaa705fee01 -- http://paste.debian.net/1277613. I don't even have it in my home-configuration but there's the dependency calibre<-python-jeepney<-python-pytest-trio, which made me notice it!
<andreas-e>It also fails on core-updates. I hope someone can fix it there and then we just merge :-)
<abrenon>still building guix on core-updates, maybe I'll be able to help on one package someday ^^
<jpoiret>tschilptschilp23: I think it was reported in https://issues.guix.gnu.org/62871 if you want to follow there
<PurpleSym>Let’s just downgrade trio to 0.21. python-anyio does not have support for trio 0.22 either and there is literally no way to fix it.
<apteryx>I'm working on python-pytest-trio for the record
<PurpleSym>Which path are you taking for the fix, apteryx ?
<apteryx>I got caught up in the aiohttp stuff
<apteryx>closing in on trio, but I may end up with the same conclusion as you have :-)
<PurpleSym>Alright, thanks for taking care of that one 🙂
<civodul>am i right that pandoc is not known to be broken on core-updates, just being built?
<civodul>(on x86_64)
<bjc>pandoc built for me just fine
<civodul>ok
<civodul>"server is somewhat slow" are we building too much on berlin? :-)
<apteryx>not sure, but I keep getting those, as well as timeouts
<apteryx>the load seemed reasonable to me, but guix publish yesterday had rss of 13 GiB
<apteryx>and uses 10 cores or so
<apteryx>which seems abusive unless we have so many users :-)
<apteryx>ACTION wonders how the GC copes with a 13 GiB process
<apteryx>civodul: also, hi! :-)
<apteryx>PurpleSym: did you get past python-pyzmq?
<PurpleSym>apteryx: I didn’t look at python-pyzmq, no. Just tried to get JupyterLab working again, but ran out of time, because compiling everything on a laptop is kind of hopeless.
<podiki[m]>hi guix
<abrenon>'later !
<podiki[m]>I have a fix for samba on i686 for core-updates (same will be needed on master I think) https://issues.guix.gnu.org/62894
<podiki[m]>could use input on style/rust architectures
<civodul>apteryx: hey! 'guix publish' CPU usage is capped by --workers, which is 8 on berlin
<civodul>RSS is 3 GiB right now, but i think it's dominated by temporary zstd/lzip buffers
<civodul>it goes up and down
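For context, a sketch of how that worker cap is expressed in a system configuration (port and compression settings here are illustrative, not berlin's actual ones):

  (use-modules (gnu services base))

  ;; In the operating-system `services' field:
  (service guix-publish-service-type
           (guix-publish-configuration
            (port 8080)                          ;illustrative
            (compression '(("zstd" 19) ("lzip" 9)))
            (workers 8)))                        ;cap at 8 worker processes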
<apteryx>OK! I restarted it yesterday after it reached 13 GiB
<apteryx>for the offload timeout, it could be that berlin hasn't been reconfigured/restarted since the timeout was bumped in 53d718f61b4f59bf240515a8f2000972d3dca7b8
<apteryx>it's been a while since I've done so, and without the guarantee of the 2nd machine (node 129), I'm a bit wary to do so.
<civodul>oh
<civodul>andreas-e: my GHC i686 build process on berlin shows this in 'top': https://paste.debian.net/1277623/
<civodul>a python process at 100% CPU not printing anything
<civodul>and tons of zombies
<apteryx>zombies would be because of our 'PID 1 not reaping finished processes' issue
<civodul>the max-silent timeout in this case may not be an offload bug, but an actual build issue
<civodul>ah
<apteryx>for the zombie process issue, tracked in #30948
<bjc>is shepherd not reaping a known issue? i've been noticing issues with it lately, but haven't gone digging yet
<bjc>oh
<apteryx>it'd be nice to fix that one, it sometimes breaks test suites that waitpid for processes to die
<apteryx>(a common victim seems to be python-dbus, used in some GNOME components)
<apteryx>bjc: PID in the build container is not Shepherd but the Guile script :-)
<apteryx>*pid 1
<andreas-e>apteryx: I think the commit you mention may be what we need for the offload thing.
<andreas-e>The ghc silent timeout is indeed a different problem.
<bjc>apteryx: ok. guess the shepherd thing needs attention then =)
<apteryx>hm, I've not seen that from shepherd, but perhaps I haven't been paying attention
<bjc>i don't have a minimal example, but i notice it with home shepherd when i reconfigure. if i make home shepherd service changes, at least under some conditions, it'll leave behind zombies
<civodul>bjc: would be great if you could gather more info, ideally a reproducer
<civodul>i don't remember seeing that
<bjc>it's something i've been meaning to look into when i'm not distracted with other things
<civodul>sure, np!
<civodul>i spend quite some time on the Shepherd these days, so it's a good time for me to investigate bugs :-)
<bjc>i'm hoping that moving things to actors improves shepherd's abilities with noticing things like processes not dying when ‘herd stop’ is called
<bjc>when i was writing my emacs service and kept screwing up the elisp to kill emacs, it was an issue
<civodul>andreas-e: i finally got: building of `/gnu/store/in0gfsahn1z6ym232wg3cayzim51zng8-ghc-9.2.5.drv' timed out after 3600 seconds of silence
<civodul>but like i wrote, it looks like a real build issue, not an offloading issue
<andreas-e>It is not an offloading issue. We mixed up two things, because I misunderstood what you wanted to do.
<andreas-e>The offloading problem on berlin appears after a line "exporting..." and the build never starts. Then it also times out after one hour.
<civodul>andreas-e: ah, that's the one i wanted to debug
<andreas-e>ghc is a different problem. It also appears when not offloading.
<civodul>ok
<civodul>got it!
<civodul>bjc: "processes not dying" upon "herd stop"; is that a thing you experience these days? (i don't)
<bjc>i haven't checked. it was only an issue while i was developing that one service
<bjc>it was less that the process didn't die: that was expected since i didn't code that properly. it was that shepherd said it was dead when it wasn't
<apteryx>civodul: I do get my systems using nfs to hang when shutting down, someone had debugged it was nfs itself blocking on a syscall
<bjc>i assume the ‘make-kill-destructor’ works, this was with custom ‘stop’ code
<bjc>and since we're talking about it, i have a strong suspicion that the reason ā€˜rebootā€™ hangs when using nfs is due to shepherd not killing things. when i logged out yesterday to reboot, i tried to umount the nfs shares manually and was told the file systems were in use. sure enough, my home shepherd still had a bunch of things running (despite being logged out). killing those processes allowed me to umount and reboot
<civodul>i haven't seen a bug report on that one either :-)
<civodul>but yeah, it's possible that issues are in the services themselves rather than in shepherd
<bjc>i didn't want to write one until i had more details =)
<apteryx>it seems we failed to produce one indeed; and I'm short on details other than it happens every time :-)
<civodul>that's a start :-)
<civodul>"killing things" happens in the 'user-processes' service, in Guix
<bjc>oh, guix handles that, not shepherd? that explains some things
<bjc>i couldn't figure out how to encode an organized shutdown procedure in shepherd. it not being able to do that would be why =)
<minima>hi, sorry, when i get a package's source code from pypi-uri, where is it exactly that i need to launch 'guix hash -rx .'?
<minima>i download the tar.gz, unpack it, and i'd expect just to run guix hash from the tar root
<minima>but the hash i get doesn't match the one i see on guix, so i must be doing something wrong
<minima>(yes, i DLed the correct version)
<lfam>minima: If the file is downloaded as a tarball, run `guix hash foo.tar.gz`
<lfam>Let us know if that works as expected
<minima>lfam: oh amazing, thanks - will get back to you in a min, but i'm confident that'll sort it
<lfam>In general, the best practice for calculating hashes is to calculate the hash before any other action, such as unpacking
<lfam>It saves resources and avoids attack vectors in the decompression or deserialization
<minima>lfam: brilliant, that's it, matching hashes now :)
<lfam>Awesome
<minima>oh i see re doing it as early in the process as possible, that makes sense
<minima>thanks
<lfam>The recursive `guix hash` is primarily intended for source distribution techniques that create directories, like `git clone`
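For context, a sketch of where such a hash ends up in a pypi-based package definition (the package name, version, and all metadata here are placeholders):

  (use-modules (guix packages)
               (guix download)
               (guix build-system python)
               ((guix licenses) #:prefix license:))

  (define-public python-example                  ;hypothetical package
    (package
      (name "python-example")
      (version "1.2.3")
      (source
       (origin
         (method url-fetch)
         (uri (pypi-uri "example" version))
         ;; Paste the output of `guix hash example-1.2.3.tar.gz' here.
         (sha256
          (base32
           "0000000000000000000000000000000000000000000000000000"))))
      (build-system python-build-system)
      (home-page "https://example.org")
      (synopsis "Placeholder")
      (description "Placeholder.")
      (license license:expat)))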
<zimoun>why is core-updates rebuilding everything again? https://ci.guix.gnu.org/eval/407899/dashboard
<zimoun>From https://git.savannah.gnu.org/cgit/guix.git/log/?id=d3c7ca3c40fd613cfb3fb8f41e8ea064b438414a, I fail to see why.
<andreas-e>Sorry, I made a mistake. I wanted to restart builds for powerpc, but apparently restarted many others instead.
<andreas-e>(or additionally; powerpc packages are also being built)
<apteryx>/etc/guix/machines.scm doesn't leverage openssh config files, right?
<apteryx>I tried putting a host in /root/.ssh/config but it still wouldn't know about it
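Consistent with what apteryx observed, offloading normally takes its connection details from the build-machine records in /etc/guix/machines.scm rather than from the OpenSSH client configuration; a sketch (host, user, and keys are placeholders):

  ;; /etc/guix/machines.scm
  (list (build-machine
         (name "node-129.example.org")           ;placeholder
         (systems (list "x86_64-linux"))
         (user "offload")                        ;placeholder
         (port 22)
         (host-key "ssh-ed25519 AAAA... root@node-129") ;placeholder
         (private-key "/root/.ssh/id_ed25519")))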
<apteryx>how can I see the full 'sshd' command line spawned by inetd? 'pgrep -a sshd' doesn't show it
<apteryx>and cat'ing the /proc/pid/cmdline doesn't either
<apteryx>also, is the 'pid-file' of openssh-configuration still relevant since migrating to an inetd style service?
<apteryx>ah, inetd-style? is conditional on the shepherd version used
<apteryx>ah; 'pgrep -a ssh' lists the full SSH command lines
<apteryx>nevermind, that's for my autossh service :-)
<zimoun>andreas-e, hehe. Thanks.
<zimoun>I am still trying to cut a bit the GHC story. :-)
<andreas-e>The substitutes will all be there!
<civodul>andreas-e: offload fix slated to arrive within 24h \o/
<civodul>until then, i need to go to the bakery
<andreas-e>civodul: Excellent, thanks a lot!
<andreas-e>Bon appétit!
<zeropoint>anybody have suggestions for the best way to add random symlinks to guix home? I'm trying to add links to another disk to my home directory and it doesn't seem like there's a straightforward service to extend, unless I'm missing something.
<zacchae[m]>zeropoint: You can always write a guile script that creates the symlinks
<zacchae[m]>I do see a home-symlink-manager-service-type, but I'm unsure how general purpose it is. A script in a home-activation-service-type should do though
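A minimal sketch of the activation-script approach zacchae[m] suggests (the source and target paths are placeholders):

  (use-modules (gnu home services)
               (gnu services)
               (guix gexp))

  ;; Extend the activation service with a gexp that creates the link.
  (define extra-symlinks-service
    (simple-service 'extra-symlinks
                    home-activation-service-type
                    #~(let ((target (string-append (getenv "HOME") "/data")))
                        (unless (file-exists? target)
                          (symlink "/mnt/bigdisk/data" target)))))

  ;; Then add `extra-symlinks-service' to the home-environment's
  ;; `services' field.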
<GNUtoo>Hi, I've 2 questions, (1) How do you repair from a guix system reconfigure ran with sudo -E (2) how do you properly do a guix system reconfigure within a guix git? More precisely how to do sudo guix system reconfigure with ./pre-inst-env ?
<zeropoint>zacchae: yeah the home-symlink-manager-service-type seemed not the right place when I was looking at it, home-activation-service-type seems useful. will try it out. thanks!
<GNUtoo>For (1) I've the following error: "Throw to key `record-abi-mismatch-error' with args `(abi-check "~a: record ABI mismatch; recompilation needed" (#<record-type <svn-reference>>) ())'." and I chowned with my username .cache and I only had "Operation not permitted" with .cache/gnome-disks*
<teddd>why is guile code from my channels not accessible inside 'guix repl' ?
<GNUtoo>So I'm unsure what I'm missing
<zacchae[m]>GNUtoo: For (1), you should just reboot into a non-broken system by selecting a previous reconfigure from grub
<teddd>for example 'guix describe' says I have rde in my channels. But ',use (rde)' fails
<GNUtoo>ok
<GNUtoo>And then I guess that once rebooted I'd need to do (2) and I'd be ok somehow
<GNUtoo>thanks, I think it makes sense, the current system is probably somewhere in /root/.cache
<unmatched-paren>evening, guix! :)
<juli>Hello everyone. I just had a quick question - does anyone know how to provide a custom stop command to Shepherd in a Shepherd service? I was trying to setup a shepherd service for Emacs using the home-shepherd-service-* functionality and wanted to use the '(client-save-kill-emacs)' function from https://www.emacswiki.org/emacs/EmacsAsDaemon#h5o-10
<unmatched-paren>juli: https://git.sr.ht/~whereiseveryone/guixrus/tree/master/item/guixrus/home/services/emacs.scm
<unmatched-paren>juli: actually there's an idea: i should add a field that lets you define an elisp expression to be run on service shutdown
<tschilptschilp23> On guix 2d06dfc050114dba44e791d8decc8eaa705fee01 the package gfeeds seems to have troubles building with a webkit-related error: http://paste.debian.net/1277639
<tschilptschilp23>ACTION likes the way guix reminds about a long overdue home-configuration cleanup :)
<apteryx>tschilptschilp23: is there a new version available for it?
<apteryx>perhaps an update could fix that
<apteryx>we've got the latest webkitgtk on master; it's fresh
<tschilptschilp23>apteryx -- I just pulled to 2d06d this afternoon, so I guess it's new enough?
<apteryx>I meant a potential update for the broken package
<apteryx>(new release upstream, say)
<mekeor[m]>hello. does anybody know a package in guix proper, that is built using rust, meson and gtk4? i could use it as a template for packaging rnote :)
<tschilptschilp23>apteryx: OK, now I got it -- I haven't been using gfeeds for the past half year or so, so I'm currently going the easy way of taking it out of my home-configuration. Maybe I manage to look into the definition, but I cannot promise!
<efraim>mekeor[m]: newsboat and librsvg both mix cargo with the gnu build system, I imagine it wouldn't be that much different than mixing with meson
<unmatched-paren>mekeor[m]: i suspect there's a few of those kinds of packages in gnome.scm
<juli>unmatched-paren: looking at that file... does make-forkexec-constructor need to be modified to return #f to match Shepherd convention (a non-running service is represented by #f)?
<juli>also, I just realized you put up some blog posts on Dissecting Guix - good stuff!
<mekeor[m]>thanks, efraim and (
<unmatched-paren>juli: thanks; there's a third in the pipeline :) https://issues.guix.gnu.org/62356
<unmatched-paren>juli: i wasn't aware of such a convention (though i'm not quite sure what's meant by a "non-running service" here?)
<unmatched-paren>actually it's probably nicer to read like this: https://issues.guix.gnu.org/issue/62356/attachment/3/
<juli>I may be misunderstanding, but going off the explanation of the stop slot here: https://www.gnu.org/software/shepherd/manual/html_node/Slots-of-services.html
<unmatched-paren>ACTION has basically never looked at the shepherd manual :P
<juli>gotcha XD well, I'll muck around and see if I can get things working. Thanks for the resources!
<apteryx>tschilptschilp23: OK! I'll do a quick check
<unmatched-paren>juli: there's some interesting information there, but i suspect return values and all that are managed by MAKE-FORKEXEC-CONSTRUCTOR and its ilk
<unmatched-paren>ACTION afk
<mirai>forkexec constructor returns a pid
<teddd>hi guix!
<teddd>I just configured l2md with mu4e and I can read guix-devel in my usual mail reader now :o
<mirai>juli: the return value of (start …) is passed to (stop …)
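A rough, untested sketch of the kind of service juli describes (the Elisp call is the one from the EmacsWiki page linked above; as mirai notes, the pid returned by make-forkexec-constructor is what the stop procedure receives):

  (use-modules (gnu home services shepherd)
               (gnu services)
               (gnu services shepherd)
               (gnu packages emacs)
               (guix gexp))

  (define emacs-daemon-service
    (simple-service
     'emacs-daemon home-shepherd-service-type
     (list (shepherd-service
            (provision '(emacs))
            (start #~(make-forkexec-constructor
                      (list #$(file-append emacs "/bin/emacs")
                            "--fg-daemon")))
            ;; Ask Emacs to save buffers and exit, then return #f so
            ;; shepherd considers the service stopped.
            (stop #~(lambda (pid . _)
                      (system* #$(file-append emacs "/bin/emacsclient")
                               "--eval" "(client-save-kill-emacs)")
                      #f))))))

  ;; Add `emacs-daemon-service' to the home-environment's `services' field.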
<tschilptschilp23>Does anyone have an idea how to tell which package exactly pulls in python-pytest-trio / python-jeepney as a dependency -- I first noticed calibre, took it out of my home-configuration, thought it's done, as I had another error with gfeeds, took that one out, and now the python-pytest-trio failure reoccurred, but without a hint which package actually wants jeepney and trio...
<juli>mirai: I was actually wondering about that, thanks!
<tschilptschilp23>I'm thinking about something like 'apt-cache rdepends PACKAGE' out of the dpkg-world...
<tschilptschilp23>ACTION just learned ~guix graph --type=reverse-package python-pytest-trio | xdot -~
<tschilptschilp23>OK, it's ~guix graph --type=reverse-bag python-pytest-trio | xdot -~ what gives the info I needed.
<tschilptschilp23>ACTION will browse matrix through the browser again
<mekeor[m]>teddd: i'm using https://yhetil.org which uses public-inbox which provides IMAP interfaces for relevant guix mailing lists. that way, i don't even need a separate program (like l2md). i just can use the same imap-client i use for mail.
<mekeor[m]>teddd: anyway, congrats! it's awesome to use the mail-program for mailing-lists :)
<podiki[m]>curious how that compares to just subscribing to the mailing list, I don't know much about public-inbox
<podiki[m]>nice for having the archives to search locally?
<teddd>mekeor[m]: Thanks:) Ah nice setup that you have! I didn't know public-inbox can do imap directly. I could just use it with offlineimap then
<teddd>mekeor[m]: I heard that git protocol (used by l2md) is faster than IMAP though.
<teddd>mekeor[m]: How do you deal with other mailing lists ? I'm trying to make all sorts of news feeds converge to my local Maildir ^^
<tschilptschilp23>argh, linux-libre-lts 6.1.24 does not like my laptop's display...
<tschilptschilp23>back to rolling kernels.
<mekeor[m]>teddd: oh. personally, i've only needed guix and emacs. so, yhetil.org is a perfect fit for me.
<teddd>mekeor[m]: makes sense. That's also almost all I use.
<mekeor[m]>also, i once wrote this program that downloads mboxes from gnu.org and converts them to maildir using mblaze: https://paste.rs/TKz
<mekeor[m]>i guess, there are many ways... :)
<apteryx>sneek: later tell tschilptschilp23 bah, gfeeds 2.2 builds, doesn't run because it requires python 3.10
<sneek>Okay.
<apteryx>I'll dump the update on my core-updates stack and move on
<mekeor[m]>imho, gnu should maintain a public-inbox instance :)
<teddd>mekeor[m]: interesting.
<teddd>Yes I agree.
<tschilptschilp23>how come downloading packages regularly goes down to speeds of some 50 kB/s -- are we seriously low on bandwidth, or is this some silent advertisement for skipping substitutes?
<sneek>Welcome back tschilptschilp23, you have 1 message!
<sneek>tschilptschilp23, apteryx says: bah, gfeeds 2.2 builds, doesn't run because it requires python 3.10
<tschilptschilp23>apteryx: ahh, that makes sense, thanks for the info!
<tschilptschilp23>ACTION is hypnotized by erc's colors without a desktop environment...
<teddd>mekeor[m]: there is also mb2md package
<teddd>mekeor[m]: I might use some of your script
<podiki[m]>what is it like to use public-inbox as opposed to subscribing directly? or is it for the archives?
<tschilptschilp23>ACTION actually thinks this would now be the time for a rollback, as the thought link between a kernel switch and a failing gnome-shell is really thin.
<teddd>For me it is for the offline archives. I can index all the mails using mu and search quickly through them in mu4e.
<podiki[m]>makes sense
<teddd>offline, faster, more flexible search queries / navigation
<podiki[m]>i tend to search on the guix mailing list archive page directly, but it is not the best
<podiki[m]>yeah
<teddd>podiki[m]: yes that's also what I did until ... 1 hour ago ^^
<podiki[m]>:)
<teddd>Also you can directly reply to mails while browsing
<tschilptschilp23>ACTION but is seriously attracted by erc's colors on tty.
<teddd>tschilptschilp23: ever tried to live in the tty? I was often tempted
<tschilptschilp23>teddd: I am kind of forcefully tempted at the moment :)
<teddd>haha I wish you to get out of it stronger soon
<tschilptschilp23>Do you have an idea how to draw nice sine curves in the tty, I seem to have some time right now...
<mekeor[m]>maybe launch icecat instead of a x-wm, and for the rest, you can use tty :D
<teddd>mekeor[m]: that's probably all we need
<tschilptschilp23>admittedly I really like chromium :D
<tschilptschilp23>but it's faar away at the moment!
<teddd>tschilptschilp23: no idea. There is artist-mode. And python stuff to do text-based art
<teddd>aalib
<teddd>you can try all the cli-based games
<ieure>The original rogue is a delight to play on a text terminal; I enjoy firing it up on my amber-screen VT220.
<tschilptschilp23>aalib sounds like I finally need to learn some C!
<apteryx>do we have a package providing v4l2-ctl ?
<apteryx>lechner: is juix.org down?
<tschilptschilp23>ACTION is blinded by colors again...
<tschilptschilp23>linux-libre vs linux-libre-lts seriously seems to make a difference regarding getting a gnome shell or a black screen here :)
<apteryx>about v4l2-ctl -> v4l-utils
<Guest19>if i do guix pull, guix home reconfigure recompiles my package i defined in my own channel. why? i didn't change anything in it
<unmatched-paren>Guest19: it might be that one of its dependencies was updated in guix
<tschilptschilp23>apteryx: you mean I could run linux-libre-lts in non-tty-style as well? Sounds like I need to dig deeper into how to put this into my system-configuration...
<Guest19>unmatched-paren ah okay. I thought grafts are used to stop it from recompiling everything. Did I understand it wrong?
<tschilptschilp23>just changing (kernel linux-libre) to (kernel linux-libre-lts) makes my screen blank after login from the greeter...
<jpoiret>Guest19: grafts are only used for time-sensitive changes, like security fixes
<Guest19>ah I understand. okay, guess I just have to deal with it
<civodul>ACTION prepares to reconfigure the berlin build nodes to fix https://issues.guix.gnu.org/61839
<teddd>night guix!
<unmatched-paren>\o
<tschilptschilp23>night!
<civodul>unmatched-paren: hey! just saw your next "Dissecting" episode, haven't yet taken the time to read it but will do!
<tschilptschilp23>bye guix!
<mekeor[m]>which build system should i use if the build needs both gtk and meson?
<Guest19>If I define a substitute URL for the Guix daemon I still need to manually pass it to guix weather with the --substitute-urls flag. Shouldn't guix automatically use it? I would say this is a bug
<unmatched-paren>Guest19: yeah, known issue
<unmatched-paren>mekeor[m]: probably meson... i think?
<podiki[m]>i don't know that we have an actual bug report though, maybe it's just one of those quirks people rediscover
<Guest19>I tried https://issues.guix.gnu.org/search?query=weather+substitute+is%3Aopen but am having trouble finding it. do you maybe have it at hand?
<podiki[m]>i didn't find anything either
<Guest19>also, sometimes I just want to update my own channel. is it possible to do something like guix pull <channel>, since -C requires a file?
<Guest19>well, technically since it is my own channel I guess I have the file. But I guess you know what I mean
<tschilptschilp23>argh, audacity also fails to see ffmpeg on guix 2d06dfc050114dba44e791d8decc8eaa705fee01 ... but this one I cannot throw out, mph.
<tschilptschilp23>ACTION shouldn't have tried out if basic functionality is given after pulling and reconfiguring.
<apteryx>tschilptschilp23: is it easy to test?
<apteryx>seems it's fixed but I guess not yet released? https://github.com/audacity/audacity/issues/4480
<apteryx>we'll have to package an unreleased snapshot (latest commit)
<tschilptschilp23>apteryx: I think so -- I have audacity in my home-configuration and ffmpeg in the system-configuration. When I start it up, it tells me that ffmpeg has been configured successfully in the past, but now cannot find it anymore, and I need to reconfigure. If I go to preferences->libraries and point it to the ffmpeg.so.60 that's in the store it does not accept it. Maybe I just should put a symlink with the name ffmpeg.so elsewhere and
<tschilptschilp23>try...
<tschilptschilp23>I guess this will not work, as there are symlinks in the store, but audacity resolves those to the 'original' names. what a mess.
<tschilptschilp23>I will now roll back system and home, this is all a little too much for my nerves ;)
<Guest19>guix weather htop --substitute-urls=https://gnu.fail returns an error. shouldn't this cleanly exit and just say that it can't resolve the domain? it even says "guix weather: warning: gnu.fail: host not found: Name or service not known" in the beginning
<lfam>Yes
<Guest19>okay, guess I found another bug
<mekeor[m]>why should it exit cleanly? :)
<jackhill>Hi, I tried to use `guix system image` to build a docker container that would run some services and as a test, I'm trying sshd and postgres. After building the image with guix and loading it into docker, I can run the image. However, ssh-daemon and postgres don't start. If I try `herd restart ssh-daemon` I see "Throw to key `%exception' with args `("#<&netlink-response-error errno: 1>")'."
<lfam>mekeor[m]: It can exit with an error code, but it shouldn't crash, which it does now
<jackhill>I suspect that this is somehow related to a bad interaction between guix networking and docker controlled networking. What is the right way to use the docker image type?