IRC channel logs

2021-12-03.log

<guixy>Now I'm getting `ModuleNotFoundError: No module named 'pygame._sdl2'` for another package
<rekado>jpoiret: probably no good reason other than that I was working on c-u-f. Sorry
<jpoiret>Ah no problem at all! Would you mind backporting it to master? I'm working on the xdg-desktop-portal screensharing and we need a new pipewire for wireplumber
<jpoiret>Also, since pipewire-pulse is a drop-in replacement for pulseaudio, maybe we could consider switching to it? I find it already works better than pulseaudio for a couple of things (bluetooth audio for one)
<jpoiret>I'm saying that because adding pipewire and wireplumber to %desktop-services would bring them in anyway (and we'd want that to be able to have proper screencasting on wayland)
<mbakke>jpoiret: switching to pipewire sounds great
<singpolyma>pipewire is so depressing
<singpolyma>Yet another reset to zero
<mbakke>it clocks in at 391 MiB though, it would be good to move some plugins to separate outputs in order to save space
<guixy>It looks like python-pygame got a major version bump.
<guixy>Packages built for the newer pygame won't work with 1.9.1 or whatever it is right now. pygame doesn't have many dependencies, so I will update it when I send a patch series to add tuxemon.
<patched[m]>How do I get an installed wm to show up in the wm selection on login? Specifically dwl.
<jffujmcj>Apparently you need to write it down in /etc/config.scm rather than just "guix package -i "
<patched[m]>Hmm, and then sudo guix system reconfigure /etc/config.scm? Tried that at least but it didn't work...
<jffujmcj>Worked for me with mate. Maybe because mate is a desktop and it's added to services rather than packages. Yeah, it's complicated. So far seems more complicated than gentoo :)
<dissent>hey guix, i'm attempting to build pqiv for the repo. would someone be willing to double-check this failed build log? https://termbin.com/r7m7
<mbakke>c-u-f is a game of whack-a-bug, once one is down new ones appear
<KE0VVT>mbakke: Change the chart from green and red dots to moles and holes. ;-)
<vagrantc>hah
<mbakke>dissent: looks like the 'configure' script does not support CONFIG_SHELL (which is a default flag in gnu-build-system)
<mbakke>you probably need to override the phase to only pass --prefix, see e.g. fio for an example
<mbakke>oh, the script says "Some of the usual autotools options are supported", perhaps it can be patched to support CONFIG_SHELL too :)
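For reference, the phase override mbakke points at looks roughly like this (a sketch modelled on packages such as fio; pqiv's exact flags may differ):
    (arguments
     `(#:phases
       (modify-phases %standard-phases
         (replace 'configure
           (lambda* (#:key outputs #:allow-other-keys)
             ;; pqiv's hand-written configure rejects the usual autotools
             ;; flags, so only pass --prefix.
             (invoke "./configure"
                     (string-append "--prefix=" (assoc-ref outputs "out"))))))))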
<dissent>would this mean substitutes?
<raghavgururajan>We have pipewire service?
<patched[m]><jffujmcj> "Worked for me with mate. Maybe..." <- Hmm, for the time being, how can I remove the login screen and just use the login tty then?
<patched[m]>So I can just start dwl from there instead
<dissent>patched[m]: someone figured out a relatively simple way to start X from the tty. although you lose xorg-configuration.
<dissent>jsoo has a more complicated way that uses xorg-config. i'll send the links.
<patched[m]>dwl uses wayland. So it should simply work by writing `dwl`?
<dissent>oh.. i'm not sure.
<dissent>here is what i used for x, perhaps it can be manipulated
<dissent> https://mail.gnu.org/archive/html/help-guix/2021-10/msg00116.html
<vagrantc>that's how i use sway (a wayland thingy)
<dissent>here is jsoo's config https://git.sr.ht/~jsoo/dotfiles/tree/release/item/guix
<dissent>patched[m]: figured it out. forgot to add the arguments.
<Gooberpatrol_66>can someone explain this? extra content takes a string as an argument. string-concatenate returns a string. but if i do (extra-content string-concatenate("string1" string2")), i get an "invalid field specifier" error.
<Gooberpatrol_66>(this is in config.scm and then running reconfigure)
<Gooberpatrol_66>*extra-content
<Gooberpatrol_66>it's the name of a function
<oriansj>minor bug to report: https://paste.debian.net/1221772/
<dissent>woo first package submitted, hope it doesn't need much tweaking.
<boomerChad>rekado_ I just tried building the patch for openjdk@17 and it was successful
<vivien>Gooberpatrol_66, do you mean (extra-content (string-concatenate '("string1" "string2")))?
<Gooberpatrol_66>vivien: that gives me "wrong type to apply: ("string1" "string2")"
<apteryx>the annoying xkb warnings in xorg are probably due to https://gitlab.freedesktop.org/xorg/lib/libx11/-/merge_requests/79
<apteryx>(on c-u-f)
<vivien>"invalid field specifier" errors are usually used by guix records to inform you that the record you are creating does not support the field you are trying to set. It’s hard to understand the problem without the configuration code.
<vivien>(string-concatenate '("a" "b")) gives me "ab"
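Spelled out, what vivien suggests would look like this inside the record (a sketch; extra-content stands for whichever configuration field is being set):
    (extra-content (string-concatenate (list "string1" "string2")))
    ;; or, equivalently:
    (extra-content (string-append "string1" "string2"))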
***jonsger1 is now known as jonsger
<the_tubular>Anyone has a nice config.scm where a lot of lines are commented ?
<the_tubular>So I can learn what they do
<KE0VVT>the_tubular: not sure how i could comment on the code more
<the_tubular>"more" ? Comapred to what ?
<the_tubular>I haven't seen your configs KE0VVT
<apteryx>sneek later tell zimoun I restarted both
<sneek>Will do.
<KE0VVT>the_tubular: https://bpa.st/HAQA
<Zambyte>the_tubular: You can look in the guix repo under gnu/system/examples/*.tmpl for example configs
<Zambyte>Cross referencing those with `info guix` should get you pretty far :)
<the_tubular>Sorry KE0VVT, missed your message. I looked at your config and actually understand all of it, I might be looking for some more "complicated" ideas
<the_tubular>I'd be curious seeing a config with stuff like sshfs, or maybe even bcachefs, to see how you implement those in guix
<the_tubular>Looking for *
<dissent>how would the guix community feel about having https://archlinux.org/packages/community/any/arch-wiki-lite/ added to the repos?
<dissent>useful for reaching the arch wiki from the command line
<jackhill>dissent: it's both a browser for the wiki, and the content as text files?
<dissent>jackhill, yes if i recall correctly the package installs the wiki as text files as well.
<jackhill>dissent: the general concept of the thing seems good, but if the wiki contents recommend non-free software (which I think they might) that would be problematic, yes.
<dissent>jackhill, i see, and surely it does.
<jgart>dissent, you could put it in a guix channel so that people can still get it if upstream does not want it
<dissent>jgart, yeah probably like we discussed before.
<jpoiret>raghavgururajan: not yet, but should not be too hard to implement
<jpoiret>i feel like the defaults will suit most people too
<jpoiret>apteryx re the xkb MR: great :)
<jpoiret>patched[m]: what WM are you trying to use?
<jpoiret>ah sorry, dwl, didn't see that at first
<jpoiret>patched[m]: I see that dwl does not provide a .desktop file, so cannot be launched by DMs by default
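For context, what a display manager looks for is a session file like the following, typically installed under share/wayland-sessions/ (a hypothetical sketch; dwl does not ship one):
    [Desktop Entry]
    Name=dwl
    Comment=dwl Wayland compositor
    Exec=dwl
    Type=Application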
<jpoiret>what's the baseline for cpu instructions supported on Guix? SSE/SSE2, or nothing?
<jpoiret>for now, pipewire has build-time detection of cpu instruction sets
<civodul>Hello Guix!
<PurpleSym>sneek: later tell zimoun: The evaluation for GHC on i686 finally finished, but the CI failed to build most packages due to “that” CI bug. ghc-sha ran out of memory, but succeeds locally.
<sneek>Got it.
<zimoun>hi!
<sneek>Welcome back zimoun, you have 3 messages!
<sneek>zimoun, jbv1[m] says: I have used it but not on guix, time to build and size of the sysimage varies with what you put in it. The default sysimage that is currently present in our julia package is the sys.so file I think, and it is around 200M.
<sneek>zimoun, apteryx says: I restarted both
<sneek>zimoun, PurpleSym says: The evaluation for GHC on i686 finally finished, but the CI failed to build most packages due to “that” CI bug. ghc-sha ran out of memory, but succeeds locally.
<zimoun>PurpleSym, apteryx: CI failed on many ghc-* i686 on c-u-f but they build locally. I thought that restarting some would restart all the failed dependents. But apparently not, if I am not missing something.
<zimoun>Well, it appears complicated to restart all manually. What can we do?
<zimoun>For example ghc-aeson is considered as failed. And if we follow all the ones failed because of a failed dependency, we end up at ghc-temporary, which is considered failed because of a dependency. apteryx restarted ghc-exceptions so now it is not anymore, but although apteryx restarted ghc-aeson, it is still failed. civodul, idea?
<zimoun> https://ci.guix.gnu.org/build/1858476/details
<civodul>zimoun: hi! i've restarted this one now
<zimoun>civodul, thanks. But the cascade will not happen. I mean, do we have to watch, manually restart, wait, watch, manually restart, loop? Because the unexpected Cuirass failure was about ghc-exceptions (800+ dependencies). ghc-temporary is only over 800+ :-)
<civodul>zimoun: i'm not sure; i think dependents are rebuilt when you restart something, but not always?
<civodul>mothacehe would know, or we can check the code
<civodul>we can also do that with a SQL query as a last resort
<zimoun>Ok. Let's wait for the effect of your recent actions.
<zimoun>civodul, BBB is broken for chromium on master, as you know. ;-) Do you know if it is also the case on c-u-f?
<vivien>Hello !
*florhizome[m] uploaded an image: (163KiB) < https://libera.ems.host/_matrix/media/r0/download/matrix.org/XQFWDfVnQfEZmAjUntjoLTne/IMG_20211203_100638.jpg >
<florhizome[m]>This is my current bootscreen...
<florhizome[m]>i don't know what's broken there, maybe sddm, maybe something with the latest Kernel update...
<florhizome[m]>I can just press enter and log in on the tty to continue, so it's not grave
<florhizome[m]>not sure if the appearing error msgs have something to do with it, i had similar ones before
<civodul>zimoun: no, would be good to see if we have the same problems as in https://issues.guix.gnu.org/50773
<civodul>howdy vivien!
<civodul>florhizome[m]: did it hang at that point?
<florhizome[m]>civodul: yup just stops right there
<civodul>florhizome[m]: it has actually booted though, there's a console prompt
<civodul>you might want to "sudo herd restart xorg-server"
<florhizome[m]>weird: the services field in dbus-configuration has a default setting of '() so when i tried to add a service (clightd, a screen management tool) i thought '(clightd) should work. What worked was `(clightd). I think because it needs to be evaluated, the dbus service type needs to look for dbus files in the package. But why then take a quoted empty list as default? quoting doesn't seem to make sense here?
<florhizome[m]>civodul: Yes the problem is more that sddm should show up (i have it set on wayland). I can just hit enter, log in and Start my compositor, sure (;
<jpoiret>quoting means that you want the following to be interpreted as a list, not as a procedure application
<jpoiret>so '() is the empty list, whereas () is syntax error
<jpoiret>also, there shouldn't be any difference between '(clightd) and `(clightd), so it is kind of weird
<jpoiret>are you sure you don't mean `(,clight)?
<florhizome[m]>Yeah i was wondering, i tried that, too and thought it was equivalent.
<florhizome[m]>Now `(clightd) stopped working, but `(,clightd) still does
<florhizome[m]>If i wanted to put multiple services, i'd do `( ,list(1 2 3)) where 1 2 3 are packages?
<florhizome[m]>maybe i'm just losing my mind ...ehehe
<florhizome[m]><jpoiret> "so '() is the empty list..." <- i understand, but maybe we could have something different as default value, bc you Will always want to evaluate these values...
<jpoiret>then (list clightd) is what you want
<jpoiret>you can do (list clightd daemond otherdaemond) as well
<jpoiret>is list defined simply as (define (list . result) result)? preposterous
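To spell out the distinction jpoiret describes (clightd here stands for a variable bound to a package):
    '(clightd)     ; a list containing the *symbol* clightd
    `(,clightd)    ; a list containing the *value* of the variable clightd
    (list clightd) ; same as the previous line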
<jpoiret>civodul: are there any instruction sets that we can take as granted on Guix? I noticed pipewire tries to detect cpu features at build time
<jpoiret>i'll try looking at the chromium bug by the way, i've already been looking at browser source and libwebrtc in particular yesterday for firefox
<civodul>jpoiret: no, we compile for the baseline architecture
<civodul>but i'm working on a package transformation to address that
<civodul>anyhow, packages should never use -march=native or similar
<jpoiret>oh, nice :) i saw the discussion about that on guix-devel
<jpoiret>today's joke: google themselves say that checking out the source of webrtc needs 6.4G
<zimoun>mothacehe, ghc-* for i686 on c-u-f are red because Cuirass failed with something unexpected. For instance ghc-exceptions was reported as failed because it had 800+ dependents, bang! Yesterday, apteryx restarted it and now it is fine (green) https://ci.guix.gnu.org/build/1858027/details Then ghc-active and ghc-aeson-compat had been restarted, which should trigger the restart of the missing ones, I guess. Is that the case?
<zimoun>In other words, do all the ones marked ’Failed (dependency)’ require a manual restart?
<mothacehe>zimoun: when a package is marked as failed-dependency and all its dependencies are later on successful (because of manual restarts), then it is restarted.
<mothacehe>so no manual restart is required on failed dependency packages
<zimoun>Thanks. This morning, I had this question and civodul was not sure. :-)
<zimoun>mothacehe, how can I solve this «ghc: out of memory (requested 1048576 bytes)» ? https://ci.guix.gnu.org/build/1858643/log/raw
<mothacehe>you can have a look at db-update-resumable-builds! for the exact mechanism
<mothacehe>in cuirass
<zimoun>thanks! I am going to take a look.
<mothacehe>regarding the oom error, i'm not sure
<mothacehe>if the worker is performing other builds concurrently it can be low on available ram
<zimoun>PurpleSym, ghc-sha also runs out of memory for me
<mothacehe>we have zabbix installed on workers
<mothacehe>we could use it to check for the ram usage on workers
<mothacehe>you can also use the chart here: https://ci.guix.gnu.org/machine/hydra-guix-105 if you identify the correct machine
<zimoun>how can you identify the machine?
<mothacehe>oh its not written in the build details
<mothacehe>then you cannot sorry
<zimoun>moreover, it provides charts for only the last 2 hours, but the build has been finished since 16:00.
<mothacehe>it was hydra-guix-108
<zimoun>thanks
<zimoun>PurpleSym: how much RAM does your build machine have?
<zimoun>Checking on my machine, the out-of-memory for ghc-sha is not about RAM. Because I have locally much more than requested and it fails too. Hum?
<mothacehe>zimoun: according to zabbix the machine 108 had always more than 100GiB of memory available during the last two days
<mothacehe>so either zabbix is wrong or the problem is elsewhere
<zimoun>mothacehe: thanks for confirming the problem is elsewhere.
<zimoun>mothacehe: https://ci.guix.gnu.org/build/1858638/log/raw is it a Cuirass issue?
<zimoun>I guess yes, could you restart https://ci.guix.gnu.org/build/1858638/details ?
<zimoun>idem https://ci.guix.gnu.org/build/1858536/details
<mothacehe> yes a guix publish issue more likely
<zimoun>and probably many more https://ci.guix.gnu.org/build/1858361/log/raw
<PurpleSym>32GiB, zimoun.
<zimoun>PurpleSym, is it real i686 or ’-s i686’?
<mothacehe>zimoun: restarted all three
<rekado_>zimoun: r-flowclust is broken on "master". I'll fix it later today.
<zimoun>mothacehe, thanks https://ci.guix.gnu.org/build/1858321/details
<zimoun>mothacehe: https://ci.guix.gnu.org/build/1858300/details
<zimoun>mothacehe: https://ci.guix.gnu.org/build/1858276/details
<zimoun>mothacehe: https://ci.guix.gnu.org/build/1858209/details
<zimoun>mothacehe: https://ci.guix.gnu.org/build/1858132/details
<zimoun>for the first page ;-)
<zimoun>Well, let's wait and see what becomes green.
<rekado_>(still doing QA work, checking each and every package in (gnu packages bioinformatics))
<zimoun>rekado_: master or c-u-f?
<mothacehe>rekado_: i have an unmerged cuirass feature to be able to generate custom user dashboards for specific manifests, could be useful here I guess: https://issues.guix.gnu.org/47929
<rekado_>zimoun: both!
<rekado_>mothacehe: oh, neat!
*rekado_ ---> groceries
<civodul>mothacehe: yup that looks nice!
<civodul>i had overlooked that one
<mothacehe>civodul: thanks! A potential issue with that one is that the user manifests can be read in the cuirass database.
<mothacehe>I have a new kind of backtraces in cuirass-remote-server, does it ring a bell: https://paste.debian.net/1221805/
<civodul>mothacehe: means that the guix-daemon process hung up
<civodul>crashed, probably
<civodul>now you need to find out why
<civodul>ideally by stracing said guix-daemon process when that happens (the SessionPID in "guix processes")
<PurpleSym>zimoun: With `--system`. Maybe it hits the 4GB address space limit?
<civodul>not easy to do i suppose
<mothacehe>civodul: i see thanks, i might want to catch those errors in cuirass though as they shouldn't be fatal to the remote-server
<civodul>maybe yes
<civodul>mothacehe: BTW i'm going to reconfigure berlin to get the fix for the wrong narsize
<mothacehe>ok just did it a few minutes ago to increase the GC threshold to 15TiB
<mothacehe>and upgrade cuirass
<civodul>ah good
<civodul>so maybe i don't need to do it
<civodul>did you "herd restart guix-daemon"?
<mothacehe>nope but you can go ahead
<civodul>alright
<civodul>done!
<attila_lendvai>mothacehe, re removing "-vga std": do you mean to completely remove it, or conditionally, only when --no-graphic is provided?
<attila_lendvai>(context: https://issues.guix.gnu.org/52204)
<attila_lendvai>mothacehe, '-vga std' is the default since QEMU 2.2; i guess you mean completely remove it, right?
<reza[m]>Are there any examples of home configuration files?
<reza[m]>for example if want to set this environment variable: `export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"`
<mothacehe>attila_lendvai: i mean having -nographic and -vga std seems counterintuitive
<attila_lendvai>mothacehe, what if i just remove '-vga std', as this is the default since forever?
<slyfox> https://en.wikipedia.org/wiki/VGA_text_mode :)
<mothacehe>attila_lendvai: seems fine
*attila_lendvai is sending a new patch
<mothacehe>thanks, this should be a standalone patch i think
<attila_lendvai>mothacehe, i saw that too late, already sent. i think it's fine (commit log noise is also a kind of noise), but let me know if i should record it in two patches.
<jpoiret>just to make sure, the proper place to create runtime dirs for services is in activation-service-type extensions, right?
<attila_lendvai>jpoiret, that's also my understanding (but i'm new to all this).
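A minimal sketch of that pattern, with a hypothetical service name and directory:
    (simple-service 'my-daemon-activation activation-service-type
                    #~(begin
                        (use-modules (guix build utils))
                        ;; create the runtime directory at activation time
                        (mkdir-p "/run/my-daemon")))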
<zimoun>mothacehe: «tar: wl-pprint-annotated-0.1.0.1: Cannot mkdir: No space left on device» https://ci.guix.gnu.org/build/1858496/log/raw
<zimoun>idem https://ci.guix.gnu.org/build/1858659/log/raw
<mothacehe>wew, sad
<civodul>could workers refuse builds when disk space is low, similar to what (guix scripts offload) does?
<zimoun>what is the next step then? Wait for GC to free up some more space?
<mothacehe>it happened on hydra-guix-101 that has 114G of free disk space
<mothacehe>oh but according to https://ci.guix.gnu.org/machine/hydra-guix-101
<mothacehe>the disk space used to be at 0%
<mothacehe>so yes we should definitely refuse new builds when the disk space is low
<zimoun>basically, 7 ghc-* packages are currently failing https://ci.guix.gnu.org/search?query=spec%3Acore-updates-frozen+system%3Ai686-linux+status%3Afailed+ghc
<zimoun>but only one is really broken: ghc-sha
<civodul>mothacehe: offload.scm skips machines with low disk or wrong clock, which are two common issues
<mothacehe>zimoun: feel free to restart them
<zimoun>no, 3 broken! ghc-sha, ghc-bloomfilter and ghc-tar; PurpleSym any idea
<zimoun>mothacehe: but no space left ;-)
<PurpleSym>zimoun: I’d say bloomfilter and tar simply don’t support 32 bit architectures.
<zimoun>:-)
***jonsger1 is now known as jonsger
<zimoun>git-annex depends on ghc-bloomfilter. )-:
<PurpleSym>Debian simply reverts https://github.com/bos/bloomfilter/commit/44b01ba38b4fcdb5a85f44fa2f3af1f29cde8f40, which we could also do.
<apteryx>civodul: rustc execution is fast now in the VM!
<apteryx>the rustc shebang was an x86_64 binary; I'm not sure how it could even run on 32-bit
<civodul>apteryx: because we don't have "real" 32-bit x86 machines i guess? :-)
<civodul>which is a problem, because if we get things wrong along these lines, we don't notice
<PurpleSym>zimoun: And Debian simply disables the test-suite for ghc-tar on i686.
<apteryx>no, it was running in a "real" i686 32-bit Debian 10 VM
<apteryx>at least when it was on PATH and invoking it by 'rustc'
<apteryx>but cargo which tries to execve it was failing with a format error
<apteryx>strange
<apteryx>but perhaps this was not the issue; looking at the trace I saw it was also trying to load libgcc_s.so, failing to, and using the one from Debian instead
<apteryx>I've removed the wrapping phase entirely and now it runs better
<apteryx>not being wrapped, it uses the libgcc_s from the host, as well as its libc I guess
<apteryx>this is the strace of running ./hello built by that rustc in the debian vm: https://paste.debian.net/1221821/
<apteryx>civodul: would it be too dirty to 'guix pack' a relocatable dynamically linked binary of rustc as the 'bootstrap seed' ?
<apteryx>i wonder if the relocation wrappers can be cross-built and would be effective in the build container
<apteryx>otherwise building rustc statically seems to be a challenge, due to it using features (macros) that rely on being dynamically linked
<apteryx>one suggestion was to pre-expand all of the code using proc-macros with the 'cargo expand' tool (expands one crate/library at a time), then remove the proc-macros, but the expansion is supposedly lossy and may not compile/run correctly
<civodul>apteryx: too dirty/fragile, presumably we could only use the "fakechroot" execution engine, and we'd have to check whether it's good enough for this use case
<zimoun>PurpleSym, yes. On the other hand git-annex is broken on master https://ci.guix.gnu.org/build/1841444/details but not because ghc-bloomfilter
<civodul>apteryx: also, a real i686 CPU cannot execute x86_64 code, that's for sure; as for the VM, if it's running with qemu-system-x86_64 (or similar), it's as if it had access to an x86_64 CPU
<civodul>you need qemu-system-i386 if you want to be sure
<apteryx>interesting; I'm running it in gnome-boxes, and it spawns a qemu-system-x86_64 process
<bung6[m]>I want to set up guix but I can't
*bung6[m] uploaded an image: (255KiB) < https://libera.ems.host/_matrix/media/r0/download/matrix.org/GlmESrddQycPYCRKgnAlQgqO/resim.png >
<bung6[m]>for 3 hours now
<apteryx>civodul: about a statically linked rustc; would it need to be statically linked even to glibc, or is this not required?
<apteryx>bung6[m]: 3 hours on building the manpage database? that's not normal, unless your storage/cpu is from 1990. I'd suggest interrupting it and retrying
<apteryx>things it built already will be in your store already, so you won't have to rebuild/redownload what it already did so far
<civodul>apteryx: yes; basically, it should be able to run in a totally empty chroot, just like the "guile-boostrap" package
<civodul>you can try that by unpacking it and then running it in "guix shell -C"
***jonsger1 is now known as jonsger
*apteryx discovers the "Edit XML" in gnome-boxes; and adjusts <emulator>/home/maxim/.guix-profile/bin/qemu-system-i386</emulator> as well as <type arch="i686" machine="pc-q35-5.2">hvm</type>... but it doesn't work due to not supporting <cpu mode="host-passthrough"/>
<apteryx>deleting the cpu xml block seems to have done it
<bung6[m]><apteryx> "bung6: 3 hours on building the..." <- How do I setup?
<apteryx>bung6[m]: Can you make your question more specific? I don't follow.
<bung6[m]>Now, no disk selection place.
<bung6[m]>It only comes in alive
<apteryx>ah, so you were able to complete the 'guix system init', and tried to reboot?
<bung6[m]>how guix system init?
<apteryx>see: https://guix.gnu.org/manual/en/html_node/Invoking-guix-system.html
<bung6[m]>I want a VM
<apteryx>'guix system vm' should be documented in that same section; there's also a VM image available from the guix website
<apteryx>that you can run with QEMU or import in GNOME-Boxes
<roptat>apteryx, no bung6[m] is trying to install guix inside a VM, but they don't have guix yet
<roptat>ah nevermind, the VM image from the website is a good idea actually
<bung6[m]>I have the qemu file
<bung6[m]>but there is not disk option
<bung6[m]>bro
<roptat>bung6[m], what do you mean "not disk option"?
<roptat>can you show us?
<roptat>try this command: qemu-system-x86_64 -nic user,model=virtio-net-pci -enable-kvm -m 1024 -device virtio-blk,drive=myhd -drive if=none,file=guix-system-vm-image-1.3.0.x86_64-linux.qcow2,id=myhd
<bung6[m]>Unable to add disk space in flat installation
<Malsasa>Hello, hello!
<roptat>bung6[m], I don't understand what that means
<roptat>hi Malsasa
<Malsasa>Hello, GNU Community. I am a new Guix user. Greetings.
<Malsasa>Nice to meet you roptat.
<roptat>welcome :)
<roptat>bung6[m], just try the command above and it should work
<roptat>or give us a screenshot/error message
<Malsasa>Tonight, I ran guix install emacs and it returned "error: load thunk from file: invalid argument".  How to solve this issue so I can install programs normally again?
<roptat>is this error from guix or emacs?
<rekado_>is this a very old installation of Guix?
<rekado_>it almost sounds like a Guile error (loading incompatible .go files)
<Malsasa>roptat: from Guix, confirmed, as installing other packages returns exactly the same error.
<Malsasa>rekado_: no, it is very new, just one day old.
<roptat>Malsasa, how did you install guix? what does "type guix" tell you?
<Malsasa>guix pull also returns the same error.
<Malsasa>Using root to do either one, pull or install, also returns the same.
<apteryx>civodul: still works in qemu-system-i386!
<Malsasa>roptat: several days ago, this GuixSD works normally and very well. GNOME running very nice.
<bung6[m]>roptat: I am waiting for the guix setup
<bung6[m]>maybe it will work
<roptat>Malsasa, ah so it's the guix system, not a foreign distro? that's even more weird
<Malsasa>roptat: "guix is hashed (/home/master/blablabla...)
<roptat>Malsasa, .config/guix/current/bin/guix, right? not .guix-profile/bin/guix
<Malsasa>roptat: yes, the former one is correct.
<roptat>you should have multiple directories called /var/guix/profiles/per-user/$USER/current-guix-*-link
<roptat>can you try to run guix from the second most recent one? (if you have 1 2 3 and 4, then run 3 for instance: /var/guix/profiles/per-user/$USER/current-guix-3-link/bin/guix pull)
<Malsasa>roptat: perhaps this was because my computer froze yesterday and I did a force shutdown while guix was doing a pull?
<roptat>ah maybe something went wrong in the store because of that
<roptat>so do you have other current-guix generations?
<Malsasa>roptat: to be honest, I suspect that too.
<Malsasa>roptat: let me roll back first, my friend.
*Malsasa doing guix garbage collection
<roptat>I was thinking a rollback would not work because the guix command is broken
<roptat>but if that works, that's wonderful :)
<bung6[m]><roptat> "try this command: qemu-system-x8..." <- it is live guix
<bung6[m]>not normaly
<roptat>Malsasa, also, I would suggest "guix gc --verify=contents,repair"
<roptat>bung6[m], yes it's a live guix system, so you can try it
<roptat>bung6[m], it's already installed, you don't have to do anything else
<bung6[m]>roptat: I want a normal guix
<Malsasa>Wow, garbage collecting freed up almost 2GB of space.
<roptat>bung6[m], it is a normal guix system
<PurpleSym>zimoun: ghc-persistent’s failure also sounds like “32 bits is not supported”. Debian is a little behind – no patches. Maybe just disable the test-suite on i686? Can you send patches for tar, bloomfilter and this one?
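One way to express that in a package definition, as a sketch (the i686 check mirrors what other Guix packages do; the exact predicate is an assumption):
    (arguments
     `(#:tests? ,(not (string-prefix? "i686" (or (%current-target-system)
                                                 (%current-system))))))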
<roptat>Malsasa, ran guix gc the other day, it freed 30GB ^^'
<bung6[m]>roptat: no, I want for example a 60 gb disk guix
<bung6[m]>please help me
<roptat>ah then you want to expand the image space
<bung6[m]>yes
<Malsasa>roptat: wow, that is very awesome.
<roptat>bung6[m], note that you already have 30GB on this image
<bung6[m]>puffff
<roptat>so if you want more space (say 60GB), exit the VM and run this: "qemu-img resize guix-system-vm-image-1.3.0.x86_64-linux.qcow2 +30G", then "virt-resize --expand /dev/sda2 hosta-orig.qcow2 uix-system-vm-image-1.3.0.x86_64-linux.qcow2", and finally restart the VM
<roptat>bah no don't do this
<roptat>bung6[m], exit the VM, then "qemu-img resize guix-system-vm-image-1.3.0.x86_64-linux.qcow2 +30G", then "cp uix-system-vm-image-1.3.0.x86_64-linux.qcow2 guix-system.orig.qcow2", then "virt-resize --expand /dev/sda2 hosta-orig.qcow2 uix-system-vm-image-1.3.0.x86_64-linux.qcow2" and then start the VM again
<roptat>bung6[m], exit the VM, then "qemu-img resize guix-system-vm-image-1.3.0.x86_64-linux.qcow2 +30G", then "cp uix-system-vm-image-1.3.0.x86_64-linux.qcow2 guix-system.orig.qcow2", then "virt-resize --expand /dev/sda2 guix-system.orig.qcow2 uix-system-vm-image-1.3.0.x86_64-linux.qcow2" and then start the VM again
<roptat>bung6[m], sorry, the last one ^
<roptat>I should learn to think before I type :/
<roptat>ah again made a mistake in the commands
<roptat>bung6[m], exit the VM, then "qemu-img resize guix-system-vm-image-1.3.0.x86_64-linux.qcow2 +30G", then "cp guix-system-vm-image-1.3.0.x86_64-linux.qcow2 guix-system.orig.qcow2", then "virt-resize --expand /dev/sda2 guix-system.orig.qcow2 guix-system-vm-image-1.3.0.x86_64-linux.qcow2" and then start the VM again
<roptat>(sorry for all the messages, this time it's the right one)
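For reference, the sequence roptat ended up with (file names as above; virt-resize comes from libguestfs):
    qemu-img resize guix-system-vm-image-1.3.0.x86_64-linux.qcow2 +30G
    cp guix-system-vm-image-1.3.0.x86_64-linux.qcow2 guix-system.orig.qcow2
    virt-resize --expand /dev/sda2 guix-system.orig.qcow2 guix-system-vm-image-1.3.0.x86_64-linux.qcow2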
<civodul>has anyone used nss-pam-ldapd as an NSS service on Guix System?
<civodul>rekado_ maybe?
<roptat>now in the VM, you can check that you have 60GB, for instance the file manager will tell you it has 53GiB of free space
<bung6[m]>pufff
<bung6[m]>is there a gui ?
<roptat>no, that's qemu...
<bung6[m]>offf
<bung6[m]>lets start again pls, I am a new user but I like free software
<apteryx>use something like virt-manager or GNOME-Boxes if you are not comfortable with the QEMU command line (I'd be hard-pressed to find someone who is)
<roptat>bung6[m], ok, first run this and tell me when it's done: "qemu-img resize guix-system-vm-image-1.3.0.x86_64-linux.qcow2 +30G"
<zimoun>mothacehe added Guix support to upstream gnome-boxes :-) It should work out of the box ;-) for recent gnome-boxes on any distro, I guess.
<roptat>zimoun, unfortunately not on my system :/
<roptat>(but it's fedora 34, it's one version late)
<zimoun>zut!
<roptat>ah zimoun since you're there, should we start planning the next guix days?
<roptat>did we decide whether we should organize something before or after fosdem?
<zimoun>last year, it was on the Monday right after, right? Maybe something similar, no?
<roptat>sure, sounds good
<roptat>I don't think I was involved last year
<zimoun>We were exhausted by the November days :-)
*bung6[m] uploaded an image: (21KiB) < https://libera.ems.host/_matrix/media/r0/download/matrix.org/OADgMSNtlQXKcYLSGUSlQKGB/resim.png >
<bung6[m]>roptat:
<zimoun>Using Jami provided by apteryx and a BBB instance, it sounds good. It could be nice to be able to package LibreAdventure https://vcs.fsf.org/?p=libreadventure.git;a=tree
<roptat>bung6[m], "cp guix-system-vm-image-1.3.0.x86_64-linux.qcow2 guix-system.orig.qcow2"
*bung6[m] uploaded an image: (28KiB) < https://libera.ems.host/_matrix/media/r0/download/matrix.org/YiewUmhOGRaNQgRRqORQZIir/resim.png >
*bung6[m] uploaded an image: (29KiB) < https://libera.ems.host/_matrix/media/r0/download/matrix.org/bPYGyFIiuuZwTTEBYKPBiZbu/resim.png >
*bung6[m] uploaded an image: (111KiB) < https://libera.ems.host/_matrix/media/r0/download/matrix.org/tHygGDEjvfKPrLBXlYwepbMP/resim.png >
<roptat>bung6[m], "virt-resize --expand /dev/sda2 guix-system.orig.qcow2 guix-system-vm-image-1.3.0.x86_64-linux.qcow2"
<apteryx>zimoun: oh, a public event hosted on Jami? I think that'd be a world premiere ;-)
<roptat>zimoun, what do you want to use Jami for exactly?
*bung6[m] uploaded an image: (81KiB) < https://libera.ems.host/_matrix/media/r0/download/matrix.org/XLBpnbUuctLtqIAaFNihfpIG/resim.png >
<roptat>bung6[m], great!
<unmatched-paren>hi guix!
<roptat>bung6[m], now you can run the VM again with: "qemu-system-x86_64 -nic user,model=virtio-net-pci -enable-kvm -m 1024 -device virtio-blk,drive=myhd -drive if=none,file=guix-system-vm-image-1.3.0.x86_64-linux.qcow2,id=myhd"
<bung6[m]>roptat: then?
<roptat>and you can remove "guix-system.orig.qcow2" if the VM works
<zimoun>roptat: for freely chatting about related topics. As here, but using voice and maybe video / screensharing. Well, BBB for the main track. Jami and/or LibreAdventure for less “core” business.
<bung6[m]>manual or automatic which one?
<bung6[m]>roptat: but it is live :/
*bung6[m] uploaded an image: (196KiB) < https://libera.ems.host/_matrix/media/r0/download/matrix.org/bGeIYDNcSPgZCAWCPtQEmvZp/resim.png >
<roptat>bung6[m], that's it, it works
<bung6[m]>I don't want a live system
<roptat>it's not a live system (despite the name), changes will persist across reboots
*roptat needs to go, see you later
<roptat>zimoun, I see
<bung6[m]>How do I get in again?
<rekado_>civodul: I have indeed!
<rekado_>civodul: I might still have an os-config for that
<apteryx>zimoun: oh, then freely chatting about free topics would be better here, I believe.
*rekado_ tries to find it
<apteryx>jami doesn't yet do group chats (although the technology to do it has landed in the form of git-synchronized conversations -- dubbed "swarm")
<attila_lendvai>apteryx, git-synchronized? are you talking about https://www.ethswarm.org/ ?
<apteryx>that's probably something else; I'm talking about this: https://git.jami.net/savoirfairelinux/jami-project/-/wikis/technical/2.3.%20Swarm
<apteryx>zimoun: more precision w.r.t. group chat: jami provides a common chat area for a live video conference channel, but doesn't yet do standalone text-only group chat
<zimoun>apteryx: by chatting, I mean discussing, but using voice. I think jami-qt is less intimidating than IRC for newcomers to discuss. I do not know. :-)
<unmatched-paren>how do i use guix download to get the checksum of a specific commit in a git repo?
<podiki[m]>I think guix download is only for archives
<podiki[m]>you would git clone, checkout, and then do guix hash (with some option I'm forgetting for git repo)
<podiki[m]>"guix hash -rx ." according to my zsh history
<podiki[m]>(in the top git folder)
<attila_lendvai>apteryx, that other swarm is also a storage/communication layer, besides other things for chat. but it's more like torrent on steroids than git.
<podiki[m]>unmatched-paren: ⬆️
<unmatched-paren>thanks
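Putting podiki[m]'s steps together (a sketch; the repository URL and commit are placeholders):
    git clone https://example.org/some-repo.git
    cd some-repo
    git checkout <commit>
    guix hash -rx .   # recursive hash of the checkout, excluding .git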
<attila_lendvai>apteryx, FYI, the swarm link you have sent display "This project has no wiki pages" for me. it may require login or something...
<roptat>bung6[m], now you can run the VM again with: "qemu-system-x86_64 -nic user,model=virtio-net-pci -enable-kvm -m 1024 -device virtio-blk,drive=myhd -drive if=none,file=guix-system-vm-image-1.3.0.x86_64-linux.qcow2,id=myhd"
<bung6[m]>I am trying Trisquel
<bung6[m]>now
<roptat>ok, good luck :)
<bung6[m]>but I like guix too
<bung6[m]>np
<bung6[m]>I'll try it another time too
<bung6[m]>is this a problem for you?
<roptat>no problem
<apteryx>attila_lendvai: works in a private browser tab: https://git.jami.net/savoirfairelinux/jami-project/-/wikis/technical/2.3.%20Swarm; I am not logged in there
<roptat>apteryx, that's because of the semicolon https://git.jami.net/savoirfairelinux/jami-project/-/wikis/technical/2.3.%20Swarm works
<zimoun>PurpleSym, after fighting with Cuirass, now the situation of ghc-* on i686 for c-u-f is pretty similar to the one for master. Substitutes are now there so it is not blocking for the merge. :-)
<zimoun>efraim: how is https://issues.guix.gnu.org/52186#2 going for you?
<PurpleSym>zimoun: Ah, I see. Do we have a date for the merge already? I haven’t been following the status of c-u-f at all.
<zimoun>PurpleSym, weeks ago. ;-) Around Nov. 23rd.
<podiki[m]>with librsvg having a stop-gap solution pushed the other day, I think the time is nearly upon us
<podiki[m]>(librsvg with rust, so older version for non-x86-64 for now)
<jpoiret>alright, looks like it'll be hard to have screensharing working properly on guix. We need to automatically run pipewire+wireplumber inside the user session, as it fundamentally needs access to the xdg-desktop-portal through a dbus user session
<jpoiret>systemd handles it via `systemd --user` units
<dstolfa>jpoiret: can't you create a shepherd user service similarly to a systemd one?
<jpoiret>system-wide looks harder, as pipewire can run without it but wireplumber looks like it really wants the portal
<jpoiret>well, we can, but there's no easy plug and play technique
<unmatched-paren>is there anything wrong with doing this:
<unmatched-paren>(define-public (rust-crate crate version)
<unmatched-paren>  (list (string-append "rust-" crate)
<unmatched-paren>        (eval (string->symbol (string-append "rust-" crate "-" version)) (interaction-environment))))
<unmatched-paren>sorry that got a bit messed up
<unmatched-paren>This https://paste.debian.net/1221848/
<jpoiret>i'd argue against eval
<jpoiret>you might want to look at module-ref instead, see https://www.gnu.org/software/guile/manual/html_node/Module-System-Reflection.html
<jpoiret>(that would also make it easier to handle errors)
<unmatched-paren>my hands were getting tired of typing the name of every rust crate :)
<unmatched-paren>*typing the name twice
<jpoiret>i think the problem is that you should use (current-module) instead of (interaction-environment)
<jpoiret>but again, i think eval is the wrong idea here
<unmatched-paren>so would it be best if i just type it all manually?
<unmatched-paren>my idea was that instead of ("rust-regex" ,rust-regex-1), you'd type (rust-crate "regex" "1"), but if it's inefficient then i guess it's better manually
<jpoiret>i think you just want a macro intsead
<jpoiret>instead * however, note that label-less package inputs will soon land in master
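A sketch of the module-ref variant jpoiret suggests (assuming the crate variables live in (gnu packages crates-io); label-less inputs make this moot anyway):
    (define (rust-crate crate version)
      ;; look the variable up by name instead of eval'ing a symbol
      (let ((sym (string->symbol (string-append "rust-" crate "-" version))))
        (list (string-append "rust-" crate)
              (module-ref (resolve-interface '(gnu packages crates-io)) sym))))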
<Gooberpatrol_66>can you print the logs of a particular daemon in guix?
<zimoun>Gooberpatrol_66: I am not sure what you mean. “guix build foo --log-file” shows the log for ’foo’.
<Gooberpatrol_66>zimoun: like the logs of a shepherd service
<zimoun>I do not know.
<jpoiret>you can use the #:log-file keyword argument of make-forkexec-constructor if it uses standard {in,out}put, see https://www.gnu.org/software/shepherd/manual/html_node/Service-De_002d-and-Constructors.html#Service-De_002d-and-Constructors
<jpoiret>but some daemons log using syslog instead
<jpoiret>if so, you can look at /var/log/{debug,messages}
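A minimal sketch of a Shepherd service using that #:log-file argument (daemon name and paths are hypothetical):
    (shepherd-service
     (provision '(my-daemon))
     (documentation "Run my-daemon, logging to a file.")
     (start #~(make-forkexec-constructor
               (list #$(file-append my-daemon "/bin/my-daemon"))
               #:log-file "/var/log/my-daemon.log"))
     (stop #~(make-kill-destructor)))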
<Gooberpatrol_66>hmm, i found /var/log/messages, which says "ssh-daemon could not be started", but it doesn't print any output from it
<jpoiret>i think shepherd is missing (maybe it has it already?) a way to start a service and attach its std{in,out} to the current one
<roptat>jpoiret, didn't you just say there was #:log-file?
<jpoiret>yes, but that requires reconfiguring
<roptat>wouldn't it be enough to use this on all our services?
<roptat>ah I see
<jpoiret>you could maybe have `herd start --undaemonize daemond`
<roptat>but it would still fix this issue going forward, right?
<jpoiret>or `herd start --attach` *
<jpoiret>yes, i think so
<jpoiret>i think that everyone writing a shepherd service should look into what the best log method is and enable it
<jpoiret>although there might be some security concerns with logging every single service, i'm not 100% sure
<roptat>I think it's ok in most cases, and we can probably even log to a file owned by root?
<roptat>most likely every service will have something to say on its stdout, even if it sends logs to syslogd when all is fine
<roptat>for instance, I know nginx will start complaining about syntax errors in its configuration file and crash before it starts using syslogd and it's annoying
<roptat>Gooberpatrol_66, the only thing I can suggest now is to find the configuration file and start ssh by hand to see what it's complaining about
<roptat>Gooberpatrol_66, if you do "guix gc -R /run/current-system | grep sshd" you'll find the service file (shepherd-ssh-daemon-ssh-sshd.scm). Inside it is the command the shepherd uses to start sshd, inside the make-forkexec-constructor
<roptat>in my case, it does something like /gnu/store/vh3xdcbg695rgansg736jisd218q4fw2-openssh-8.8p1/sbin/sshd -D -f /gnu/store/iifdfd2djs65vfb19g0kvr09inbhccdp-sshd_config
<roptat>so if you run the same command, you should see why it's failing
<roptat>of course in your case the store paths will be different
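Condensed, the debugging recipe roptat describes (store hashes will differ on each system):
    guix gc -R /run/current-system | grep sshd   # locate shepherd-ssh-daemon-...-sshd.scm
    # then run the command found inside make-forkexec-constructor by hand, e.g.
    sudo /gnu/store/...-openssh-8.8p1/sbin/sshd -D -f /gnu/store/...-sshd_config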
***zimoun` is now known as zimoun
<jgart>$ guix shell httpie -- http get https://files.pythonhosted.org/packages/aa/51/59965dead3960a97358f289c7c11ebc1f6c5d28710fab5d421000fe60353/asttokens-2.0.5.tar.gz | tar -zt | ${PAGER-less}
<jgart>The above command shows the tar file on PyPi in less or can be easily changed to use vim or whatever to view it.
<jgart>I'd like a script like this instead:
<jgart>$ check-pypi-tests asttokens
<jgart>The use case is checking quickly/conveniently from my terminal on PyPi if the python package in question includes tests or not.
<jgart>The issue I see is getting the hash in the url for any package on PyPi from just the canonical package name as an input/arg. Guixers, WDYT?
<jgart>just brainstorming this script aloud. Any thoughts/advice is much appreciated
<rekado_>hmm, I lost my browser history and all my bookmarks since the last icecat troubles
<rekado_>I rebooted, started the new icecat, it came up empty and I couldn't open a new tab (in the terminal I'd see errors).
<rekado_>so I started the old icecat again, copied my tabs from backup, but history and bookmarks are gone.
<rekado_>any idea how to recover those?
<rekado_>I had just gotten into the habit of bookmarking things, and I feel ... betrayed.
<lispmacs[work]>rekado_: maybe open with --ProfileManager and confirm which profile you are using?
<jgart>nckx, Do you happen to remember why you wrote this phase to inhibit terminfo install?
<jgart>nckx, https://git.savannah.gnu.org/cgit/guix.git/commit/?id=fc1a7b58d73089067c37fbf59555dbea598e8160
<jgart>rekado_, maybe there's a way to search for them from the sqlite database that they are saved to?
<jonsger>rekado_: I didn't touch icecat, only icedove :P
<unmatched-paren>do i use rust-gcc@0.3 instead of rust-cc@1? rust-gcc doesn't seem to have a 1.0.0 version
<apteryx>jgart: why not use wget?
<jgart>apteryx, any command that provides a GET would work
<jgart>The biggest issue I'm trying to figure out is how to construct the full URL from just the canonical PyPi package name
<opalvaults>question for anyone who has a similar setup, I'd like to start using GUIX on my desktop. I imagine the nouveau drivers work just fine for a multi-monitor setup?
<jgart>apteryx, asttokens == https://files.pythonhosted.org/packages/aa/51/59965dead3960a97358f289c7c11ebc1f6c5d28710fab5d421000fe60353/asttokens-2.0.5.tar.gz
<jgart>apteryx, $ check-pypi-tests asttokens
<unmatched-paren>opalvaults: i'm pretty sure it depends on your desktop
<unmatched-paren>i think (think) sway and gnome both work pretty well with multi-monitor
<unmatched-paren>i guess you should try them all and see what works
<opalvaults>unmatched-paren: good advice, thank you! I have a AMD CPU with Integrated graphics and an Nvidia card. I was hoping to be able to run Gnome 40 when it's out of the frozen repo.
<lilyp>jgart: w.r.t. checking for tests, do you have a moment to talk about our lord and saviour grep?
<opalvaults>I'd ditch the nvidia card but my mini atx mb only has one hdmi. anyways. i'll report back here with results. Is there a good place to report results of certain builds for future newcomers?
<jgart>lilyp, can you unpack that statement?
<unmatched-paren>no idea, sorry
<jgart>lilyp, how were you thinking of solving the above with grep?
<opalvaults>unmatched-paren: thanks for responding anyways :)
<unmatched-paren>why is cmake-minimal an undefined variable? i have gnu packages cmake imported
<roptat>unmatched-paren, is this the very first error?
<jgart>unmatched-paren, (define-public cmake-minimal
<jgart>why is it that `guix edit cmake-minimal` doesn't work?
<roptat>jgart, because the package name is probably just cmake
<jgart>if it is being exported by define-public macro?
<roptat>ah nevermind
<jgart>roptat, nm to me or someone else?
<apteryx>jgart: guix import pypi already does so, no?
<jpoiret>since i'm reading some xdg-desktop-portal stuff, civodul: are you running that ungoogled-chromium under wayland? what DE are you using?
<roptat>jgart, cmake-minimal inherits from cmake-bootstrap, which has (properties '((hidden? . #t))), meaning it can't be found from the CLI
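For what it's worth, a hidden package can still be reached from the CLI by referring to its Scheme variable directly, e.g.:
    guix build -e '(@ (gnu packages cmake) cmake-minimal)'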
<jgart>apteryx, does `guix import pypi asttokens` put anything in the store? It looks like not from the stdout/err I get
<jgart>unless it does so behind the scenes, haven't looked that far
<unmatched-paren>roptat: wdym? i had a few 'undefined' errors before, from packages that weren't in guix yet
<jgart>roptat, ah cool! thanks for pointing that out
<unmatched-paren>i'm trying to do wezterm (turns out it doesn't actually need c-u-f rust)
<jgart>TIL, (hidden? . #t)
<unmatched-paren>it has a LOT of dependencies
<jpoiret>unmatched-paren: basically any rust package
<unmatched-paren>typical rust -> https://paste.debian.net/1221873/
<unmatched-paren>it's a lot even for rust tbh
<unmatched-paren>it's got five billion subcrates (the repo's a workspace)
***tex is now known as Guest6318
<Guest6318>Hello, just installed Guix, works fine, but how can I switch pulseaudio to pipewire? Need that for good BT support. I am able to kill pulseaudio and start pipewire manually, but that is lame.
<jgart>Is it true that Rust doesn't host old versions of packages long term? What is guix's policy around keeping old packaged rust programs after some years have gone by? Or where can I read about it?
<jpoiret>Guest6318: what a coincidence
<jpoiret>i've been working on pipewire today
<unmatched-paren>jgart: huh? rust's policy is that crates.io will be a permanent archive of code, and to that end they don't even allow deleting crates
<jpoiret>out-of-the-box pipewire support will take a bit more work, and pipewire-pulse should then ensue
<unmatched-paren>they take backwards compatibility _very_ seriously
<jpoiret>the issue is that we don't really have an equivalent for `systemd --user` right now
<Guest6318>jpoiret Successfully? :) I am learning guile, I think I understand the basics, trying to learn more. Do you have anything to share?
<jpoiret>user shepherd would work, but it still requires manual intervention for now, we don't launch one automatically
<jgart>jpoiret, Guest6318 https://github.com/bqv/rc/blob/live/rc/home/leaf.scm#L233
<jgart>jpoiret, Guest6318, someone already wrote a pipewire service they just have not contributed it to upstream
<jgart>bqv
<jgart>might find some ideas in there
<unmatched-paren>(but you can yank a crate to prevent people from depending on it in the future, but the code is still there, and packages that already depend on it can continue depending on it)
<jgart>they have an iwd service also that would be nice to have in upstream: https://github.com/bqv/rc/blob/live/rc/services/iwd.scm
<jpoiret>well, yes, that's what i would write too. But that requires a user shepherd instance
<jgart>unmatched-paren, where can I read that?
<jgart>regarding crates policy
<unmatched-paren>somewhere in docs.rust-lang.org, lemme find it
<jpoiret>but yes, i think that for now, using a user shepherd is the way forward.
<jgart>I had heard that regarding rust crates from carbslinux maintainer
<jpoiret>Guest6318: what WM/DE do you use?
<jpoiret>you could autostart them using xdg-autostart if you have a compatible DE, otherwise just include it in your config
<Guest6318>jpoiret jgart As I really don't understand it too much, I was trying to find out what starts the pulseaudio process, and then would just create a service for starting pipewire. But that's my 5min understanding of the situation, which of course is way off from reality? :D I am using i3.
<jpoiret>when using pipewire, you don't use pulseaudio-the-server but rather a pipewire-based pulseaudio-compatible server
<unmatched-paren>jgart, found it, it's in chapter 14 section 2 of the rust book: https://doc.rust-lang.org/book/ch14-02-publishing-to-crates-io.html#removing-versions-from-cratesio-with-cargo-yank
<jpoiret>it replaces pulseaudio-the-server
<unmatched-paren>also https://doc.rust-lang.org/book/ch14-02-publishing-to-crates-io.html#publishing-to-cratesio
<podiki[m]>on the shepard and logging discussion, would love better logging
<unmatched-paren>"Be careful when publishing a crate because a publish is permanent. The version can never be overwritten, and the code cannot be deleted. One major goal of crates.io is to act as a permanent archive of code so that builds of all projects that depend on crates from crates.io will continue to work. Allowing version deletions would make fulfilling that goal impossible."
<podiki[m]>for a simple user shepherd service I redirected output to syslog, which is okay
<podiki[m]>but proper logging and filtering controls would be great (I miss that about journalctl)
<jpoiret>now, pulseaudio is supposed to be started through a service manager. pipewire-pulse is supposed to be started by a user service manager (because it needs to connect to the session dbus, not the system one)
<Guest6318>jpoiret Yes, I know. Something is starting pulseaudio (don't know what and why, looked at its service and didn't find the start / stop service)...
<jpoiret>that's dbus
<podiki[m]>pulse audio also has an autostart desktop file
<unmatched-paren>jgart: this also means that they are immune to a left-pad incident :)
<podiki[m]>in my case: .config/guix/profiles/desktop/desktop/etc/xdg/autostart/pulseaudio.desktop will run start-pulseaudio-x11
<jgart>unmatched-paren, you're referring to this? https://www.theregister.com/2016/03/23/npm_left_pad_chaos/
<unmatched-paren>yes
<jpoiret>hmmm, i might be wrong on that one. It's not dbus. Maybe the pulseaudio library tries to start a server if it cannot find one, and finds it through its config file?
<jgart>k, thnx
<unmatched-paren>since there is no `cargo unpublish`, this cannot happen to crates.io
<jpoiret>i'm on a WM that does not comply with xdg-autostart
<jpoiret>oh, but maybe it's GDM
<jgart>unmatched-paren, that's good to know
<Guest6318>i am using guix, gdm, i3. I don't have .config/guix/profiles... I have .config/guix/channels.scm and current. I have .guix-profile/etc/xdg/autostart/blueman.desktop. I cannot find any xdg that would start pulseaudio.
<Guest6318>And that blueman applet doesn't look like it is starting automatically...
<jpoiret>wonderful! we have so many ways to start services, all oblivious to all others: shepherd, dbus, xdg-autostart, Xsession.d?
<jpoiret>Guest6318: i think it should be in /run/current-system/profile/etc/xdg/ instead
<podiki[m]>you could try installing dex and seeing what it says with a dry run (dex will run xdg autostart for plain WMs, I use it)
<podiki[m]>sorry, my profile directory is because I use multiple profiles
<Guest6318>thanks, yes, it is in the /run/current-system/...
<jpoiret>well, anyways, see you guix, be gone for the weekend
<jpoiret>Guest6318: anyways, first you'll have to remove the pulseaudio-service-type from the %desktop-services
<Guest6318>thanks and enjoy your weekend!
<jpoiret>then look into the link jgart posted for an approach using a user shepherd
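In config.scm that removal looks roughly like this (a sketch):
    (services
     (modify-services %desktop-services
       (delete pulseaudio-service-type)))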
<Guest6318>ok, thank you folks!
<ngz>Lots of "no space left on device" errors from ci.guix.gnu.org
<ngz>This does not bode well.
<Guest6318>so the pulseaudio-service-type "just" generates the pulseaudio.desktop in the xdg/autostart? (I am trying to understand how guix services work, and this one doesn't contain start/stop handlers...)
<podiki[m]>I'm not sure, but as noted probably has the dbus service files that programs will use
<podiki[m]>and if you are on a plain WM, you may want to use a dbus launch command for your WM (I needed that for various dbus errors I had)
<KE0VVT>the_tubular: Ah. I just write in what I need.
<unmatched-paren>...and now openssl, libx11 and xcb-util are all showing up as undefined...
<unmatched-paren>????
<rekado_>lispmacs[work]: it's the correct profile
<unmatched-paren>all the correct modules are imported... xorg, tls, cmake, etc are all #:use-moduled
<unmatched-paren>this is bizarre
<rekado_>jgart: I'll try sifting through all those files in the profile and then look through the sqlite dbs.
<rekado_>unmatched-paren: I haven't followed, but: are there any prior errors?
<unmatched-paren>no, if you mean multiple errors are appearing and this is a later one
<unmatched-paren>;;; note: source file /home/paren/code/clones/guix/guix/config.scm
<unmatched-paren>;;; newer than compiled /home/paren/code/clones/guix/guix/config.go
<unmatched-paren>ice-9/eval.scm:223:20: In procedure proc:
<unmatched-paren>error: libxcb: unbound variable
<unmatched-paren>hint: Did you forget a `use-modules' form?
<unmatched-paren>i'll try making again?
<unmatched-paren>ok looks like there's a missing close paren in crates-io.scm...
<civodul>ngz: should be fixed now; there were two build machines where guix-daemon was stuck somehow and "guix gc" would fail with "build daemon out of memory"
<jgart>rekado_, yeah I imagine looking through sqlite would be more painful. Hopefully it works out with the profile
<ngz>civodul: ok, thanks. I was afraid of all those little Emacs packages failing to build.
<ngz>s/afraid of/shocked by/ :)
<unmatched-paren>found it :)
<unmatched-paren>it works \o/
<unmatched-paren>now i can go back to the relaxing guix import crate -> nvim crates-io.scm -> ./pre-inst-env guix build -f cycle >:)
<jgart>unmatched-paren, what are some of your goals for rust packaging in guix? What would you like to achieve?
<jgart>If you don't mind sharing
<unmatched-paren>jgart: wdym?
<unmatched-paren>i'm just packaging the stuff that i need/want tbh
<unmatched-paren>rn i'm doing wezterm, which is the best terminal emulator i could find
<unmatched-paren>little bit bloated, but having too many features is better than not having all the features i want
<unmatched-paren>(imo)
<jgart>how is wezterm different from alacritty?
<jgart>in other words, what are some pros it has over alacritty
<unmatched-paren>alacritty is intentionally minimalist, wezterm is intentionally maximalist
<unmatched-paren>wezterm has multiplexing, lua configuration, ligatures, etc
<lilyp>jgart: by using some grep as the tail of your pipe rather than less
<unmatched-paren>alacritty does not implement those things, on purpose
<lilyp>though note you're sacrificing exactness for speed here; if the tests are in a different location than you're expecting you have to adapt your regexp
<jgart>unmatched-paren, oh ok cool
<unmatched-paren>the goal of alacritty seems to be to create a terminal that runs at 39291019 fps
<jgart>lilyp, yeah that's why it might just be easier to inspect it with less or $EDITOR
<unmatched-paren>right now, i'm typing in gnome-terminal, which sadly does not have ligatures
<jgart>but the issue I was talking about is a different one
<lilyp>care to elaborate?
<jgart>how do I derive the long url with hash from the PyPi canonical name?
<jgart>let me post here the relevant part
<jgart>one sec
<jgart>lilyp, this works:
<jgart>guix shell httpie -- http get https://files.pythonhosted.org/packages/aa/51/59965dead3960a97358f289c7c11ebc1f6c5d28710fab5d421000fe60353/asttokens-2.0.5.tar.gz | tar -zt | ${PAGER-less}
<jgart>or you can use wget instead of httpie, if you like
<lilyp>oh, I see
<jgart>the script interface I want is the following:
<lilyp>so dereferencing the origin is what you're interested in?
<jgart>check-pypi-tests asttokens
<lilyp>I'm pretty sure instead of wget or httpie you can use guile's (web uri)
<jgart>I want to be able to just type the above
<lilyp>and invoke some mini script in a guix repl
<jgart>and get the output of the tarball piped to less
<lilyp>obviously, since that's a rather specific use case you'd have to code that up yourself, but it ought to be fairly simple
<jgart>lilyp, I don't know how to get .../59965dead3960a97358f289c7c11ebc1f6c5d28710fab5d421000fe60353 for every canonical name on PyPi
<jgart>I don't know how to construct or get that URL, that is
<lilyp>you don't need to?
<jgart>maybe there's a better way to go about it
<lilyp>guix knows
<jgart>yup that's one question I have
<jgart>lilyp, with `guix import pypi ...`?
<jgart>or the functions that make it up?
<jgart>that are are not exposed from the CLI
<jgart>the `guix import ...` CLI, that is
<lilyp>see pypi-uri in (guix build-system python)
<jgart>;; The PyPI API (notice the rhyme) is "documented" at:
<jgart>;; <https://warehouse.readthedocs.io/api-reference/json/>.
<jgart>on line 65
<jgart>of pypi.scm
<jgart>I'll check the build-system also
<jgart>lilyp, It looks like guix makes a GET to /pypi/sampleproject/json
<jgart>and decodes the json later to sexps, I'm presuming...
<lilyp>obviously, as any lisp would
<jgart>right
<jgart>just thinking aloud
<lilyp>releases is a map of string to some struct
<lilyp>this struct holds filename and url
<lilyp>url is the url you want to send to httpie (or handle in Guile)
<jgart>where are you seeing `releases` at?
<lilyp>filename is the filename you want to store it as (most likely)
<lilyp>the documentation you sent
<jgart>oh ok
<lilyp>has info.releases.VERSION
<lilyp>there's also a raw url field, but I'd avoid that
*jgart actually reads the docs shared above
<jgart>TIL, PyPi has a json API
<jgart>lilyp, where should I put pyproject only packages?
<jgart>what branch?
<jgart>core-updates?
<lilyp>i think you can actually build them on master but with headaches
<lilyp>has the pyproject build system been merged on c-u-f yet?
<jgart>oh this just got way easier
<jgart> http get https://pypi.org/pypi/asttokens/json
<jgart>that has the url I need
<jgart>done, great!
<jgart>lilyp, not sure about the status of that
<opalvaults>guix/grub is unable to see filesystem on nvme drive after install. anyone know if really new nvme drives are supported?
<opalvaults>just used the guided option, nothing no non-default channels, etc.
<jgart>lilyp, http get https://pypi.org/pypi/$PKG/json | jq '...'
<opalvaults>s/nothing no/did not use
<jgart>I'll probably prototype a quick hack that way first with jq and httpie
<jgart>might be enough for what I need
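A rough sketch of that prototype, using wget plus jq for brevity (httpie works the same way; the jq path follows the "urls" field of the JSON API):
    #!/bin/sh
    # check-pypi-tests PKG -- page through the sdist contents of PKG from PyPI
    pkg=$1
    url=$(wget -qO- "https://pypi.org/pypi/$pkg/json" \
          | jq -r '.urls[] | select(.packagetype == "sdist") | .url')
    wget -qO- "$url" | tar -zt | ${PAGER:-less}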
<lilyp>opalvaults: that probably depends on your nvme drive and settings
<lilyp>for instance, a few guix folks fall into the ext4 large_dir trap (unsupported by grub)
<opalvaults>that's what i thought too, it's UEFI boot with the correct partitioning (1 esp,boot, 1 root as per the guided install)
<opalvaults>guix sees the drives and partitions fine + no errors until reboot
<opalvaults>it's a 2021 x1 carbon
<opalvaults>maybe too new?
<opalvaults>ill try the devel image too just to make sure there hasn't been any patches for this thing i guess
<lilyp>opalvaults: did you also check, that the drive letters don't change/use labels or UUIDs?
<opalvaults>lilyp: the /dev/ block device names? or the UUID's themselves. i'll get an lsblk before i reboot to make sure
<opalvaults>i'm kind of skeptical right now that the installer is reformatting the esp partition unless i manually do it but i might be paranoid
<lilyp>/dev/block sounds like a bad decision imo
<opalvaults>??
<jonsger>hm, we should update telegram-desktop. They will drop support of our outdated version...
<lilyp>I'd rather pick /dev/sd[a-z] if that's known to be stable
<lilyp>or again label/UUID if not
<opalvaults>it's standardized afaik to be nvme[0-9]p[0-9]
<lilyp>ah yes, those are also stable in my personal experience
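For reference, pinning a file system by UUID in an operating-system declaration looks like this (a sketch; the UUID is made up):
    (file-system
      (mount-point "/")
      (device (uuid "1c945e3a-29b0-4e2a-9d4c-8a0d2f1e6b7a"))
      (type "ext4"))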
<opalvaults>unsure if guix uses udev, but as far as i can tell that's done for the guix installer already so i have no control over it. the UUIDs shouldn't change but like i said i'll do a blkid before i reboot to make sure udev didn't do something stupid
<lilyp>there's no udev that soon in the boot chain
<lilyp>you need to boot before you can udev
<opalvaults>oh right, derp.
<unmatched-paren>the crates-io.scm is so annoying to edit :(
<opalvaults>i'm fairly certain there's a bug where the installer doesn't reformat the esp partition if it already exists.
<unmatched-paren>maybe we could break it up a bit? rn we have crates-io, crates-gtk and crates-graphics
<unmatched-paren>opalvaults: yes there is
<opalvaults>oh
<unmatched-paren>i have experienced it first hand :)
<opalvaults>well that should solve my problem then lmao
<opalvaults>because i've now manually reformatted it
<unmatched-paren>it does not format the esp partition by default
<unmatched-paren>for some reason
<opalvaults>debian also sometimes won't unmount the esp partition by default, so apparently gnu/linux and efi don't get along sometimes
<unmatched-paren>when i first installed guix i got launched into a grub shell since my old debian had been wiped but its grub hadn't
<unmatched-paren>it confused me for ages :)
<opalvaults>ahhh, yeah that's what's happening to me
<opalvaults>it was trying to grab my old debian UUID then
<opalvaults>or some such nonsense
<opalvaults>fingers crossed. i'd really like to switch all of my machines to guix this weekend
<unmatched-paren>you should be able to mash f12 when you boot to switch to guix even if you haven't wiped the esp
<unmatched-paren>right?
<unmatched-paren>there's been at least one other person confused by this exact same thing that i remember :P
<opalvaults>yeah i'm able to use the guix boot option
<opalvaults>but the handoff to grub errors with 'can't find {UUID here}''
<unmatched-paren>i should file a bug but i can't be bothered :)
<opalvaults>i'll do it, just the mailing list?
<unmatched-paren>opalvaults: wait what?
<unmatched-paren>that didn't happen to me
<opalvaults>oh lmao
<opalvaults>I think it might be two sides of the same coin anyways.
<unmatched-paren>i hate firmware
<opalvaults>the fact that it's not updating esp partitions upon reinstall means that if the UUID/literally any other non-static thing changes it could have a chance to screw up the handoff to grub
<opalvaults>so maybe if that gets fixed, it would fix both of our problems? D:
<opalvaults>i hope!
<unmatched-paren>we'd probably be a lot further on technologically if we just had The SSD Firmware(tm) and The BIOS(tm) and The Graphics Driver(tm) in github repos somewhere that any company could use and improve on
<opalvaults>yeah no kidding :(
<unmatched-paren>but we've gotta keep those incompatibilities flowing, don't we!
<opalvaults>libreboot 2021 x1carbon when?
<unmatched-paren>we kind of do have The BIOS (coreboot), but even that needs some blobs to function on anything that isn't an ancient thinkpad from the stone age