<sigfig>is this an appropriate place to ask questions about building guix from source
<sigfig>i have some pretty strange issues with the installation i built locally and i'm not really sure how to diagnose it
<lfam_>sigfig: This or the <email@example.com> mailing lists are the place
<sigfig>i'm attempting to build the standard install image from master, using `guix system gnu/system/install.scm`, with guix built from the same repo and no modifications
<sigfig>this sort of works but during every build action there's a stall for about 100-200 seconds, and strace reports the guix daemon is trying to connect to every host in the 22.214.171.124/24 range
<sigfig>possibly others, i've only caught the output twice so far
<sigfig>i'm running the daemon with --no-substitutions attempting to do everything locally but clearly not succeeding
<sigfig>the length of the delay is probably just due to connect() roundtrips taking a while from my vpn, but why on earth is it scanning a host range when trying to build a derivation
<sigfig>about half the hosts i've checked from that list are up and running an http server, so i presume these are actually part of the guix ecosystem and are just unusually contiguous
<lfam_>sigfig: My guess is those IPs are used by Cloudfront
<lfam_>Overall though, that's not the expected behavior
<lfam_>I can try to reproduce it. Basically, you built Guix from source, started the daemon with '--no-substitutes' (double-check your spelling there), and then did `guix system gnu/system/install.scm`?
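[For reference, a minimal sketch of the workflow lfam_ describes, assuming a checkout of the Guix repository and the usual guixbuild build group, per the manual's "Building from Git" instructions:]
```
git clone https://git.savannah.gnu.org/git/guix.git
cd guix
./bootstrap && ./configure --localstatedir=/var && make
# run the freshly built daemon, without substitutes
sudo ./pre-inst-env guix-daemon --build-users-group=guixbuild --no-substitutes &
# build the installation image with the same tree
./pre-inst-env guix system disk-image gnu/system/install.scm
```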
<sigfig>the most information i've gathered so far is, after downloading something (in this case the linux-libre archive) guix prints a message indicating it's building, then issues a series of about 60 connect() calls to these hosts on port 443, closing each one immediately
<lfam_>Copy it out of the store, decompress it, make it writable, then run it with QEMU
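[A sketch of those steps; the store file name is a placeholder for whatever path the build prints:]
```
image=/gnu/store/...-disk-image   # placeholder: path printed by the build
cp "$image" /tmp/guix-install.img # copy it out of the read-only store
# if the image is compressed, decompress it first (e.g. `unxz' for .xz)
chmod +w /tmp/guix-install.img    # make it writable
qemu-system-x86_64 -m 1024 -enable-kvm /tmp/guix-install.img
```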
<sigfig>ok, there appear to be a few major "stall" points in networking here: every transfer performed via http completes, then causes a stall on the order of 40 seconds; failed transfers or redirects also do this, but there's no related output in strace or -v 9 for this
<sigfig>this does not appear to be part of the socket handling, since it closes before the stall
<sigfig>this occurs with every http connection involved in building the bash package, so every individual patch spends between 80 and 120 seconds just spinning, depending on how many redirects it gets
<sigfig>starting to get the impression guix is very fragile
<sigfig>let's see if the binaries have similar problems
<rvgn>Dynamicmetaflow Qubes is based on the principle of "Security through Isolation". The same can be conceptually reproduced with `guix system vm` and `guix system container`. Qubes was a brilliant idea; and so were flatpak, docker, LXC, etc. But the thing is, these features should have been innately provided by system/package managers, and that's one of the visions of the Guix project, as explained by civodul in the con…
<Dynamicmetaflow>Thanks, I think I did listen to this some time ago but I think it will be a good reminder since I have a little more experience with Guix
<Dynamicmetaflow>So I completely agree with the statement you quoted about guix; the question that I have is how to replicate something like the Qubes Manager that lists VMs, so we could manage them with Guix.
<Dynamicmetaflow>From a prior conversation, I think an interface like the one emacs-guix has could be great, but for managing and starting VMs etc
<rvgn>Dynamicmetaflow I will not be able to answer that question now as I am not well experienced with them either, apart from conceptually understanding them.
<rvgn>QUBES: Installing program abc inside work domain that can be accessed and available only in that domain. Installing program xyz in the host domain which can be used and made available in all other security domains.
<rvgn>GUIX: Installing program abc inside guix-vm-1, which has a dedicated /gnu/store; that program will be accessible and available only inside that vm. Installing a program inside the host's /gnu/store, which is accessible and available to all other guix-vm-x that share the host's store.
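[Roughly, the commands behind that comparison; my-os.scm is a placeholder for your own operating-system declaration. Note that `guix system vm` shares the host's store with the guest, while `vm-image` builds a self-contained image with its own /gnu/store:]
```
guix system vm my-os.scm         # prints a script booting a VM that shares the host store
guix system vm-image my-os.scm   # standalone image with its own /gnu/store
guix system container my-os.scm  # prints a script spawning a container (run it as root)
```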
<sigfig>released v1.0.1 binaries do in fact have this issue when running the daemon with --no-substitutes, i still see nonsense connect calls to the same assortment of servers
<sigfig>tho the time it wastes on those is only about thirty seconds
<rvgn>Dynamicmetaflow That's one example comparison as far as I understand. :-)
<Dynamicmetaflow>Thank you rvgn :-). I'm with you; after using Qubes, while I appreciate all of the efforts, contributions, and progress made, particularly for the security landscape, I'm left with an itch, thinking: I should, and I know I can, do this with Guix.
<Dynamicmetaflow>The one part I'm unsure about is that Qubes has what's called an HVM, where the ethernet and wireless networks, for example, run in a separate VM. Then vm abc has access to the network vm, and vm xyz does not, or has limited access
<sigfig>problem happens reliably if you just `guix build linux-libre` (which, as i've just discovered, absolutely destroys your terminal with the stdout of any other currently running build jobs)
<Dynamicmetaflow>I think my general takeaway, after spending a few days getting qubes up and running, is that it provides interfaces and tools to manage the VMs, allowing the model of security through isolation; the same can be done with guix, it's a matter of creating similar tools and interfaces or adapting existing ones
<Dynamicmetaflow>so I'm going to dedicate more time to the guix tools and see what comes from it all
<sigfig>using the release binaries i was able to get a guix system working, but shortly after i noticed the gcc 5.5.0 derivations aren't reproducible, in the sense that any build fails with a linking error that is a bit obscured by guile stacktraces
<apteryx>Basically it takes the subvolume name from rootflags=subvol=your-volume-name (a kernel argument specified in your Guix config), and uses that name to prefix the linux and initrd paths that are written to /boot/grub/grub.cfg
<apteryx>so, if you have defined rootflags=subvol=rootfs, then your grub.cfg will have linux and initrd paths starting with /rootfs/gnu/store... instead of /gnu/store...
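[A hypothetical illustration of the result, with `rootfs` as the subvolume name and the store hashes elided:]
```
# /boot/grub/grub.cfg (excerpt), given rootflags=subvol=rootfs
linux  /rootfs/gnu/store/...-linux-libre/bzImage ... rootflags=subvol=rootfs
initrd /rootfs/gnu/store/...-initrd/initrd.cpio.gz
```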
<apteryx>The last bit I need to add is documentation; I'm trying to see where it'd fit.
<pkill9_>i'm getting an unbound variable error in the package module, with a correct module suggestion, even though I've put the variable in #:export and used "define-public" instead of "define" (just to test if it works); what could be the issue?
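[For comparison, the usual shape of a package module; `define-public` both defines and exports the name, so an extra #:export entry is redundant but harmless. The module and package names here are made up:]
```scheme
(define-module (my packages my-hello)
  #:use-module (guix packages)
  #:use-module (gnu packages base))  ; for `hello'

(define-public my-hello
  (package
    (inherit hello)                  ; example: tweak an existing package
    (name "my-hello")))
```
[If the variable still shows up as unbound, one common culprit is Guix loading a stale copy of the module from elsewhere on the load path (`-L` / GUIX_PACKAGE_PATH).]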
<nckx>Although I do wonder how to tell the shepherd not to halt but call a custom command like kexec.
<Marlin[m]>most likely it will take some time for the traditional unix-like way of doing stuff to be "bridged" to guix
<chipb>well, I'm rather uncertain that other distros will make my bootloader any easier to construct. kexec isn't exactly hard to build.
<chipb>I guess dracut might make it marginally straightforward, but why not just play in some scheme instead. ;-)
<chipb>oh, bugger. I guess it does take cooperation with init to do cleanly.
<chipb>still, that seems like it Can't Be Too Bad...
<nckx>chipb: Yes, actually, in your case a graceful shutdown probably wouldn't matter, since everything's going to be mounted ro and you don't care about the ‘pre’ system at all once the real kernel loads.
<nckx>Still, it'd be nice to support ‘fast kexec reboots’ like it's 2008 and still cool.
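[For the record, the manual kexec sequence looks roughly like this (paths illustrative); the "cooperation with init" chipb mentions below is that shutdown should end in `kexec -e` instead of a normal reboot:]
```
kexec -l /boot/vmlinuz --initrd=/boot/initrd.img --reuse-cmdline  # load, don't boot
# ...stop services, sync, remount filesystems read-only...
kexec -e   # jump straight into the loaded kernel, skipping the firmware
```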
<chipb>oh geez. or I could do the sane thing and just put the vmlinuz/initrd on the same EFI partition as grub and be done with it...
<nckx>chipb: …er, oh, I thought the whole issue was that your UEFI firmware didn't see the drive? Or do you have a non-NVME EFI partition?
*nckx gives up farting around with ibus for now; if anyone's experienced, I beg you for aid.
<chipb>well, the EFI partition being on a thumbstick.
<nckx>That's not supported out-of-the-box either (you'll still get to write that Scheme code) but yeah, probably more sane 😛
<pkill9_>how feasible would it be to use inferior packages as inputs for packages? lol
<chipb>it's all too happy to deal with that, and let me install to an NVMe drive (sans an annoying "unsupported drive" prompt on boot). just not a thing if I don't use the USB. heh.
<pkill9_>that could end up as a guix-specific dependency hell
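[Inferiors do support exactly this; a sketch adapted from the manual's example (the commit hash is just an illustration of a pinned revision). The resulting inferior package can be listed in another package's inputs like a regular package, which is precisely where the dependency-hell concern comes in:]
```scheme
(use-modules (guix inferior) (guix channels)
             (srfi srfi-1))

(define channels
  ;; Pinned older revision of Guix.
  (list (channel
         (name 'guix)
         (url "https://git.savannah.gnu.org/git/guix.git")
         (commit "65956ad3526ba09e1f7a40722c96c6ef7c0936fe"))))

(define inferior
  ;; An inferior representing the above revision.
  (inferior-for-channels channels))

;; An older guile from that revision, usable as an input:
(first (lookup-inferior-packages inferior "guile" "2.0"))
```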
<chipb>same laptop just hung on me a second time today. debian 10 seems unhappy on it?
<chipb>ahem. with the plain sata m.2 drive, that is.
<nckx>I don't want to speak for j-r, but I read the question as ‘how can I run a user's shepherd instance & services, as that user, but before they log in?’ Is that correct?
<nckx>What with all the ‘modern’ session/logind/seat magic, I don't really know the canonical answer to that anymore.
<j-r>yes. The particular use case at the moment is starting znc as a user when the system starts.
<rvgn>efraim Just saw your email regarding virt-manager bug. I get the same output for `ls /sys/fs/cgroup/unified/`. But as shown in the error, the virt-manager is searching the directory "/sys/fs/cgroup/unified/machine" (not "/sys/fs/cgroup/unified/"), which does not exist. o.O
<j-r>I want to have the system shepherd have a service that has something like #:start (make-forkexec-constructor '("/home/james/.guix-profile/bin/shepherd") #:user "james" #:group "james")
<rekado>j-r: we don’t have a mechanism for user services yet. This would have to be done outside of the operating system configuration (e.g. by launching a shepherd instance upon login).
<j-r>I'm trying to make sure that when my linode reboots, all the services start back up. I don't want to run znc as root.
<j-r>I have this working (mostly) on Debian. A systemd unit starts shepherd as root, and root's shepherd starts a shepherd instance for a couple of users. My goal is to migrate from Debian to Guix.
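[A rough, untested sketch of the system-side piece j-r describes, using the paths from his example; it would still need to be added to the operating system's services, e.g. via `simple-service` on `shepherd-root-service-type`:]
```scheme
(use-modules (gnu services shepherd)
             (guix gexp))

(define james-shepherd-service
  (shepherd-service
   (documentation "Start james's own shepherd (and thus znc) at boot.")
   (provision '(james-shepherd))
   (requirement '(user-processes networking))
   ;; Fork off the user's shepherd, dropping privileges first.
   (start #~(make-forkexec-constructor
             '("/home/james/.guix-profile/bin/shepherd")
             #:user "james"
             #:group "james"))
   (stop #~(make-kill-destructor))))
```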