<lfam>ZombieChicken: Can you send a message to firstname.lastname@example.org with the complete error messages and the command you used to trigger the error? Also the output of `guix --version` or a Git commit if you are working from Git
<ZombieChicken>I'm not working from git, and the error was in qemu, so it is kind of hard to get ahold of
<ng0>afaik it went into core-updates, which has been merged yesterday or sunday or something
<jmd>ng0: I've not updated libreboot that recently.
<jmd>civodul: At least half of that bug has been fixed I think.
<ng0>I may have to change my associated email address in May again. It looks like I can find no middle party to act as the address holder and lower the price I pay for the domain I use right now, so I might just let all but libertad.pw expire. Sorry for the .mailcap annoyance.
<ng0>hopefully one of the autonomous centres (AZ) here can act as the owning party, so I don't have to put my name and address out in the open in the worldwide address book. For me it's about the "name protection" feature, and that is 60 euro/year for one of my domains
<lfam>ng0: I think Chris Marusich fixed the problem in 3382bfe9ea199086134d90e45e3d759aefed3dcf (system: Avoid using device paths in <menu-entry> device field.). At least, he made that change based on your report
<lfam>Other related kernels, like the BSDs, sort of combine /dev/random and /dev/urandom by trusting the CSPRNG enough to only block until the entropy measurement is high enough, and then they never block again. There is no difference in behaviour between /dev/random and /dev/urandom on those systems, AFAIK
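(As a quick illustration of the behaviour being discussed: on a seeded Linux system, /dev/urandom never blocks, so a fixed-size read returns immediately. This is a sketch using standard Linux device names, not a command from the log.)

```shell
# Read 16 bytes from the non-blocking urandom device and print them as hex.
# On a seeded system this returns immediately with exactly 16 bytes.
head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'; echo
```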
<ZombieChicken>lfam: Yeah. Some of the better CSPRNGs out there (like Fortuna) will probably never see the light of day in the Linux kernel because they don't want to replace the current system (which, iirc, isn't exactly /that/ good)
<lfam>Is there some analysis of the current Linux CSPRNG that explains why it's not that good?
<alezost>ZombieChicken: yeah, I would also like to run X as user, but IIUC it can be done only using systemd, dunno
<ZombieChicken>Well, that says to me that it should be possible, and at worst needs some apulse-style sanity layer
<cbaines>davexunit, I'm using source just with a file already, but doing that for many packages would mean having to write some scripts around Guix to manage cloning and checking out the appropriate revisions
<cbaines>I might end up doing that, but I wanted to see if I could do this a bit more elegantly
<davexunit>cbaines: either I misunderstood you, or you are greatly confused.
<alezost>ZombieChicken: btw, I don't setuid the X server, I run it with sudo instead (my /etc/sudoers allows my user to do it without a password); I'm not sure whether that counts as "better" than setuid, though
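(For reference, a sudoers entry in the spirit alezost describes might look like the line below. The username and X binary path are placeholders, not taken from the log; edit with `visudo`.)

```
alice ALL=(root) NOPASSWD: /run/current-system/profile/bin/X
```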
<lfam>Those bugs are remote code execution on both Git clients and servers
<cbaines>civodul, well, where I want to go next is to specify the commits in some continuous integration system, and have the tests run against those
<davexunit>if you're trying to dodge the hash check, it's not going to happen.
<davexunit>any derivation that has network access must know its hash in advance.
<cbaines>and the only two options that I currently know of to make that work in an automated way are 1: doing the clone and checkout, and then using local-file or 2: somehow computing the hashes, and then using those with git-fetch
<civodul>cbaines: in that case you'd need to inject source like --with-source does
<civodul>so either you use --with-source, if doing that at the command-line level is ok
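(A sketch of the two options cbaines lists, assuming a local git checkout at ./my-checkout and a package called my-package; both names are placeholders. `guix hash -rx` computes the sha256 that git-fetch expects, and `--with-source` injects a source tree at the command line.)

```shell
# Option 2: compute the recursive sha256 of a checkout (excluding .git),
# suitable for the hash field of a git-fetch origin:
guix hash -rx ./my-checkout

# Or, per civodul's suggestion, inject the source directly when building:
guix build my-package --with-source=my-package=./my-checkout
```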