<apteryx>any idea how I could synchronize the script so that it doesn't try to call committer.scm while an edited module is in the process of being saved?
<apteryx>hmm, it seems the file update is supposed to be atomic, so if that's true 'edit-expression' shouldn't need to be manually synchronized I guess
<apteryx>perhaps it's the source-location-map that goes stale
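A minimal Guile sketch of the atomic-update pattern apteryx is describing (an assumption about the technique, not 'edit-expression''s actual implementation): write the new contents to a temporary file, then rename it over the original, so readers never observe a half-written file.

```scheme
;; Sketch: atomic file replacement via write-to-temp + rename.
;; The helper name and .tmp suffix are hypothetical.
(define (atomic-write file write-contents!)
  (let ((tmp (string-append file ".tmp")))
    ;; Write the full new contents to a sibling temporary file...
    (call-with-output-file tmp write-contents!)
    ;; ...then atomically replace the original (rename(2) semantics).
    (rename-file tmp file)))
```

If the file really is replaced atomically like this, concurrent readers see either the old or the new version, which is why any staleness would more plausibly come from a cached source-location-map than from a torn file.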
<peterpolidoro>I am trying to import a pypi package and it says it needs both python-pyqt5 and python-qtpy and that python-pyqt5 does not exist. Since the guix python-pyqt5 package does not seem to exist, does that mean that only python-qtpy is really needed?
<jpoiret>peterpolidoro: if it's not in guix, you'll need to also import it
<peterpolidoro>I am assuming that is a package that lots of people might need, do you think it is not already included in guix because of licensing issues or something?
<bjc>meena: guix offers significant advantages for some people over traditional unix methodology, but it's not for everyone
<jpoiret>i'm just worried it might become an additional maintenance burden when upstream rust tooling changes significantly
<jpoiret>meena: the thing is, when building haskell packages manually, you'd also need the newer versions of the dependencies; it's just that haskell has its own tooling for that, which we can't use on guix unfortunately
<apteryx>oh, another bug with package-definition-location; try it on python2-pytest-warnings.
<apteryx>it gives the line just below the (define-public ...) binding
<guixsd-n00b>Hi! Is there any way to introduce symbol definitions from a context into a gexp? I know there is 'with-imported-modules, but I was hoping that there was an alternative for simple things, like using a variable; maybe something like '(with-context-symbols symbol-list gexp-body ...)
<bjc>^- i would be interested in an answer to this, as well
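For simple values (as opposed to whole modules), plain gexp interpolation may already cover this: `#$` (ungexp) splices the value of a host-side variable into the gexp's build-side code. A minimal sketch, assuming that is all guixsd-n00b needs:

```scheme
(use-modules (guix gexp))

;; A host-side variable...
(define greeting "hello from the host side")

;; ...spliced into build-side code with #$ (ungexp).  No
;; with-imported-modules is needed for simple data like strings,
;; numbers, or lists.
(define exp
  #~(display #$greeting))
```

with-imported-modules remains necessary when the build side must *call* code from Guile modules, rather than merely receive values.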
<civodul>apteryx: (package-definition-location python2-pytest-warnings) says line 2403 for me, which looks correct
<apteryx>sneek: later tell civodul weird, here also now
<maximed>peterpolidoro: Submit the package definition to firstname.lastname@example.org, even if only as a [WIP PATCH]. When it is applied to guix proper, the issue will be closed by whoever applied it (unless they forgot to do that). You will be notified by e-mail (by ‘Debbugs’) that the issue has been closed.
<maximed>(basically, notification instead of polling)
<bjc>lilyp: it's a work-of-art, unfortunately, but as i recall i had to swap the pulse daemon to running single-instance as root, then allow any local connections (ie, over the unix socket), and finally bind-mount the /run/pulse/native socket over
<bjc>part of the issue is that at least some of that work was so i could get pulse audio also working with qemu's sound emulation, and i can't remember which was for what
<lilyp>yeah, I think we need better solutions than running pulse as root :)
<bjc>i agree. the current situation is very frustrating, and it seems like all the powers-that-be have agreed there's no good reason to run your audio server as root
<bjc>pipewire is no better. it makes me wonder if i should try just going alsa or something
<maximed>... though now rust-hashbrown fails to build
<lechner>bjc: it seems that user accounts can also be managed directly in Guix
<bjc>yes, they can. but they just use /etc/passwd et al
<bjc>i'm asking because, at least on the unix side of the pond, if you're using ldap for user management you're doing that on purpose for a reason
<unmatched-paren>maximed: is antioxidant-build-system a working name, or will it be renamed to rust-build-system or something once it's ready? asking because, although i like it, it seems a bit out of place among the other build systems that are just `(language-name|build-system-name)-build-system`
<civodul>perhaps worth pushing on a branch and having it built on ci.guix?
<civodul>or otherwise push on staging if you're confident?
<rekado>apteryx: we now have 100TB on the SAN and another 10TB for SSD-backed storage; these disks currently don’t show up on the OS yet (or at least not correctly), but we could play with those as well.
<rekado>I mean: in case booting from the local SSD array with btrfs just won’t work we could put ext4 on the SAN and set up the new system there.
<rekado>then move the SSDs to node 129 and reboot as often as you want :)
<apteryx>civodul: it's already been built on the ci; see wip-ipython-polyglossia :-)