IRC channel logs

2023-05-23.log

<zimoun>hi!
<civodul>o/
<efraim>has anyone looked at the Nix HPC talk from FOSDEM in the Nix room?
<efraim> https://fosdem.org/2023/schedule/event/nix_and_nixos_playing_with_nix_in_hpc_environments/
<efraim>builds occur on another machine, then the build results are copied to the cluster (I assume to an NFS share)
<zimoun>efraim: not yet. :-)
<efraim>I haven't finished watching it yet, I'll probably watch it more than once
<efraim>If they can have a daemon that isn't running as root, then I wonder if all they would need to do is patch out any actual building
<efraim>would make profile hooks hard
<efraim>ACTION goes off to think about it
<civodul>efraim: in the cluster setups i'm aware of, we have guix-daemon running on a separate machine/VM and exporting /gnu/store and /var/guix over NFS
<civodul>sounds similar?
<civodul> https://guix.gnu.org/cookbook/en/html_node/Installing-Guix-on-a-Cluster.html
<efraim>that's what we're doing too
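[A minimal sketch of the setup described above, following the cookbook page civodul links; the head-node name "guixbuilder" and the exact export options are assumptions, not something stated in the conversation:

    # On the machine running guix-daemon (here assumed to be "guixbuilder"),
    # export the store and its metadata over NFS, e.g. in /etc/exports:
    #   /gnu/store    *(ro)
    #   /var/guix     *(rw, async)
    #   /var/log/guix *(ro)
    # On each cluster node, mount those exports at the same paths:
    mount -t nfs guixbuilder:/gnu/store /gnu/store
    mount -t nfs guixbuilder:/var/guix  /var/guix
]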
<civodul>ACTION watches
<civodul>i didn't get their affiliation? BSC?
<civodul>not clear to me why they bother with namespaces
<civodul>i mean it's nice to have, but sounds like a serious source of complexity here
<civodul>(they lack "guix shell -C", too :-))
<efraim>looks like bsc.es for the website
<efraim>I think the first part with the namespaces was for daemonless building
<efraim>s/daemonless building/building with a rootless daemon/
<civodul>hmm "rootless daemon" appears as future work in the conclusion
<civodul>my understanding is that they use separate namespaces to get isolated *run-time* environments
<efraim>one of the questions at the end mentioned that apparently the last major blocker for rootless daemons was merged
<civodul>to avoid interference
<civodul>they don't even mention Guix-HPC, that's a shame :-)
<efraim>I feel like the answer to that one was "so don't do that!" but people set all kinds of things in their .bashrc, so who knows what's loaded with LD_PRELOAD
<civodul>yeah
<civodul>hence "guix shell --check"
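[For reference, the two commands mentioned above, shown with a hypothetical manifest.scm; "-C" puts the environment in an isolated container, and "--check" warns when shell start-up files clobber environment variables:

    # Run the environment in an isolated container (separate namespaces):
    guix shell -C -m manifest.scm
    # Check whether ~/.bashrc and friends clobber the environment:
    guix shell --check -m manifest.scm
]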
<civodul>would be worth getting in touch with these folks
<efraim>do we have a package for Intel's compiler?
<efraim>looked like one of their examples used that
<civodul>the non-free channels have that
<civodul>we've done most of the things die-hard HPC folks want to do, i can tell you :-)
<efraim>I never managed to get cuda working
<civodul>rekado: just saw a "bad" suggestion from 'guix refresh': python-mpi4py: consider removing this input: openmpi
<civodul>here we'd need a 'refresh-extra-input' property
<rekado>oh, right, I didn’t even think of removals
<rekado>I ignore those a lot
<zimoun>civodul: yesterday, I said I was having a hard time with Guix. Again, just now: guix substitute: warning: try `--no-substitutes' if the problem persists.
<zimoun>I just ran “guix pull”, so now I am at 3f59fd6. Then I ran “guix shell -m manifest.scm”
<zimoun>Bang!
<zimoun>This error “guix substitute: error: connect*: Connection timed out” is appearing very often for me.
<zimoun>On various machines, on various network.
<zimoun>Well, the error seems to be triggered by “guix substitute: warning: while fetching https://ci.guix.gnu.org/nar/lzip/v1xbz7475bd61038cvlwwh4f961ncqlq-poppler-data-0.4.11: server is somewhat slow”
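[The failing sequence, reconstructed from the messages above, together with the workaround the warning itself suggests (building locally instead of fetching substitutes):

    guix pull                          # now at commit 3f59fd6
    guix shell -m manifest.scm         # fails: "connect*: Connection timed out"
    # As the warning suggests, retry without substitutes:
    guix shell --no-substitutes -m manifest.scm
]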
<civodul>zimoun: could you provide more info: guix-daemon store file name, full output, etc.
<civodul>preferably to bug-guix :-)
<zimoun>One of the issues is that they are sporadic.
<zimoun>Well, I will try to report with the info I have.
<civodul>cool, thanks!
<zimoun>Ah, are there some guix-daemon incompatibilities around?
<zimoun>I mean, an old guix-daemon could be the issue, no?
<civodul>yes, see https://issues.guix.gnu.org/63024#11
<civodul>since this is a bug in the daemon, fixes can only be deployed by updating the daemon
<zimoun>I thought this was about Guile, not the daemon.
<zimoun>anyway, updating guix-daemon does not change anything.
<civodul>you also have to restart it
<civodul>but anyway, send all the details to bug-guix :-)
<civodul>(there are two fixes: one in (guix ftp-client), and one in Guile proper)
<zimoun>hehe! Yeah, restart for sure.
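[A sketch of what "update and restart the daemon" looks like in practice; which variant applies depends on the system, which the log does not say:

    # On a foreign distro, the daemon typically comes from root's guix:
    sudo -i guix pull
    sudo systemctl restart guix-daemon   # systemd-based distros
    # On Guix System, the daemon is a Shepherd service:
    sudo herd restart guix-daemon
]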