IRC channel logs



<zimoun>rekado_: cool about GWL and your colleagues’ feedback!
<zimoun>about the error, is it usual error handling with Scheme or does Wisp shadow things?
<civodul>just did my (remote) Guix hands-on
<civodul>now looking at a talk saying how Spack is popular, able to compile things, "not sacrificing performance because of pre-built binaries", and all
<civodul>all the talking points!
<zimoun>yeah I am listening…
<zimoun>how was this morning’s training?
<civodul>i think it was good!
<civodul>but it's very frustrating to do that remotely
<civodul>you never know what people are up to
<civodul>speaking of Spack... how did we fail this badly?
<zimoun>cool! Difficulties? I have read the chat but nothing meaningful aside from the usual locale issues
<civodul>yeah locales, but not too bad
<civodul>people noting that "guix pull" is kinda slow (esp. the first time)
<zimoun>maybe by advising colleagues to use Spack when they were already working on something similar :-) Joke aside, it is Python and not so different from modulefiles (an improvement, somehow)
<civodul>heh :-)
<civodul>dunno but there's a lesson to be learned i guess
<zimoun>well, IMHO, there is mainly a kind of “Worse is better” thing behind it.
<zimoun>So far, I fail to see how Spack could help with Reproducible Science.
<civodul>it doesn't, that's for sure :-)
<efraim>clearly the answer is to package guix for spack
<civodul>true :-)
<efraim>or spack for guix
<civodul>the spec thing to handle variants is really nice
<zimoun>“spack install guix; guix install gcc; spack compiler add;” :-)
<civodul>though i think package transformation options do well now (more verbose, less cryptic)
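For context, the contrast being drawn is between Spack’s spec syntax and Guix’s package transformation options. A hypothetical side-by-side (package names and versions are illustrative, not from the log):

```shell
# Spack: variants (+mpi), compiler (%gcc), and dependency choices (^mpich)
# are packed into one terse spec string:
spack install hdf5 +mpi %gcc@10.2.0 ^mpich

# Guix: similar tweaks are spelled out as transformation options --
# more verbose, but each flag states what it does:
guix install hdf5 --with-input=openmpi=mpich
guix build hdf5 --with-c-toolchain=hdf5=gcc-toolchain@10
```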
<civodul>heh :-)
<zimoun>what is an “environnement propre” (a “clean environment”)? :-)
<civodul>also: the myth of "vendor-optimized installations"
<zimoun>in Konrad’s 3 boxes: My Apps, Colleagues’ Stuff, Environment. Because Spack does not consider the “system” (glibc, etc.), it is not addressing the root of the issue, as Konrad explained yesterday. Or maybe I am missing something.
<civodul>yeah they made it clear that Spack doesn't address reproducibility issues by not taking into account "system dependencies", as they call it
<civodul>they state 3 reasons for choosing Spack: (1) Python, (2) "HPC-oriented", and (3) modules
<rekado_>civodul: what do you mean by “how did we fail this badly?”?
<rekado_>failed to get our message across?
<civodul>rekado_: these people chose Spack one or two years ago and dismiss Guix altogether
*civodul frustrated
<civodul>that said, it's funny and insightful that in the same training sessions there are very different viewpoints
<efraim>does spack have the intel compiler and non-root installs?
<civodul>of course :-)
<efraim>not much we can do about the intel compiler
<zimoun>“they” is maybe the biggest French computing center
<rekado_>oh, so this is an internal workshop/talk, part of the same event where you just had your session on Guix?
<civodul>i also had a hands-on session this morning
<civodul>immediately followed by the Spack talk, which ignores what came before entirely :-)
<rekado_>are these really the same audiences?
<civodul>yeah, it's mainly a sysadmin audience
<rekado_>the same group is expected to be using Guix *and* Spack at the same site?
<civodul>i'm not sure what the intent is!
<civodul>i guess they wanted to expose people to all these tools
<zimoun>it also ignores the first talk, which presented the challenges of reproducibility
<civodul>so there's Guix, Spack, Singularity, k8s even later this week
<civodul>i guess they wanted to show tools that deal with deployment in general
<civodul>how's everything at MDC these days?
<rekado_>nothing exciting; Guix is pretty well established and I’m getting the occasional request for new packages, but it has slowed down a bit.
<rekado_>it’s still terribly slow due to NFS, though.
<zimoun>the theme of the training is reproducibility in the HPC world. Weird to sidestep reproducibility…
<rekado_>but I think I managed to teach patience, which is a skill that will outlive Guix.
<civodul>heh :-)
<civodul>is it the guix command that's terribly slow or things installed via Guix?
<rekado_>I really wanted to let writes to /gnu/store happen locally, but this complicates everything and requires a more robust setup.
<rekado_>it’s any access to /gnu/store that’s slow
<rekado_>because /gnu/store lives on a redundant NFS share.
<rekado_>so any fs action that guix-daemon performs is serialized
<civodul>oh, i remember you tried to tweak the NFS mount options in the past
<civodul>guix-daemon accesses the store over NFS as well?
<rekado_>that was a design decision early on, because in 2014 we weren’t sure if we actually wanted to go all in with Guix
<civodul>what i've seen/recommended so far is having guix-daemon live on the machine that exports the store
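The layout civodul recommends — guix-daemon running on the machine that exports /gnu/store, with cluster nodes talking to it remotely — can be sketched roughly like this (the host name `fileserver` is hypothetical):

```shell
# On the file server: export /gnu/store over NFS, run the daemon,
# and have it listen on TCP so the nodes can reach it.
guix-daemon --build-users-group=guixbuild --listen=0.0.0.0 &

# On each cluster node: mount the store read-only from the server
# and point the guix client at the remote daemon.
mount -o ro fileserver:/gnu/store /gnu/store
export GUIX_DAEMON_SOCKET=guix://fileserver
guix build hello    # RPCs and store writes happen on the file server
```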
<civodul>ah ok
<civodul>in 2014, you're a real pioneer!
<civodul>you'd deserve the Guix User Award
<rekado_>the original idea was to sneak by the admins and present /gnu/store as just another shared file system that they can mount. We didn’t want to host a file server that would be permanently mounted on the cluster nodes.
<rekado_>hehe :)
<rekado_>I still have those concerns, because right now the worst case is that people cannot *change* their profiles, but they can always access them.
<rekado_>if we had a separate file server (hosting guix-daemon and /gnu/store) then any reboot or downtime would mean that users cannot even *access* their profiles.
<rekado_>using a file server that *already* hosts their home directories on the cluster is an easy way to tie uptime of Guix to uptime of the cluster.
<rekado_>if that file server goes down nobody can compute anything anyway
<rekado_>so I’ve been very cautious about moving away from the current setup
<rekado_>if we set up a separate file server that runs guix-daemon it would have to be a redundant setup
<rekado_>and this again complicates things
<civodul>hmm true
<rekado_>two VMs on two independent VM hosts, active/passive configuration, live migration, etc
<rekado_>it’s messy
<civodul>OTOH, i would think it's OK to tie /gnu/store availability to "Guix head node" availability
<rekado_>and NFS connections don’t survive live migration, so it’s extra complicated
<rekado_>it is becoming a more and more reasonable choice, especially since non-HPC servers are mounting /gnu/store as well.
<rekado_>I’d really like to use Guix-generated VMs for this.
<zimoun>civodul: what?! Conda+Spack+manual install?! By one of the French leaders in HPC machines… Frustrated is a weak word.
<rekado_>I should work on this during the summer
<rekado_>Conda is… hmm… such an odd thing to see in HPC.
<efraim>we have 12 physical machines, guix installed on the head and exported to the other nodes. want to change the other nodes to Guix System but keep the store on the head node
<rekado_>zimoun: conda is really common, but it’s also so very quirky and it even has a habit of writing to user files that would not be acceptable for any other tool.
<rekado_>Conda writes into the user’s ~/.bashrc
<rekado_>just like that
<rekado_>doesn’t ask for permission
<rekado_>one of my colleagues got very angry because it silently broke his carefully designed ~/.bashrc
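For reference, the ~/.bashrc modification being complained about: `conda init bash` appends a managed block roughly of this shape, without prompting (exact contents vary by Conda version; the path is illustrative):

```shell
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/user/miniconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    export PATH="/home/user/miniconda3/bin:$PATH"
fi
unset __conda_setup
# <<< conda initialize <<<
```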
<zimoun>Before switching to Guix, I was a heavy user of Conda. Once you understand what Guix is, Conda looks so poor in comparison.
<rekado_>efraim: what cluster scheduler do you use?
***rekado_ is now known as rekado
<rekado>efraim: ah, then you have a good chance to build the whole cluster with Guix System.
<rekado>all with “guix deploy”, too
<civodul>oh we have a slurm service now?
<efraim>I have to test it still
<civodul>that'll be nice
<efraim>munge and slurm
<efraim>I wrote one for lizardfs too, but I made entries for all the options so I need to cut out most of it and put it back in as "and here's your free text field" for everything else
<civodul>looks like you did all the hard work already, neat!
***zimoun` is now known as zimoun
<rekado>the GWL looks up packages via an inferior Guix now, and … it seems to just work.
<civodul>rekado: yay!
<rekado>I really expected this to be much more difficult
<rekado>I’m now in the process of converting all these ad-hoc (format (current-error-port) …) and (error …) expressions to srfi-35 conditions; I can see the light at the end of the tunnel
<rekado>(I’m testing all of this with pigx-rnaseq.w, which still had a bunch of previously undetected errors)
<zimoun>rekado: cool!
<zimoun>wow! your presentation at FOSDEM will just rock! :-)
<rekado>I hope I can finish pigx-rnaseq.w by then.
<rekado>previously I had only worked on much smaller example workflows — and only when working on PiGx did I realize that we couldn’t express certain things in GWL at all.
<rekado>one thing that I find confusing and a little ugly is that process templates must be parameterized before you can visualize the graph
<zimoun>you would like a graph based on symbols?
<rekado>the generated graph for pigx-rnaseq.w is not pretty
<rekado>and it’s not the same for different sample sheets
<rekado>so you can’t get a good overview on what it does in general
*rekado -> afk
<civodul>rekado: thumbs up on making all these changes!
<civodul>zimoun: problem is that presentations are usually just the tip of the iceberg :-)
<zimoun>A Medical Doctor in my lab is using Guix on their personal laptop! \o/ That balances my day after the Spack morning :-)
<civodul>heh, that's quite something :-)
<civodul>(a "physician" i guess?)
<zimoun>yeah, a physician, but here the world is cut in two: the PhDs and the MDs. :-)