<rekado_>zimoun: thanks for sharing the article on the supercooled water controversy!
<zimoun>rekado_: thanks go to Konrad, from their talk on Monday. :-)
<civodul>zimoun: hey! did you attend the container talk this morning? how was it?
<rekado_>zimoun: heh, I read some of the comments and found one to be really well-written … and then I saw it was Konrad’s comment :)
<zimoun>rekado_: oh, I have not seen the comments… going to read them.
<zimoun>civodul: yeah, I did. Late as usual, but I was there. :-)
<zimoun>not like Spack ;-) I mean, it explained the pros/cons of containers. Conclusion: Docker sucks for HPC but Singularity is ok-ish
<civodul>"Conclusion: containers don't guarantee reproducibility... but they have other qualities" lol
<civodul>i find it problematic that many illustrations come from the marketing depts of these companies
<civodul>suggests the talking points come from there as well
<zimoun>the good point is that the presenter explained that containers are not the holy grail for reproducibility
<zimoun>well, they explained security issues etc.
<zimoun>well, from my understanding, it was an “honest” presentation of containers in the scope of reproducible science targeting HPC.
<zimoun>they said that it is poor, as Konrad explained on Monday, etc.
<zimoun>On the other hand, Guix cannot compete with the “ready-to-use packages” of TensorFlow and PyTorch & co. Even with more manpower, some beasts are still ugly beasts…
<civodul>yeah, not having tensorflow, pytorch, & co is more and more of a problem
<zimoun>Well, Debian folks have a hard time with “machine learning stuff” too. I am not reading their mailing lists carefully (debian-med and debian-ai, e.g., https://lists.debian.org/debian-ai/) but their recent messages seem to underline that TensorFlow and PyTorch are ugly beasts.
<rekado_>for us the biggest obstacle to tensorflow is bazel
<rekado_>there are no fundamental problems with packaging bazel; it’s just a lot of busy work for little gain in itself.
<rekado_>by that I mean that we would override the inputs anyway and not let bazel download things, so using bazel to build tensorflow is merely an inconvenient convenience.
<rekado_>we should have a hackathon to map out the open problems and chip away at them.
<rekado_>another hackathon may be needed for some CRAN stragglers…
<civodul>rekado_: we could definitely plan an on-line ML packaging hackathon, like the one zimoun organized a while back
<zimoun>yeah, it is doable… maybe in June. :-)
<zimoun>rekado_: are you aware of MRAN, a CRAN snapshot archive run by Microsoft?
<rekado_>zimoun: yes, I know that it exists, but I hadn’t considered using it to download CRAN source archives.
<zimoun>it could be nice to add support for it as a fallback
*rekado_ builds all CRAN updates now
<zimoun>an example is r-foreign at commit d81fb2a. Hash mismatch, so the time machine is broken.
<rekado_>do you have an idea how to fix missing source code in the time machine?
<zimoun>not yet. :-) I am reading the source code
<rekado_>can we perhaps provide a hash-indexed alist of *current* URLs that satisfy the vanished sources?
<rekado_>the *current* Guix would inject that as a fallback to download the requested sources.
<rekado_>this way we could retroactively fix old versions of Guix
<zimoun>an alist providing in-place upstream replacements. yeah, maybe
<rekado_>hmm, trying to run pigx-rnaseq.w and … it’s not working at a really basic level.
<rekado_>looks like the profile isn’t actually built.
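A rough sketch of the hash-indexed fallback alist rekado_ proposes above, assuming hypothetical names, a placeholder hash, and an example mirror URL; the real mechanism would have to be wired into Guix's download code rather than live on its own like this:

    ;; All names here are hypothetical; the hash and URL are placeholders.
    (define %vanished-source-fallbacks
      ;; Map the expected sha256 (nix-base32) of a source tarball to a
      ;; *current* URL that still serves exactly those bytes.
      '(("placeholder-base32-hash"
         . "https://example.org/mirror/some-vanished-source.tar.gz")))

    (define (fallback-url hash)
      "Return a replacement URL for HASH, or #f when none is known."
      (assoc-ref %vanished-source-fallbacks hash))

    ;; (fallback-url "placeholder-base32-hash")
    ;; => "https://example.org/mirror/some-vanished-source.tar.gz"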
<rekado_>or rather: I get one profile that’s correct, and a reference to another one that doesn’t actually exist
<zimoun>there are 2 issues: fixing the current past (as you said, retroactively) and fixing the future past (falling back to MRAN in addition to SWH)
<rekado_>this stuff is hard to test automatically
<rekado_>once my big block of CRAN updates is done, I’ll look into adding MRAN support.
<rekado_>zimoun: yes, I’ll get on to it after the CRAN updates :)
<rekado_>and I also want to take a look at your recursive importer patches
<zimoun>the recursive importer patches just change the value returned by <foo>->guix-name so that it is always (values #f ’()) instead of #f when failing. Otherwise, some corner cases are broken. They are trivial patches. :-)
<zimoun>about your profile, it looks like lfam’s recent issue, see bug#45992
<zimoun>civodul: are you following the TP (hands-on) sessions from the training?
***rekado_ is now known as rekado
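A toy illustration of the return-value convention zimoun describes for the recursive importer patches, using a hypothetical "foo" importer and made-up data; the point is only that a failed lookup yields (values #f '()) rather than a bare #f, so recursive callers that expect two values keep working:

    (use-modules (ice-9 match))

    ;; Hypothetical data and importer name; only the return convention matters.
    (define %toy-metadata
      '(("bar" . ("baz" "qux"))))        ;package name -> dependency names

    (define (foo->guix-name name)
      ;; Return a Guix-style name for NAME plus its dependencies, or
      ;; two values, #f and '(), when NAME is unknown.
      (match (assoc-ref %toy-metadata name)
        (#f   (values #f '()))           ;failure: two values, never a bare #f
        (deps (values (string-append "toy-" name) deps))))

    ;; (foo->guix-name "bar")     => "toy-bar" and ("baz" "qux")
    ;; (foo->guix-name "missing") => #f and ()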
<rekado>that’s also the source of the profile bug I encountered
<rekado>but somewhere else a profile is referenced that hasn’t even been built
<rekado>I’m hoping that most of this code is obsolete and could be replaced with computed-file
<civodul>rekado: tip of the day: did you know there’s now a lowerable <profile> record type? :-)
<civodul>at first sight, it could be useful here
<civodul>would allow you to de-monadify this code
<civodul>but then, you prolly shouldn’t call build-derivations right here; the result can be GC’d right away
<rekado>all I remember is that I was trying to get everything built and return a launcher script that then uses those things.
<rekado>but I barely understand how it was supposed to work
<rekado>AFAIU I’ll still need “run-with-store” and “mlet %store-monad” to actually build the derivation.
<civodul>no, you could use 'build-derivations', which is non-monadic
<civodul>but in general, 'build-derivations' should be the last thing the program does
*rekado has left the comfort zone behind
<civodul>in between you should only pass file-like objects around
<civodul>if the “gwl” command returns a script, there should also be an option to register a GC root
<rekado>so, now that the profile-compiler exists, I don’t need to call profile-derivation but can just embed a profile directly in a gexp?
<rekado>I’m feeling especially dense right now…
<rekado>“computed-file” is the declarative counterpart of gexp->derivation
<rekado>but … how can I pass it to build-derivations, which expects … well, a derivation?
*rekado thinks it’s done via lower-object
<zimoun`>oh, I don’t see how a derivation can be built without the monad… Need to read more.
<rekado>zimoun: you’ll be able to see it in process->script later
<rekado>the basics already work, but I’m now working on adding all the other features back in
<rekado>for containers I need to get the closure of the script, so that all modules are mapped into the container
<rekado>it’s a little unfortunate that I have to provide this manually; maybe I can find a shortcut
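A minimal sketch of the pattern discussed above, assuming a throwaway computed-file in place of the actual GWL script: the file-like object is passed around declaratively, lower-object turns it into a <derivation>, and build-derivations runs only as the program's last step.

    (use-modules (guix store)
                 (guix gexp)
                 (guix derivations))

    ;; Hypothetical stand-in for the workflow script; any file-like object works.
    (define script
      (computed-file "toy-script"
                     #~(call-with-output-file #$output
                         (lambda (port)
                           (display "#!/bin/sh\necho hello\n" port)))))

    (with-store store
      ;; lower-object is monadic, but run-with-store turns its result into a
      ;; plain <derivation> that non-monadic code can use.
      (let ((drv (run-with-store store (lower-object script))))
        ;; Building is the very last step, as suggested above.
        (build-derivations store (list drv))
        (format #t "built ~a~%" (derivation->output-path drv))))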