<tune>interesting that the versions look different than when I run "guix --version" as user
<tune>I'll just do a pull with the guix version I used to fix my user previously
<reepca>Hm... I want to be able to use derivation-path? in (guix database), but I don't want to pull in (guix store), since it would cause a cyclic dependency. But I can't move derivation-path? to (guix derivations), because (guix store) uses it and (guix derivations) uses (guix store), so that would also be a cyclic dependency...
<reepca>maybe I should just #:re-export it in (guix derivations) since I'll be using that anyway
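The workaround reepca settles on — exposing the predicate through the intermediate module so downstream code never imports the low-level module directly — can be sketched as follows. This is a Python analogy for Guile's `#:re-export` (simulated with in-memory modules), and the predicate `is_derivation_path` is a hypothetical stand-in, not a Guix API:

```python
import sys
import types

# Three in-memory modules mimic the module graph described above.
# "store" plays the role of (guix store) and defines the predicate.
store = types.ModuleType("store")
store.is_derivation_path = lambda p: p.endswith(".drv")
sys.modules["store"] = store

# "derivations" plays (guix derivations): it depends on "store" and
# re-exports the predicate under its own name -- the analogue of
# listing it in #:re-export in a define-module form.
derivations = types.ModuleType("derivations")
derivations.is_derivation_path = sys.modules["store"].is_derivation_path
sys.modules["derivations"] = derivations

# "database" plays (guix database): it imports only from "derivations",
# so no direct edge back to "store" (and no cycle) is introduced.
from derivations import is_derivation_path
```

The point is that the dependency arrow from the database layer goes only to the derivations layer, which it needed anyway.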
<reepca>I'm remembering why using git was a bit of a headache... rebasing really doesn't work well when you can't push with -f.
<reepca>civodul: I've started pushing my recent work to the guile-daemon branch on savannah. I realize now that I should have split it into separate commits, but I think the changes to register-items (transactional registration and registering derivation outputs when the deriver is registered) may be of interest to the master branch as well
<civodul>hi reepca! it's really great you're back working on this!
<civodul>sure, i think we should try to merge things piecemeal as much as possible
<civodul>i haven't looked at the branch yet, but do ping me when you think a specific change should be merged on master
<reepca>I take it said changes should be in isolated commits?
<g_bor>I used the following algorithm: I removed the derivations with no inputs, recursively. This way the number of layers equals the length of the longest path in the graph, which is 109, and at the top there is maven :)
<g_bor>For the next step I will need to know which module corresponds to each node.
<g_bor>I would like to do the following: list the modules in layer0, and create a module-layer0 module for each one that is present. Then add the layer0 packages to the module-layer0 modules. Then check whether the module graph is a DAG. If it is, add the layer1 packages and additional layer1 modules, and check again whether it is still a DAG. If not, freeze the previous modules and start over from layer1, and so on.
<htgoebel>g_bor: Are you working on a maven build-system?
<roptat>civodul, re osm, it's best to send a link to the node or a marker (on the right panel, there's a "share" tab where you can check a box to add the marker) because there are so many nodes on the map it's hard to find the right one ;)
<roptat>civodul, my train arrives at 7:50pm wednesday, and I'll probably have big luggage, so I'll drop by my hostel first
<civodul>roptat: ok i'll do that next time, i'm not an advanced osm user as you can see
<roptat>you could also test outside of a guix container first, by creating an empty maven project with "mvn archetype:generate" (which should be easier to build) and run maven with "mvn build --offline"
<roptat>the first thing it complains about is that it can't find maven-compiler-plugin (I think it used to complain about maven-resources-plugin first, before), and that is because it's not present in ~/.m2/repository/org/apache/maven/plugins/maven-compiler-plugin/3.8.0/maven-compiler-plugin-3.8.0.jar
<roptat>and it should contain the plugin.xml file + metadata for maven to understand that the file is valid
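The path roptat quotes follows Maven's standard local-repository layout (groupId dots become directories, then artifactId/version/artifactId-version.jar). A small helper that computes the expected location — the function name is mine, not a Maven API:

```python
from pathlib import Path

def m2_artifact_path(group_id, artifact_id, version,
                     repo=Path.home() / ".m2" / "repository"):
    """Return the local-repository path where Maven looks for a jar:
    <repo>/<groupId as dirs>/<artifactId>/<version>/
    <artifactId>-<version>.jar
    """
    return repo.joinpath(*group_id.split("."), artifact_id, version,
                         f"{artifact_id}-{version}.jar")

print(m2_artifact_path("org.apache.maven.plugins",
                       "maven-compiler-plugin", "3.8.0"))
```

For the plugin mentioned above this reproduces the
`.../org/apache/maven/plugins/maven-compiler-plugin/3.8.0/maven-compiler-plugin-3.8.0.jar` path that Maven complains is missing.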
<Copenhagen_Bram>The bootstrapping is cool. OriansJ tells me they're making it possible to bootstrap from a tiny program written in Hex all the way to GuixSD
<bavier`>Copenhagen_Bram: it would be cool if you'd like to run `guix challenge` and report any reproducibility issues, since you don't mind building locally ;)
<Copenhagen_Bram>But... imagine if you tried to install Linux From Scratch?? It would take so long if you compiled everything, even longer since you're doing it manually. I bet LFS involves downloading at least some binaries
<cbaines>Copenhagen_Bram, if you have more than one machine, you can set up offloading so that you can build packages across them
<Copenhagen_Bram>bavier`: could I run something like `guix system --no-substitutes --check init` to challenge every package that I install?
<rekado>if you’re not using substitutes you don’t need “--check” as you’ll have local builds for everything
<bavier`>Copenhagen_Bram: actually, there's a snippet in the manual, section 5.1, that 'builds all the available packages'
<kmicu>Copenhagen_Bram: keep in mind that compiling will take that long after each core update. Reproducible builds let us trust binaries; compiling sources yourself just to get the same binary is wasteful. (Unless you already have solar panels, then go ahead I guess ;)
<vagrantc>a very small, narrowly defined part of netbsd
<rekado>these reproducibility numbers are very confusing
<vagrantc>which is still awesome; having a reproducible core set is great progress and work!
<cbaines>reproducibility stats are always a bit slanted, as it's boring to list the conditions under which you tested the packages
<Copenhagen_Bram>So... should I keep compiling everything or should I go ahead and use substitutes to install, and check and challenge later?
<kmicu>Copenhagen_Bram: Don’t worry too much. Compiling is still orders of magnitude less problematic than mining cryptocurrencies with a useless proof of work. You at least check determinism in Guix packages. Just keep in mind you need to recompile almost everything on the next core update.
<cbaines>you don't say 80% reproducible when the filesystem order was randomised, the system date was altered, the number of cores was different, ...
<bavier`>a collection of reports from 'guix challenge' would be as real-world as we could get as far as reproducibility
<vagrantc>i literally earlier today was thinking about attempting to do some systematic reproducible builds testing for guix...
<bavier`>(and then the questions about trusting the stats would come in)
<OriansJ>rekado: with a standard for the bug reports so we can make it easy for people to find and know the status of those irreproducible issues