***jonsger1 is now known as jonsger
***apteryx is now known as Guest30242
***apteryx_ is now known as apteryx
<wingo>civodul: did compile times improve for guix, also?
<civodul>wingo: until now we'd compile everything with -O3, except packages with -O1
<civodul>BTW, for packages, i was thinking it should be #:cps? #f #:peval? #t
<civodul>because without peval the "macro writer's bill of rights" is not honored
<wingo>yeah tricky, you also want #:resolve-primitives? #t in that case
<wingo>civodul: i guess my real question is, what effect does the baseline compiler have on the time to compile packages
<civodul>wingo: for all of gnu/packages/*.scm?
<civodul>wingo: from 6m43s to 1m56s (-O0 + peval in both cases)
<civodul>the thing is that the load phase remains present, and it's sequential and all
<wingo>so that time includes load time?
<civodul>yes, the compiler itself is more than 3x faster
<wingo>so it gives you some breathing room, you can double the package count now ;)
<civodul>and hopefully by then CPUs will be twice as fast...? :-)
<wingo>i wonder what the load time distribution is for any given module
<wingo>it could be that with parallel compilation, it could be reasonable to just make -jN on the individual modules
<wingo>maybe there are reproducibility concerns (gensym numbers etc) that prevent you from doing that
<civodul>the load bit is annoying, it doesn't feel right
<wingo>yeah. in web browsers they do the unified build thing, for this reason: partition M C++ files into N "unified" files, to amortize the cost of loading headers
<wingo>works well but can occasionally cause errors: if file A includes B and C uses B but forgets to include it, sometimes there can be no error
<civodul>i was about to say "but C++ headers are not full source code", but i guess that's not quite true these days
<wingo>i think for that reason upstream chromium switched back away from unified builds, since they have enough parallelism on their build farm anyway
<wingo>yeah, C++ headers are loads of code...
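[A minimal sketch of driving Guile's compiler with explicit options, as in the exchange above. `#:optimization-level` is a documented keyword of `compile`/`compile-file`; the fine-grained flags (`#:cps? #f #:peval? #t #:resolve-primitives? #t`) are spelled exactly as in the conversation and their availability may vary across Guile versions.]

```scheme
;; Sketch: compiling with explicit options from the REPL.
;; -O1 roughly corresponds to "baseline compiler + partial evaluation".
(use-modules (system base compile))

(define result
  (compile '(+ 1 2) #:to 'value #:optimization-level 1))
(display result)
(newline)

;; For whole files, something along these lines (untested; the #:opts
;; flag names are taken from the conversation above):
;; (compile-file "gnu/packages/foo.scm"
;;               #:opts '(#:cps? #f #:peval? #t #:resolve-primitives? #t))
```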
<wingo>civodul: you could probably run some experiments to see what the result would be doing one process per module: compute a table of load times for each module with no other modules compiled, add a minimal time for the compile itself, divide by cores
<wingo>would be a worst-case; later compiles could use results from earlier ones
<wingo>assuming that no individual load would be longer than the average tho
<wingo>perhaps that's not a good assumption
<wingo>*than the sum of load+compile times divided by core count
<civodul>with R6-style phases we could determine when to load at all
<wingo>for each module, no compiled files
<wingo>sort from quick to slow; store that file in source code. update it occasionally
<wingo>compile the files in order from slow to fast. approximates topological sort!
<wingo>that way you get the maximum benefit of using previously compiled modules
<wingo>sorry, i meant compile them in order from fast to slow
<civodul>problem is, every time we build N modules, we first load those N modules, then compile them
<wingo>i am saying, load and compile them one by one in sorted order
<wingo>you can also do it compiling N by N in sorted order
<wingo>assuming there will be some common load time among the packages that can amortize the cost, similar to what you do now, but without being O(n) in module count
<civodul>yes, "guix pull" does N by N, but for packages, the value of N is high
<civodul>the gnu/packages/*.scm set is split into two parts
<wingo>an automatic sorting by load time would be i guess the change
<wingo>to approximate a module graph sort
***rekado_ is now known as rekado
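[A sketch of wingo's measure-and-sort idea: time each module's load, then compile in order from fast to slow. The module list here is purely illustrative; a real harness would time each load in a fresh process so earlier loads don't skew later measurements.]

```scheme
;; Time how long (resolve-module NAME) takes, in seconds, then sort the
;; module list fast-to-slow to get a compile order that approximates a
;; topological sort of the module graph.
(use-modules (srfi srfi-1))

(define (load-time module-name)
  (let ((start (get-internal-real-time)))
    (resolve-module module-name)
    (exact->inexact
     (/ (- (get-internal-real-time) start)
        internal-time-units-per-second))))

;; Example modules only; a real run would use gnu/packages/*.scm names.
(define modules '((ice-9 match) (srfi srfi-1) (ice-9 regex)))

;; Fast-to-slow order, as wingo corrected himself to say:
(define compile-order
  (map car
       (sort (map (lambda (m) (cons m (load-time m))) modules)
             (lambda (a b) (< (cdr a) (cdr b))))))

(display compile-order)
(newline)
```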
<chrislck>{appropriate-time} Greetings, sniped dsmith-work!
<ATuin>hi, how can i run the unit tests in a file using srfi-64?
<ATuin>manumanumanu: i made it a module, do i need a main function?
<ATuin>or is it better not to have it as a module?
<ATuin>one more question, how do you normally test functions that are not exported?
<ATuin>or do you just test the public interface?
<ATuin>since my tests use my module i can not reach the internal functions i have there, so far the only solution i have found is to make those public or just test them indirectly via the public interface
<rekado>I use @@ to access procedures that are not exported
<rekado>it’s probably better to just test exported procedures, but I like it this way
<ATuin>being honest I like that way as well, so if @@ does the trick I will use it
<ATuin>nice, thanks. that's enough I guess
<ATuin>now i need to see if i can keep my unit tests in a module
<manumanumanu>A more general question: does anybody really like srfi-64?
<manumanumanu>are there any options? Could we have a neat way to have tests inline?
<ATuin>first time I use it, seems to do its job but it's a bit annoying
<ATuin>inline testing would be nice indeed
<rekado>manumanumanu: I’m okay with SRFI-64
<manumanumanu>ATuin: if you don't like the (@@ ...) stuff, you could have your library code in its own .scm file and include it into both a test file and a module file
<ATuin>I have that setup now: mod.scm and test-mod.scm
<ATuin>in test-mod.scm I do (use-modules (path to my mod))
<ATuin>but of course I can only access the public definitions
<ATuin>anyway I'm still learning and getting used to guile
<manumanumanu>rekado: I think I mostly dislike srfi-64 because I never learned to write my own test runners, and because when I found the variable you could set! to stop it writing files, I was on guile3, which had declarative modules (and a regression in srfi-64 where set!ting that variable wouldn't do anything).
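[A minimal SRFI-64 test file along the lines discussed above. The procedures are stand-ins defined inline; in a real project the private helper would live in a module and be reached with rekado's `(@@ ...)` trick, where `(my mod)` is a hypothetical module name.]

```scheme
;; SRFI-64 ships with Guile, as ATuin notes; run with: guile test-mod.scm
(use-modules (srfi srfi-64))

(define (double x) (* 2 x))          ; stand-in for an exported procedure
(define (internal-helper x) (+ x 1)) ; stand-in for a private helper

(test-begin "mod")
(test-equal "exported procedure" 10 (double 5))
;; In a real module this would be:
;;   (test-equal 4 ((@@ (my mod) internal-helper) 3))
(test-equal "private helper" 4 (internal-helper 3))
(test-end "mod")
```

The default test runner prints a pass/fail summary at `test-end` and writes a `mod.log` file; writing a custom runner (the part manumanumanu found awkward) is only needed to change that behavior.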
<rekado>the first testing framework I used in Guile was ggspec
<ATuin>rekado: I looked at it but I wasn't sure whether what it provides is worth an extra dependency; SRFI-64 seems to do its job and it's already included with guile