IRC channel logs
2023-06-20.log
<rekado>where Conda packages are built depends on the channel
<rekado>bioconda, for example, builds packages in a well-known Docker container,
<rekado>whereas other channels might not control the environment at all
<rekado>Conda package definitions don’t capture all inputs, and there’s no hard rule about what should be an input and what can be assumed to be available on the target machine.
<rekado>we’ve often observed that Conda packages fail to run on RHEL, because the glibc there is older.
<rekado>Conda channels provide *binaries*, and they usually have big archives
<rekado>so selection of binaries is done by version string
<rekado>a dependency solver tries to satisfy requirements that are derived from user inputs
<rekado>this solver is known to fail often enough to be annoying, especially in larger environments
<rekado>there are other implementations that can reuse the archived binaries; some of them don’t use solvers
<rekado>PurpleSym: I wonder how we can make the disappearance of the leibniz psychology substitute server less disruptive to users of guix-science.
<rekado>e.g. we could get the existing substitute cache and host it on ci.guix.gnu.org
<PurpleSym>rekado: If we trust its binaries, why not? Do we have tools to mirror an entire substitute server?
<PurpleSym>(I don’t have access to the server anymore, so rsync won’t work.)
<rekado>I don’t think we have specialized tools for that
<rekado>once the rstudio upgrades are sufficient to restore plot functionality I’ll get to work on bazel + tensorflow > 1.9
<rekado>then replace them once we’re confident that the bazel build system works correctly
<rekado>doesn’t need to be in guix proper
<PurpleSym>rekado: Not sure the imported packages work with node-build-system in guix-science. And the change would benefit guix proper too.
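The version-string selection and constraint solving described above can be illustrated with a toy brute-force solver. This is only a sketch, not Conda's actual algorithm (real implementations use SAT solving or optimized backtracking); all package names, versions, and the `CHANNEL` index layout below are invented for illustration.

```python
# Toy illustration of what a channel dependency solver does: pick one
# archived binary per package, selected by version string, so that all
# constraints hold.  NOT conda's real algorithm; names/versions invented.
from itertools import product

# A toy channel index: package -> version -> list of (dep, allowed versions).
CHANNEL = {
    "numpy": {"1.19": [], "1.24": []},
    "pandas": {
        "1.1": [("numpy", {"1.19"})],
        "2.0": [("numpy", {"1.24"})],
    },
    "oldlib": {"0.1": [("numpy", {"1.19"})]},
}

def closure(requirements):
    """All packages that could end up in the environment: the requested
    ones plus every transitive dependency mentioned by any version."""
    needed, stack = set(), [pkg for pkg, _ in requirements]
    while stack:
        pkg = stack.pop()
        if pkg not in needed:
            needed.add(pkg)
            for deps in CHANNEL[pkg].values():
                stack.extend(dep for dep, _ in deps)
    return sorted(needed)

def solve(requirements):
    """Brute force: try every combination of version choices and return
    the first one that satisfies both the user requirements and each
    chosen version's dependencies; None if the constraints conflict."""
    packages = closure(requirements)
    for combo in product(*(sorted(CHANNEL[p]) for p in packages)):
        chosen = dict(zip(packages, combo))
        user_ok = all(chosen[p] in allowed for p, allowed in requirements)
        deps_ok = all(
            chosen[dep] in allowed
            for p in packages
            for dep, allowed in CHANNEL[p][chosen[p]]
        )
        if user_ok and deps_ok:
            return chosen
    return None

print(solve([("pandas", {"2.0"})]))                       # a consistent pick
print(solve([("pandas", {"2.0"}), ("oldlib", {"0.1"})]))  # conflict: None
```

The exponential search over `product` is why large environments get slow: the constraint model stays the same in production solvers, only the search strategy is smarter, and an unsatisfiable set of version constraints still ends in a solver failure.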
<rekado>I’m still working on keras (rebuilding tensorflow multiple times takes a long time…), but I’m itching to finish the rstudio upgrade
<rekado>after a day’s work I’ve patched tensorflow and keras enough to get down to only 16 test failures, but I don’t think I can reduce this further
<rekado>the problem really is that our tensorflow is ancient, and keras is old relative to numpy
<rekado>so we can’t get a much newer keras and we can’t stick with the old version either
<rekado>(current number of test problems without my patches: 421 failures + 603 errors)