IRC channel logs



<pmikkelsen_>good morning guix
<wingo>so, is there a solution to the problem of finding the configuration file for a service?
<wingo>e.g. i would like to know what the dovecot configuration file looks like
<wingo>for the currently running dovecot service
<wingo>i guess ps is the thing
<wingo>still, would be nice if it were in /etc, somehow...
<civodul>hey wingo!
<civodul>i would use "ps", or "guix gc -R $(guix system build config.scm) | grep dovecot"
<civodul>obviously for debugging purposes only
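The two inspection routes mentioned above can be sketched as shell commands (`config.scm` stands for your own system configuration; as civodul says, this is for debugging only, and both commands assume a running Guix system):

```shell
# The running daemon's command line usually names its config file:
ps -o args= -C dovecot

# Or walk the closure of the built system and look for dovecot items:
guix gc -R "$(guix system build config.scm)" | grep dovecot
```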
<efraim>evaluation 109767 has a 6.75% failure rate, to drop down to 5% we'd have to make 310 of those pass building
<civodul>how many are in dependency-failed status?
<oriansj>sneek: later tell reepca you can list your stage0 Forth work as a resounding success
<sneek>Got it.
<oriansj>sneek: botsnack!
<efraim>civodul: I didn't check, but I think about 170 for each of armhf and i686 are Java bootstrap related
<civodul>efraim: oh right, but these are probably really hard to fix
***Piece_Maker is now known as Acou_Bass
<efraim>i don't know about sablevm-classpath, but it looks like porting sablevm itself to aarch64 might be possible
<efraim>and gprolog has actual instructions for porting, so that's on my eventually list
<janneke>civodul: i'd like to be able to `build' a system test just like a package, at least so that it's recorded in the store, wdyt?
<janneke>i wrote an ugly hack some time ago that would allow listing a system test in a package list but that felt like the wrong approach
<civodul>efraim: i think sablevm is unmaintained, but rekado would know better
<civodul>janneke: we discussed this with cbaines at the GHM, and i think we should just add a new "guix system test" command
<civodul>which would essentially be the same as run-system-test.scm
<civodul>would that work for you?
<efraim>civodul: if i have my debian releases correct, sablevm was removed from debian about 8 years ago
<efraim>the issue is even without using the bundled libffi it still tries to configure it, which fails on aarch64 since it didn't exist
<civodul>oh is that the only problem?
<civodul>sounds like it should be possible to work around it
<janneke>civodul: yay, already discussed :-)
<efraim>dnl we *always* have to configure subdirs, for explanation see:
<civodul>efraim: what if you replace libffi/configure with a symlink to 'true'? :-)
<civodul>ok maybe it takes more than this
<janneke>civodul: i think that would work for me, i assume that these test results are published too and that a second invocation (by someone looking at my substitutes) will just fetch the pre-built result substitute?
<efraim>civodul: I was just going to remove AC_CONFIG_SUBDIRS(src/libffi) from
<civodul>janneke: right, the test result is a store item
<civodul>janneke: so we already discussed this? my memory is flaky i think :-)
<civodul>efraim: try to patch 'configure' rather than '', if possible
<janneke>civodul: good. i was wondering about the necessity/feature of being able to install such a result as a package
<efraim>civodul: of course
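The workaround efraim and civodul are circling around can be demonstrated on a stand-in `configure.ac` (the real file name was cut off in the log, so this is an assumption): delete the `AC_CONFIG_SUBDIRS` line so the bundled libffi's configure is never run, then regenerate `./configure`.

```shell
# Stand-in configure.ac with the offending macro:
cat > configure.ac <<'EOF'
AC_INIT([sablevm], [1.13])
dnl we *always* have to configure subdirs, for explanation see: ...
AC_CONFIG_SUBDIRS([src/libffi])
AC_OUTPUT
EOF

# Drop the sub-configure so the bundled libffi is never configured:
sed -i '/AC_CONFIG_SUBDIRS(\[src\/libffi\])/d' configure.ac

# Afterwards, regenerate ./configure, e.g. with: autoreconf -vif
```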
<janneke>might be unnecessary and weird
<civodul>janneke: it's not a "package", it's a store item
<rekado>efraim, civodul: sablevm is indeed unmaintained.
<civodul>it contains the SRFI-64 test log, and sometimes things like screenshots
<janneke>civodul: no, i was cheering that you and cbaines discussed it while i was thinking about it
<rekado>efraim: could you point me at the error you get with sablevm?
<civodul>it seems unusual, but i think it's ok
<civodul>janneke: heh, ok :-)
<janneke>did you discuss guix deploy at GHM?
<rekado>a little
<janneke>well, i'm happy that davexunit is looking at guix environment, one thing at a time :-)
<rekado>but I don’t think we came up with concrete steps to get to guix ops/deploy.
<rekado>I’d be happy already if we could push pre-built systems to a target machine with guile-ssh.
<janneke>rekado: that's new to me, i'd like to hear/read about such a solution too
<janneke>hmm did i just bad-mouth go without doing proper investigation...
<rekado>that’s all I know about Go, though.
<janneke>rekado: thanks...gnu/packages/golang.scm says (excerpt): As of go-1.5, go cannot be bootstrapped without go-1.4, so we need to use go-1.4 or gccgo-5.
<janneke>sounds like: some extra steps, but still milky
<efraim>I have a patch on the ML to build go with gcc
<davexunit>rekado: does open-connection work with a non-local socket yet?
<davexunit>I always wished that you could just pass a port object to open-connection
<davexunit>that was one of my big hang ups with my initial stab at 'guix deploy'
<efraim>it turns out for sablevm copying arm->aarch64 and 's/ia64/aarch64/g' gets me pretty far
<efraim>uh, just got sablevm-classpath to compile
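The copy-and-rename trick efraim describes can be sketched on hypothetical paths (the real sablevm source layout is not shown in the log): duplicate the arm-specific files into an aarch64 directory, then rewrite `ia64` references in place.

```shell
# Hypothetical source layout for illustration:
mkdir -p port/arm port/aarch64
echo 'registers for ia64 target' > port/arm/regs.h

# Copy the arm port, then rename ia64 -> aarch64 throughout:
cp -r port/arm/. port/aarch64/
grep -rl ia64 port/aarch64 | xargs sed -i 's/ia64/aarch64/g'

cat port/aarch64/regs.h   # -> registers for aarch64 target
```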
<davexunit>I'm also a little bit unsure about what the exact feature set of 'guix deploy' should be. if it's just deploying to servers it only covers a fraction of use cases.
<davexunit>it should also support immutable deployments where VMs are never updated, but replaced with new ones.
<davexunit>and then there's the whole "cloud" thing. AWS and OpenStack have declarative infrastructure management tools (CloudFormation and HEAT). people often use wrappers around these APIs in their language of choice to create code that can reproduce their entire production environment.
<davexunit>so I would like 'guix deploy' to be generic and extensible enough to support all of these things.
<davexunit>remote, in-place server updating is just the tip of the iceberg.
<janneke>isn't that already pretty useful in itself, and a building block for the rest as well?
<civodul>davexunit: see GUIX_DAEMON_SOCKET in the manual: remote sockets are supported, albeit slowly
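The `GUIX_DAEMON_SOCKET` mechanism civodul points to amounts to setting one environment variable (the host name below is hypothetical; see "The Store" in the Guix manual for the supported URL schemes):

```shell
# Point the guix client at a remote guix-daemon instead of the local socket:
export GUIX_DAEMON_SOCKET=ssh://build-host.example.org

# Subsequent client commands, e.g. "guix build hello", now talk to that
# remote daemon (slowly, as civodul notes).
```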
<civodul>BTW, i have plans for "guix system reconfigure --remote=host", which is sort of a subset of 'guix deploy'
<civodul>gasp! i wrote an article with texlive-2016 where it was typeset in 9pt, and now it's 10pt :-(
<civodul>i had to go back to my profile generation of the time i wrote the thing to understand what was going on
<janneke>civodul: yeah, --remote=host already sounds nice
<davexunit>civodul: that is awesome!
<davexunit>that will be very useful on its own
<davexunit>it will be the job of 'guix deploy' to make that work at scale
<davexunit>though, I do have some questions.
<davexunit>building the system redundantly on N machines doesn't seem like a good idea.
<davexunit>it's common for a developer workstation to have more power than the server being deployed to.
<janneke>davexunit: surely all machines use a common substitute server?
<davexunit>so I thought it would be best to build the system locally, copy the closure to the N systems, and instantiate it
<davexunit>janneke: so now people need to run their own substitute server?
<davexunit>sounds like a usability failure to me
<janneke>davexunit: i think everyone who needs guix deploy atm will want to run a substitute server?
<davexunit>I don't want to.
<janneke>but this could be a design mistake...
<davexunit>because that entails having your own build farm
<davexunit>which can be nice, but most people don't need it
<janneke>installing a couple of offload servers seemed to me to be the first thing to do
<davexunit>I'd like to see what civodul thinks
<janneke>i don't want my people to build on their laptops
<davexunit>but you can configure offload servers if you want to
<davexunit>that is good. that means that the various tools compose well.
<davexunit>I don't want people to *have* to do that
<janneke>davexunit: so...the closure needs to be built once anyway, at least somewhere
<janneke>what about mandating that each slave uses the master as a substitute server?
<janneke>the master can still use offload machines
<janneke>and when it's built, the slaves can simply reconfigure and pull all substitutes
<janneke>ACTION likes the build/offload/substitute mechanism ;-)
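janneke's master/slave scheme can be sketched with `guix publish` (host names and the key file name are hypothetical; the slave must trust the master's signing key before it will accept its substitutes):

```shell
# On the master: serve the local store as a substitute server:
guix publish --port=8080 &

# On each slave: trust the master's signing key once...
guix archive --authorize < master-signing-key.pub

# ...then pull substitutes from the master instead of building:
guix build hello --substitute-urls="http://master.example.org:8080"
```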
<civodul>davexunit: ideally users would have the options of building things locally and sending it to the target machine, or building everything on the target machines
<civodul>both have pros and cons
<davexunit>the reason why local building (with or without offloading) is necessary is for immutable vms
<davexunit>you want to produce the image, upload it, and launch new vms with it
<civodul>the key thing will be 'remote-eval', the (hypothetical) procedure wrapped around Guile-SSH's 'node-eval'
<davexunit>in my day-to-day work, my workflow for deploying new software is to create a base ec2 instance, run my build scripts (chef), make an AMI (disk image) of the result, tear down that ec2 instance, create N new ec2 instances with the new disk image, then finally tear down the old ec2 instances running old code
<davexunit>and that's all automated
<davexunit>it spans multiple tools to handle each piece, but guix would be able to do it all.
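The build-locally-then-ship step davexunit and civodul describe can be sketched with `guix archive` over SSH (the target host name is hypothetical, and Guix must already be installed and authorized on the target):

```shell
# Build the whole system locally (offloading still applies if configured):
system=$(guix system build config.scm)

# Ship its full closure to the target machine:
guix archive --export -r "$system" | ssh target.example.org guix archive --import
```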
<civodul>that would be awesome
<davexunit>there's just a lot of use-cases to consider
<davexunit>I want it to work well for managing a simple home server as well as a larger virtualized environment
<rekado>we are still pre 1.0, so we don’t have to get it right with the first attempt
<janneke>rekado: so after 1.0 it's exclusively `first time right'? ;-)
<rekado>no mistakes allowed
<rekado>we’ve had long enough to try until then :)
<janneke>ACTION wonders if we'll see 0.99 in my lifetime then :)
<wingo>trying to do the right thing is great but things will never be perfect :)
<efraim>I got sablevm to build on aarch64, now to see if icedtea@1 will build
<efraim>classpath failed on native/fdlibm, probably the same substitute* as before