IRC channel logs

2017-11-22.log


<pmikkelsen>hi guix
<bms_>Hello.
<sturm>Any gotchas when running a local GuixSD publish server? I'm running the server, have imported the public key onto the system running Guix on a foreign distro. Then I run:
<sturm>guix build emacs --dry-run --substitute-urls="http://myserver.local:8080", but I'm not even seeing a hit show up in the server log
<sturm>I can run curl against the URL and get the publish default page back
<sturm>I've just tried setting up the publish server as a service with just "(service guix-publish-service-type)". Logs say that it's been disabled for respawning too fast, though no hint as to the error. I've tried removing and recreating the signing key.
<apteryx>Hello Guix!
<bms_>Hello.
<apteryx>How is it going?
<bms_>Pretty good. How are you?
<apteryx>I'm good too! I just finished moving into my new place. Thrilled to restart hacking on Guix!
***pksadiq_ is now known as pksadiq
<PlainDave>I've downloaded guixsd-usb-install-0.13.0.x86_64-linux.xz, but I don't know what to do with it. I've installed most of the Ubuntu flavors successfully. Do I change the file extension to .iso?
<brendyn>PlainDave: It's not an .iso
<brendyn>PlainDave: you need to decompress it first, and then use dd to write it over a usb drive
<PlainDave>okay cool
<PlainDave>brendyn, Thanks
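The decompress-then-dd procedure brendyn describes can be sketched as below. To keep the demo safe and self-contained, a tiny scratch file stands in for the real guixsd-usb-install-0.13.0.x86_64-linux.xz download, and a regular file stands in for the USB device node (on a real system the `of=` target would be something like /dev/sdX, which dd will erase):

```shell
set -e
cd "$(mktemp -d)"
# Stand-in for the downloaded image; the real file would be
# guixsd-usb-install-0.13.0.x86_64-linux.xz.
printf 'GuixSD installer image' > image
xz image                           # compress: produces image.xz, removes image
xz --decompress --keep image.xz    # recover image (like: xz -d guixsd-usb-install-….xz)
# Write the raw image to the target; on real hardware, target would be the
# USB device node (double-check the device name first!).
dd if=image of=target bs=4M conv=fsync 2>/dev/null
cmp image target && echo OK
```

`cmp` confirms the written copy is byte-identical to the decompressed image, which is also a reasonable sanity check after writing a real USB stick.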
<brendyn>are you planning to overwrite your system with guixsd?
<PlainDave>No, I'm just going to install it into a partition.
<brendyn>looks like xdg-open requires mimeinfo
<brendyn>Anyone know how to use proot on Guixsd? I just get: proot info: pid 9385: terminated with signal 11
<vagrantc>aha. figured out a situation in which it removes a package ... if i install a package while "guix package -u" is running ... it removes the package when "guix package -u" finishes
<vagrantc>probably some way of maintaining transactionality
<rekado>brendyn: using proot is demonstrated here: http://guix-hpc.bordeaux.inria.fr/blog/2017/10/using-guix-without-being-root/
<rekado>vagrantc: yes, that’s why I asked if you’re running “guix package” more than once at the same time (e.g. in different terminal sessions).
<rekado>I’m surprised I need to build qt after upgrading.
<civodul>hey rekado
<civodul>did anything related to Qt change?
<efraim>Could have been a dependency
<rekado>now building qtwebkit…
<civodul>both berlin and hydra are busy building stuff
<civodul>i'm not sure if it's just that they're lagging behind
<brendyn>How long does it take for one of the build servers to iterate through all the packages? does it stop to update the guix version at times, or only once it gets to the end?
<civodul>it might be the orc update, no?
<civodul>brendyn: currently it's mostly sequential: 1) update, 2) build
<civodul>i don't have any figures though
<brendyn>i guess it would be different every time depending on what needs to be built
<brendyn>I noticed that hydra blocks all search engines with its robots.txt. Is this intentional?
<civodul>yes
<brendyn>the webm video in this is 404'd now https://www.gnu.org/software/guix//guix-ghm-update-20170825.pdf
<civodul>yeah it's a file:// URL, so it only works on my laptop ;-)
<wigust>Hello Guix, I build a service configuration A with gexp-compiler. Could I refer to another 'config B' inside 'config A' as a path (/gnu/store/...)?
<brendyn>oh right. I thought gnu.org/data/video actually existed
<wigust>config B also produced by gexp-compiler
<civodul>wigust: yes, just do #$config-b
<civodul>hi wigust BTW :-)
<wigust>civodul: OK, Thanks!
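civodul's `#$config-b` suggestion can be sketched roughly like this; the names and file contents below are made up for illustration, assuming both configs are file-like objects that gexp-compilers can lower to the store (requires `(use-modules (guix gexp))`):

```scheme
;; Hypothetical config B: any file-like object lowered to /gnu/store/….
(define config-b
  (plain-file "b.conf" "option-b = 1\n"))

;; Inside config A's gexp, #$config-b expands to config B's store path.
(define config-a
  (computed-file "a.conf"
    #~(call-with-output-file #$output
        (lambda (port)
          (format port "include ~a~%" #$config-b)))))
```

When config-a is built, the gexp machinery also registers config-b as a dependency, so the referenced store path is guaranteed to exist.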
<efraim>orc -> gstreamer -> qt
<civodul>"guix refresh -l orc" mentions 206 packages, which may be true, but doesn't reflect the fact that one of them is Qt
<rekado>oof!
<rekado>so… I was told that we have yet another rack full of unused servers; a bit more recent hardware than the old Sun hardware of berlin.guixsd.org. As soon as IT wires up the rack switch I can get to install an extension to berlin.guixsd.org.
<rekado>Also: next week we’ll retire a couple of servers that will then go straight to berlin.guixsd.org.
<civodul>rekado: woow!
<civodul>you rock
<civodul>the MDC too ;-)
<kmicu>Thank you rekado!
<jonsger>rekado: those servers are just amd64, right?
<rekado>jonsger: yes.
<rekado>I’m very happy that I get to repurpose servers as I see fit.
<rekado>jonsger: they are also used to build binaries for i686
<jonsger>:)
<brendyn>for some reason, occasionally when I'm editing scheme code in emacs, my yank will start glitching such that it prepends some text from my buffer to the start of my yank
<kmicu>Thank you so much for any additional build servers.
<brendyn>it doesn't happen when I yank into *scratch*
<brendyn>when I run helm-show-kill-ring, I can see the kill just fine and enter it, but when I use C-y it gets borked
<kmicu>Hi brendyn, are you able to reproduce the issue with 'emacs -Q'? (The #emacs channel could also provide more helpful feedback.)
<brendyn>probably not. I suspect it's caused by all the spacemacs crap I have installed
<brendyn>kmicu: strangely, that text gets inserted by other commands too; sp-forward-barf-sexp mysteriously inserts it
<brendyn>it's like this text is "jammed" in emacs somewhere and keeps coming out when text is input via some commands
<kmicu>Heh, I have not experienced anything like that in my Emacs. I have no idea which plugin could do it. Does that behavior persist after an Emacs restart?
<apteryx>Hello, is our kernel patched against blueborne (bluetooth vulnerability): https://www.armis.com/blueborne/#devices
<apteryx>It says kernel up to 4.14 are affected.
<kmicu>brendyn: Do you keep your Emacs config in a version control system? Maybe you could roll back to a previously working version.
<brendyn>kmicu: I think it has been present for a long time. The bug does not occur all the time, and I'm not sure how to reliably reproduce it. I thought it maybe occurred when I copied an unbalanced ( or ", but it doesn't break when I try to reproduce it
<brendyn>a frustrating bug
<brendyn>so it goes away when i restart emacs
<brendyn>I might try to find a way to profile the elisp the next time it happens
<ng0>wooooo :)
<ng0>I'm at 44% on a very old guix installation
<ng0>pull on 512MB RAM + 6 GB Swap works again
<ng0>great job everyone who worked on working around the leak
<civodul>it needs no more than 6G of swap, great job indeed :-)
<civodul>who wants to try "guix pull --branch=wip-pull-reload"?
<civodul>it's the new, hopefully fixed multiple-derivation 'guix pull'
<civodul>mb[m]1: should work better this time!
<ng0>I'll try it on the server once it is done with this pull.. probably the first successful pull in 12 months
<ng0>I think part of the reason why compiling is taking so long is a) the little RAM, and b) at some point people all over the world started using the searx instance running at s.n0.is like rabbits on a bad trip down to wonderland.
<ng0>keep the logs for a week and shred them afterwards, what could go wrong... watching the queries live I feel like Google. It's good that I don't pay anything for bandwidth
<civodul>so, anyone willing to try "guix pull --branch=wip-pull-reload"?
<civodul>it won't eat your computer, i promise! :-)
<ng0>give it probably 12 more hours and I can report :)
<civodul>ok :-)
<ng0>or maybe less, depending on whether the server finishes faster or if I'll turn on the laptop to test it.
<rekado>civodul: I’m trying it now on elephly.net (i686)
<rekado>civodul: it breaks
<rekado>guix/self.scm:589:4: In procedure reload-guix:
<rekado>guix/self.scm:589:4: progress-reporter/bar: unbound variable
<rekado>
<rekado>I’m running “./pre-inst-env guix pull …” in a git checkout at version v0.13.0-4793-g41916bea1.
<lfam>civodul: With Guix at commit d8e257113c48b3, pulling wip-pull-reload commit 62dbd6e10b5317f fails like this: https://paste.debian.net/997077/
<civodul>ACTION looks
<civodul>thanks lfam & rekado
<lfam>Sorry I don't have time to debug it myself ATM
<lfam>Ah, I see another similar report :p
<civodul>i can reproduce it now
<mb[m]1>There are various problems with glibc 2.26 and C++ mode that currently manifest on core-updates.
<mb[m]1>Here is one instance: https://sourceware.org/bugzilla/show_bug.cgi?id=21930
<mb[m]1>And another: https://sourceware.org/bugzilla/show_bug.cgi?id=22296
<mb[m]1>I'm going to dig through the 'release/2.26/master' branch and pick out the relevant fixes, but it's some 40-50 commits ahead of the 2.26 tag.
<jonsger>how can I remove guix from binary installation?
<mb[m]1>jonsger: rm -rf /gnu and rm -rf /var/guix will remove all traces of Guix.
<mb[m]1>Apart from changes to shell startup scripts and guix-daemon user accounts.
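mb[m]1's two `rm -rf` commands can be rehearsed against a scratch root first, which is how the demo below stays safe to run; a temporary directory stands in for the real filesystem. On a real foreign-distro system you would stop guix-daemon, run the removals as root against /gnu and /var/guix, and then clean up the shell-startup edits and guixbuilder accounts by hand:

```shell
set -e
# Scratch root standing in for / so nothing real is deleted.
root=$(mktemp -d)
mkdir -p "$root/gnu/store" "$root/var/guix/profiles"
# The actual removal commands, relative to the scratch root:
rm -rf "$root/gnu" "$root/var/guix"
[ ! -e "$root/gnu" ] && [ ! -e "$root/var/guix" ] && echo "all traces removed"
```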
<mb[m]1>We should send someone to next year's "GNU Tools Cauldron" and convince the glibc guys to provide dot-releases :P
<mb[m]1>Ooh, the "relink libfoo.so with libthread.so for IFUNC symbol 'longjmp'" problem seems to finally be fixed in 2.27 (and 2.26 release branch).
<mb[m]1>Apparently there's also a problem with LD_PRELOAD in 2.26 that has been fixed: https://sourceware.org/bugzilla/show_bug.cgi?id=22299
<mb[m]1>It's tempting to just pick all the fixes from that branch.
<jonsger>Sleep_Walker: the opensuse package of guix doesn't create the stuff in /var/guix. Is this intended?
<vagrantc>is it reasonable for a user to run "guix system build foo.scm" and then run "sudo -i guix system reconfigure foo.scm" ? e.g. build everything in one pass, and install it in another?
<Sleep_Walker>jonsger: no, probably not
<vagrantc>i guess if there is non-determinism in some of the builds, reconfigure might end up rebuilding some things...
<vagrantc>or inconsistencies between the user's guix and root's guix...
<jonsger>Sleep_Walker: do you know how I get the stuff into /var/guix
<Sleep_Walker>jonsger: you can just create the directory, start the guix-daemon service and use it
<Sleep_Walker>but the package is outdated
<Sleep_Walker>for some reason it requires shepherd as a build dependency
<Sleep_Walker>I created a package for that, but somehow people are sensitive about getting another package with /sbin/init in it :b
<bavier>vagrantc: differences in the guix versions would be the biggest issue
<jonsger>okay
<bavier>vagrantc: but you could just run the build command as root also
<vagrantc>i like not having to gain root privileges unnecessarily
<vagrantc>so i was liking, at least in theory, the build as user, install as root workflow
<vagrantc>though even the root's builds happen as other users, for the most part
<vagrantc>still, executing as little code as root as possible, generally a good principle
<bavier>vagrantc: right, you'd be communicating with the daemon, which is already running with root privileges, and the permissions are dropped as soon as possible for the actual builds
<vagrantc>so, following the desktop.scm shipped with the installer... i end up with gnome and xfce with slim as a display manager ... is this intentional as the recommended display manager for guixsd rather than gdm or lightdm or whatever?
<vagrantc>or is it more of a lowest common denominator?
<vagrantc>ACTION isn't choosy with display managers, as long as they work
<vagrantc>and i haven't had problems with slim, per se
<ng0>gdm is work in progress
<ng0>there are some more packaged, but only one in addition to SLIM has a service
<vagrantc>ok, makes sense
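For reference, SLiM comes in via %desktop-services, so a configuration like the shipped desktop.scm can tweak it without swapping display managers. This is a hedged sketch, not the exact desktop.scm contents; auto-login? is just one example slim-configuration field, and the elided operating-system fields are whatever your config already has:

```scheme
(operating-system
  ;; …other fields as in desktop.scm…
  (services
   (modify-services %desktop-services
     (slim-service-type config =>
                        (slim-configuration
                         (inherit config)
                         (auto-login? #f))))))
```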
<vagrantc>well, build as user, reconfigure as root worked out from this last run
<vagrantc>e.g. the reconfigure phase was quite brief
<vagrantc>except that i ended up with an empty grub.cfg ...
<vagrantc>!!
<vagrantc>for some reason the filesystem was unmounted uncleanly when i used ctrl-alt-delete to reboot
<vagrantc>and corrupted grub.cfg ... is ctrl-alt-delete not safe on guixsd?
<vagrantc>and all system commands are missing from my logged in user
<vagrantc>and everything else
<vagrantc>ok, that went pretty badly
<vagrantc>but, after fsck'ing the disk from another system ... grub would load, and i could load an older, working profile
<vagrantc>so that's pretty cool
<civodul>mb[m]1: re libc 2.26, what should we do?
<mb[m]1>civodul: I have picked a selection of patches for a total of 849 lines so far.
<mb[m]1>Will submit it to the tracker.
<civodul>woow
<civodul>patches for math.h?
<mb[m]1>But, there are still lots of important-looking fixes left.
<mb[m]1>The amalgamated math.h patch is 688 lines.
<civodul>fun
<civodul>looks like they made a mistake :-)
<civodul>when you think of the companies behind the toolchain, it's a bit crazy that they don't have CI testing all the compiler/libc/etc. combinations
<mb[m]1>Yeah, it seems to be related to a new float128 interface.
<mb[m]1>Yes indeed.
<mb[m]1>We should dispatch someone to next year's GNU Tools Cauldron.
<civodul>i went there a couple of years ago actually
<civodul>Prague and Cambridge
<civodul>it's sad that it's so "isolated"
<civodul>well, understandable too
<mb[m]1>The sheer amount of fixes in the 2.26 stable branch spawned a discussion about a stable dot-release process: https://sourceware.org/ml/libc-alpha/2017-09/msg01134.html
<mb[m]1>For a glimpse of the fun, here is the stable 2.26 branch (scroll down to find the tagged 2.26 commit): https://sourceware.org/git/?p=glibc.git;a=shortlog;h=refs/heads/release/2.26/master
<mb[m]1>It might be easier to just pick all commits.
<civodul>mb[m]1: it would be good to chime in and say that we don't just "backport patches"
<civodul>well it's an old thread already
<mb[m]1>Wow, September, right.
<mb[m]1>I just found it from the linked bug report, didn't read it yet.
<mb[m]1>But I'll see what the conclusion was and possibly revive it.
<civodul>rekado: i wonder if we're consuming disk space more quickly than we're GC'ing on berlin
<civodul>which would mean more than 80G/day
<vagrantc>ACTION wonders how much disk space berlin uses
<civodul>vagrantc: currently it's ~200G
<civodul>that also includes the 'guix publish' cache though, not just /gnu/store
<rekado>civodul: soon I’ll add the big storage to berlin
<rekado>then we can stop worrying about this
<rekado>but I just haven’t gotten around to it yet
<rekado>need to take the chance to test the glusterfs package
<rekado>I’ll be in the data centre tomorrow to test the installer image and to power cycle some of the nodes (they need their ILOM reset)
<rekado>civodul: BTW: I’ll be speaking about Guix at “Bio-IT World” in Boston next year. Big bioinfo/sysadmin conference.
<civodul>re storage, neat!
<civodul>kudos for the coming talk as well!
<civodul>we should announce it on the guix-hpc site, hint hint ;-)
<civodul>ACTION -> zZz
<civodul>night!
<vagrantc>so if berlin is using ~200GB of storage, a public proxy with 400-500GB of storage would be useful?