IRC channel logs



<zimoun>rekado: about the manual toc and accent, I do not know the internals of makeinfo, nor whether it is possible to read the .tex before it is processed by pdftex (where, if I read correctly, the error seems to appear). Weird! Thanks for working on it. :-)
<divoplade>leoprikler, patch sent!
<divoplade>(I mean, I sent a patch on guix to accept my mkdir-p function)
***catonano_ is now known as catonano
<OriansJ>does anyone know what %base-initrd-modules actually expands into? because I did grep -iR %base-initrd-modules on the source tree and it doesn't really help to understand what exactly it implies
<OriansJ>hello mzlan
<mzlan>good day
<OriansJ>quite late evening over here
<mzlan>It's just morning where I am
<mzlan>You are a Guix mentor?
<OriansJ>I am a developer working primarily on the guix bootstrap
<mroh>OriansJ: gnu/system/linux-initrd.scm:303 (default-initrd-modules).
<mzlan>i see
<mzlan>sorry I am weak in English ...;)
<OriansJ>mzlan: there are sometimes those who speak other languages as well here (a wait may be required)
<OriansJ>mroh: just to ensure I have translated correctly this is the correct expansion:
<mzlan>this time, I'm using Windows 10... I set up Guix using the QEMU program..
<bavier[m]1>OriansJ: `guile -c "(use-modules (gnu system linux-initrd))(display %base-initrd-modules)"`
<mzlan>Mr OriansJ, is Guix still in draft form as an OS?
<bavier[m]1>mzlan: it is fully functional for many users
<mzlan>okays...guix can doing OS program inside many package..?
<vits-test>sneek: later tell mzlan: some languages:
<sneek>Will do.
<vits-test>sneek: botsnack || botsnack
***amfl_ is now known as amfl
<abcdw>Has anyone encountered a failure while building an image, with something like this: build of /gnu/store/vl83y5wxw8yax0fww1qmmnrvvac859p4-u-boot-tools-2020.10.drv failed ?
<ani> hi guix.
<vits-test>ani: Hello
<efraim>abcdw: which architecture? iirc I tested it on aarch64
<abcdw>efraim: Running on x86_64: guix time-machine -C channels.scm -- system disk-image ../../../src/guix/gnu/system/install.scm
<civodul>Hello Guix!
<abcdw>Hi, Ludo!
<efraim>looks like it's missing openssl on x86_64, although I'm not sure how useful it is to begin with on x86_64
<abcdw>efraim: channel contains guix ad67d20869d7c7168941bc3d20218cb45ed82b5f. install.scm from the same commit as well. I don't know why it builds u-boot at all.
<efraim>looks like it's needed for genimage
<brendyyn>I still can't pull: guix pull: error: Git error: missing delta bases
<brendyyn>anyone know a fix to this?
<civodul>brendyyn: ouch, never seen that
<civodul>perhaps you can "rm -rf ~/.cache/guix/checkouts", which will force a new clone
<brendyyn>it's possible my file system is dying. I got some ext4 errors and was dropped into the Guile early-boot thing
<civodul>oh that's a bad sign indeed
<efraim>we can't just take it out for x86_64, it's needed for the test suite and building arm disks. I'll see about fixing u-boot-tools on x86_64
<civodul>mothacehe: thanks for the Guile-Git release! \o/
<mothacehe>hey guix!
<mothacehe>civodul: yw :)
<brendyyn>one of the things that happened the other day was that a whole bunch of programs crashed, and my user shepherd failed to notice that the syncthing service it started was dead.
<brendyyn>so it never restarted it
<abcdw>efraim: yep, it complains about missing openssl. Is it possible to build iso on x86_64 somehow?
<civodul>mothacehe: were you planning to send an announcement to guile-user?
<efraim>the fastest way now is to either revert the u-boot update or to change genimage to take bash-minimal instead of u-boot-tools and skip the tests
<efraim>i'm working on building u-boot-tools for x86_64 now though
<mothacehe>civodul: sure! I also need to document the curl part to upload the release tarball.
<abcdw>efraim: Oh, ok, thank you for help!
<civodul>mothacehe: awesome :-)
<civodul>yeah i don't even know how uploading to works
<mothacehe>it's not really pleasant!
<civodul>i'll go ahead with the progress report patches:
<wleslie>what would it take to have a guix package version-agnostic? so if I wanted to install an arbitrary version of cpython, I could run `guix environment --ad-hoc python@3.7` and have it just work, even though the latest published version is 3.8
<wleslie>sorry, that question is probably a time sink, just blue-sky thinking
<brendyyn>wleslie: it's a fine idea. I think what you mean is, say, 3.7 doesn't exist in guix atm but this would generate that by modifying the current one?
<civodul>wleslie: we have updaters ("guix refresh"), and i've been thinking we could have a "package transformation option" that changes a package's version to the latest upstream version
<wleslie>the thing is, I don't usually want the latest versions of things. the great thing imo about nix and guix is I can have an old version of x and a new version of y and that's perfectly acceptable
<dannym>civodul: I mean the progress report is a good thing to have, but can't savannah be configured so that shallow clones from specific commits from git work (i.e. fetching "unadvertised" git objects) ? Then I suspect cloning guix from git won't take so long to clone in the first place.
<dannym>Right now it falls back to cloning the full repository each time
<brendyyn>wleslie: python 3.7.9 has security fix support until 2023-06. I don't see why it couldn't be readded
<civodul>dannym: i don't know how Savannah can be configured, but the problem is that libgit2 doesn't support shallow clones
<dannym>civodul: Then I think extending libgit2 to support them would be the right fix, not papering over it :)
<dannym>Still nice to have progress reports, though
<dannym>It reminds me of Windows 10 devs allegedly making a very fancy copy progress dialog in explorer because explorer was (and is!) so slow that even copying files in the local network made users think Windows crashed. Well, after the progress report, it didn't get faster ;)
<civodul>even with shallow clones progress report would be in order
<dannym>I agree
<civodul>but i agree, fixing libgit2 would be nice
<brendyyn>dannym: did you just say "papering" because you saw andy say it in #guile? *amused*
<wleslie>brendyyn: thing is, I probably don't care, right. I mean, it's nice that it got that tarbomb fix, but why would anyone use the stdlib http library?
<dannym>brendyyn: Hehe, no. Just a coincidence--and I guess the temptation to paper over things is always big :)
<wleslie>given that most of my python libraries are entirely offline compilers and analysers, it's unlikely any stdlib security bug is going to impact me
<brendyyn>wleslie: perhaps, but distro maintainers like to clean out junk, so unmaintained code can get booted out easily.
<wleslie>that seems antithetical to a system built around reproducible builds
<brendyyn>I suppose we still have python2 ;)
<abcdw>efraim: I try to do something like, but system-disk-image isn't exported and I can't use it directly. Is there another way to accomplish it?
<brendyyn>wleslie: i notice that there are some minor changes to the package definition with 3.7 and 3.8. it may fail to compile if its automatically generated
<wleslie>indeed, it could be like gcc where there are distinct definitions for nearby versions
<brendyyn>strange... I found the commit just before python was updated, but when I use it with time-machine, it still shows python 3.8: guix time-machine 5dc6d5ce9997e4caf66d154f91c3695e02e5386f -- show python
<brendyyn>oh.. #t isn't needed at the end of build phases any more?
<civodul>wleslie: you can use older versions with time-machine, or just by sticking to an older Guix revision that works for you
<brendyyn>it's a good change, since I found it difficult to reason about whether it could be avoided or not
<civodul>and there's also for ancient stuff
<wleslie>very cool
<wleslie>12 very cool packages
<vits-test>wleslie: What about: use inferiors + manifest?
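vits-test's "inferiors + manifest" suggestion can be sketched along the lines of the inferiors mechanism described in the Guix manual: pin an older Guix revision as an inferior and pull a package from it into a manifest. This is a sketch, and the commit string is a placeholder, not an actual pin of Python 3.7.

```scheme
;; Sketch of "inferiors + manifest": take python from an older Guix
;; revision and mix it with packages from the current one.
(use-modules (guix inferior)
             (guix channels)
             (gnu packages)      ;for specification->package
             (srfi srfi-1))

(define channels
  ;; An older revision of the 'guix' channel; the commit is a
  ;; placeholder for a revision that still carried the wanted version.
  (list (channel
         (name 'guix)
         (url "https://git.savannah.gnu.org/git/guix.git")
         (commit "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"))))

(define inferior
  (inferior-for-channels channels))

;; Combine the old python with current packages.
(packages->manifest
 (cons (first (lookup-inferior-packages inferior "python"))
       (map specification->package '("git" "emacs"))))
```

Passing a file like this to `guix environment -m` (or `guix package -m`) would then give the older python alongside current packages.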
<wleslie>well, I did a similar thing to get old automake a couple of days ago by copying a package expression into a standalone file and running it as a guix expression.
<wleslie>the difficulty would be if different versions of software required different versions of dependencies. right now, you get to reconstruct all of that yourself.
<zzappie>civodul: cool, I heard about the 10 Years Challenge on a Nature podcast and was wondering at the moment whether Guix knew about it :)
<wleslie>ideally, all of that software would have come with a guix expression in the first place, but we're not living in an ideal world
<civodul>zzappie: heh :-)
<civodul>it's a great opportunity to showcase how Guix can help
<brendyyn>Is it ok that the installer creates /etc/config.scm with /dev/sdX instead of uuids?
<civodul>/dev/sdX is for the bootloader, isn't it?
<brendyyn>yep and swap
<civodul>it should use UUIDs for file systems
<civodul>for the bootloader, we should maybe use /dev/disk as vagrantc suggested recently
<civodul>and for swap we should be able to use file system UUIDs (same as ext2?)
<brendyyn>and in (uuid "..." 'ext4), what does the ext4 mean? how is that a part of the uuid
<civodul>it's the format, equivalent to 'dce
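civodul's answer, as a config fragment: the symbol tells `uuid` which identifier format to parse, since not all file systems use DCE-style UUIDs (FAT, for instance, uses a short serial number). A minimal sketch with a made-up UUID:

```scheme
;; The second argument names the UUID *format*, not the file system
;; itself: ext2/ext3/ext4 all use standard DCE UUIDs, so 'ext4 here is
;; equivalent to 'dce.  The UUID value below is illustrative.
(file-system
  (mount-point "/")
  (device (uuid "4dab5feb-d176-45de-b287-9b0a6e4c01cb" 'ext4))
  (type "ext4"))
```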
<allana`>Hi guix! I am trying to define some environment variables in my system configuration. I have searched through the manual and the guix cookbook and I have not found anything. I'm assuming that this is possible. Can anyone share a working example?
<civodul>allana`: hi! you can do that by "extending" environment-service-type
<civodul>for example, add this to your 'services' field: (simple-service 'my-var environment-service-type '(("FOO" . "42")))
<civodul>we need more examples in the manual
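civodul's one-liner, placed in the context of an operating-system declaration; this is a sketch with everything unrelated elided, not a complete configuration:

```scheme
;; Extend the system-wide environment via 'simple-service', as
;; suggested above.  Only the 'services' field is shown.
(operating-system
  ;; ... other fields elided ...
  (services
   (cons (simple-service 'my-var environment-service-type
                         '(("FOO" . "42")))
         %base-services)))
```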
<jlicht>hey guix!
<allana`>civodul: Thanks!
<jlicht>sneek: later tell roptat: I have some PoCs for features that imho would be interesting for the home-manager, but I would like to get your input on how to flesh them out, if possible.
<sneek>Will do.
<jlicht>sneek: botsnack.
<mothacehe>new article:
<jlicht>mothacehe: I really like your writing style! One typo: "It contains /the/ pine64-barebones-os variable"
<mothacehe>thanks jlicht, fixed :)
<jlicht>mothacehe: a bit of a tangent perhaps, but I recall there being some golang-packages not liking being compiled with the qemu-binfmt service we have; Won't things like that spoil the fun you allude to in the final points of your post?
<jlicht>it could also just be an ancient issue that has long been fixed, I just remember waiting _very long_ for my rpi to compile syncthing, as it couldn't be offloaded to my main desktop /w qemu-binfmt :-)
<mothacehe>jlicht: yes, the situation is a little bit delicate. Cross-compilation is fast but only works on a subset of packages (linux.scm, roughly). Transparent emulation is really slow and has very few substitutes due to CI issues. It also suffers from
<mothacehe>nonetheless generating barebones images should now work pretty well :)
<jlicht>fair enough :)
<nefix>hello! could someone take a look at what I am missing in my configuration? I'm struggling with lists + maps + matches. Thanks!
<nefix>I'm getting a no match pattern error
<nefix>(also, the source:
<jlicht>hey nefix! Were you able to solve your previous issue regarding the git-config?
<divoplade>I'm not an expert, but are you sure you need to quote the list of ssh-host-configuration?
<jlicht>nefix: instead of the '((ssh-host-configuration , go with (list (ssh-host-configuration
<jlicht>What ^ divoplade said indeed!
<vits-test>nefix: is ~ resolving to HOME there?
<vits-test>* in identity-file
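The fix divoplade and jlicht describe, shown schematically: a quoted list yields raw list structure, so the record constructor is never evaluated. The field names below are guesses based on this conversation and may not match the actual guix-home-manager API:

```scheme
;; Likely wrong: '(...) quotes everything, so ssh-host-configuration
;; is treated as a plain symbol, not a record constructor, which later
;; trips the (match ...) in the service code.
;;   (hosts '((ssh-host-configuration ...)))

;; Instead, build a real list of records:
(hosts (list (ssh-host-configuration
              (name "example")                     ;hypothetical field
              (identity-file "~/.ssh/id_ed25519")))) ;hypothetical field
```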
<civodul>mothacehe: nice post!
<civodul>it's been a long ride, too :-)
<civodul>just thought that we could also allow passing an <image> to "guix system disk-image"
<civodul>you would do "guix system disk-image my-image.scm", where my-image.scm returns an <image>
<civodul>for those who want tighter control from Scheme
<jlicht>What are the constraints for me creating a wip-branch on the main guix git repository? (FSDG provided, of course)
<civodul>reviewing one patch for each commit that you push
<jlicht>you make a good point
<civodul>more seriously, there's no constraint, but it's nice to let people know on guix-devel what the branch is for
<jlicht>yeah, the branch would be to allow anyone (and specifically, people I know who have worked on/against this) to look at some actual code without going through [PATCH v236]-esque threads on the ML
<wleslie>mothacehe: very slick
<civodul>jlicht: makes sense!
<nefix>jlicht: not really, I'm still struggling with the git-config thing :S
<zimoun>mothacehe: nice post (not read all yet). You should also provide a link to your FOSDEM talkshow in the History part or at the end. IMHO :-)
<nefix>jlicht: with the (list), I get a match error (regarding #f)
<jlicht>I am currently running into the issue that in the build phase of (gnu-build-system), one of my freshly compiled binaries is run to generate more sources. It complained about missing the so-files, so I setenv LD_LIBRARY_PATH in a phase before build. Now everything works, but my later `validate-runpath'-phase is failing, with every so-file not being found.
<nefix>vits-test: eeeh at least that's my intention
<mothacehe>civodul: thanks, I had the exact same idea writing the article :)
<mothacehe>thanks wleslie and zimoun
<jlicht>nefix: could you share a link to both the code and the exact error message please?
<jlicht>preferably something I can feed to `guix home build' so I can follow along at home
<nefix>jlicht: sorry, here you have:
<bhartrihari>Hello, when I run the guix pull command, the connection gets refused and I get the following error: guix pull: error: Git error: failed to connect to Connection refused
<bhartrihari>Any ideas on what could be causing this error?
<jlicht>nefix: I think you found an undocumented issue in guix-home-manager; default-host has a default-value of #f, but leaving it at #f leads to your issue. Solution for now: add a default-host to your ssh-configuration
<jlicht>nefix: you could open an issue at guix-home-manager or if you're feeling adventurous: try to create a patch ;)
<vits-test>bhartrihari: did U play with the firewall recently?
<bhartrihari>vits-test: No. I can successfully ping the address.
<bhartrihari>vits-test: What do you suspect might be the issue with firewall? (Could be that my ISP messed up)
<vits-test>bhartrihari: i've same err on curl
<vits-test>port 80
<vits-test>what about https 443?
<vits-test>https works
<nefix>jlicht: ooooh, that's working now, thanks :D. I'm not sure that it's an issue; since home-manager manages your whole ~/.ssh directory, you need to provide a "default" ssh key
<bhartrihari>I had this problem on two devices at the same time.
<vits-test>bhartrihari: can U try `curl` ?
<bhartrihari>vits-test: It's the same error.
<bhartrihari>Connection refused when I try the curl command you suggest.
<vits-test>Guixen, any traceroute maniac around to help bhartrihari?
<bhartrihari>I can access from Icecat just fine.
<vits-test>bhartrihari: does `guix pull -l | grep URL` show https?
<bhartrihari>vits-test: yes
<vits-test>bhartrihari: does ping git.savannah... send packets to
<bhartrihari>vits-test: yes.
<civodul>mothacehe: should we add "build products" on cuirass for the binary tarball, and add it to ?
<civodul>that would allow us to call for testing on foreign distros
<vits-test>civodul: Hello: do U have any troubles now with `guix pull`?
<rekado>zimoun: I built the PDF manual successfully… but only with XeTeX.
<mothacehe>civodul: done!
<civodul>mothacehe: you're a hero!
<civodul>thank you!
<mothacehe>Cuirass is performing just fine with its database on a tmpfs mount
<mothacehe>too bad that mmaping the database file doesn't achieve the same result
<zimoun>rekado: on one hand nice! on the other hand I am more confused.
<civodul>mothacehe: that the GC is still running on that same machine certainly doesn't help i/o performance
<zimoun>civodul: does it make sense to deduplicate only once a week on Berlin and not every day? It would save IO and help perf when GC runs.
<bavier[m]1>brasero keeps segfaulting :\
<civodul>zimoun: i don't know; the plan is rather to do away with the single huge store
<civodul>it just doesn't scale
<emacsen>civodul, what would you replace it with?
<civodul>emacsen: we'd let the "guix publish" cache grow indefinitely but GC the store more aggressively so it remains smallish
<civodul>and build nodes would fetch stuff from "guix publish"
<civodul>IOW we wouldn't have to keep things in the store anymore
<mothacehe>we will need to limit the publish cache size somehow though?
<civodul>we'll set a TTL like we already do
<civodul>but we could perhaps make it a bit longer
<cbaines>This is something I've been thinking about, as with, I just have an ever growing S3 bucket of nars... I want to write a gc equivalent that works with a bunch of nar/narinfo files, rather than a store.
<civodul>so far we don't seem to be having scalability problems in "guix publish"
<civodul>"guix publish" GCs items from its cache periodically
<civodul>based on the chosen TTL and the atime (!) of narinfos
<cbaines>I'm still at the ideas phase, but I'd like to have something which would allow marking some nars as "roots" so they don't get deleted, so you can have more control over what's kept for how long
<cbaines>For, you'd probably want to keep substitutes for previous releases around for longer for example
<mothacehe>could we just set an infinite TTL on those nars?
<cbaines>From what civodul has said, I think there's just a single TTL, and the atime of the narinfos
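The mechanism under discussion, as a configuration sketch: the `guix publish` service keeps an on-disk cache and evicts entries based on the single TTL and the atime of narinfos. Field values below are illustrative and the exact field set should be checked against the Guix manual:

```scheme
;; Sketch of a "guix publish" service with an on-disk nar cache and a
;; TTL after which unused entries may be garbage-collected.
(service guix-publish-service-type
         (guix-publish-configuration
          (port 8080)
          (cache "/var/cache/guix/publish")
          (ttl (* 30 24 3600))))   ;roughly 30 days, in seconds
```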
<zimoun>hum? I am missing details about what “guix publish” serves compared to what the store is and what the nodes access (read/write).
<mothacehe>zimoun: some details here:
<jlicht>if we need older version of packages (read: with known vulnerabilities) to bootstrap newer versions of that package, should these be given the `hidden?' property?
<civodul>cbaines: having "roots" for those nars would be useful, indeed
<civodul>jlicht: yup!
<civodul>moin dustyweb!
<cbaines>zimoun, "guix publish" serves nars/narinfos from /gnu/store . When using Cuirass plus guix-daemon offloading, build dependencies get copied from the "head" node to the build nodes, and then the outputs get copied back.
<zimoun>mothacehe, cbaines: thanks. And where the GC is currently happening? On the “head” or on nodes?
<cbaines>zimoun, both
<mothacehe>but the nodes have way smaller disks
<zimoun>why is GC run on “head”?
<cbaines>zimoun, it's particularly costly on the head node though, as it has a much larger store, given it contains everything needed for builds, as well as things guix publish is serving
*civodul applies openocd changes from (heads-up embedded people!)
<civodul>cbaines, mothacehe: as a first step, we could also reduce the Cuirass TTL on berlin and allow nodes to get substitutes from it
<civodul>kind of a hybrid approach
<cbaines>zimoun, in the case of berlin, even though it has an enormous amount of space, it's not sufficient without frequent garbage collection
<civodul>that's because of all these ISO images etc.
<zimoun>ah yeah, I get it. :-)
<civodul>another way to look at it is that we could run "guix gc -F X" where X is 10 times the amount of space consumed daily
<civodul>that way, GC would only run every 10 days
<cbaines>I think the system tests contribute quite a lot, as each and every change to guix results in lots of big images for the system tests
<mothacehe>why nodes have disabled substitutes btw?
<civodul>because we don't want to import binaries from "elsewhere"
<civodul>and because "guix offload" sends them the prerequisites anyway
<mothacehe>cbaines: yes you're right system tests are very expensive
<civodul>the installation tests are expensive
<mothacehe>ok thanks
<civodul>the other tests are OK
<mothacehe>maybe we could limit install tests frequency
<cbaines>on the system tests, my thinking is that it would be more valuable to run those against patches, rather than on
<zimoun>by big storage capacity, what is the order of magnitude? Used? Free?
<mothacehe>cbaines: its valuable to know if they are broken on master rather than having to run them locally
<mothacehe>zimoun: /dev/sdb1 37T 35T 2.1T 95% /gnu
<cbaines>mothacehe, indeed, but there still might be some benefit in separating out builds that happen for substitutes, vs builds that happen so someone can see if they fail
<mothacehe>cbaines: yes agreed, to control their frequency for instance
<mothacehe>but running them against patches would of course be really nice
<cbaines>mothacehe, running them against patches is something I'm hoping to start doing over the next few days
<mothacehe>woo, nice!
<zimoun>mothacehe: thanks.
<mothacehe>email feedback on those tests, like on Linux, would be even better :)
<mothacehe>tricky part is that running install tests on every patch isn't realistic I guess
<cbaines>with the coordinator at least, all the builds can have priorities, so the expensive system tests can just have a lower priority
<mbakke>woo, I have a patch to make ungoogled-chromium load extensions from a Guix profile, and a package for uBlock Origin... still need to work out some details wrt extension signing and creating the extension directory in the correct "format".
<zimoun>civodul: does it make sense to try to StarPU-ize the guix-daemon used by the build farm?
<mbakke>I'll need some help from Mark and bandali to do the same for IceCat. Apparently Mozilla made it really difficult to "sideload" extensions in newer versions:
<mbakke>mothacehe: any thoughts on how to deal with ?
<mothacehe>mbakke: the only reason an evaluation gets aborted (yellow cross), is that Cuirass is restarted while the evaluation is on-going
<mbakke>mothacehe: sorry, I meant the "canceled" (sic) jobs, like here:
<mothacehe>so probably that those evaluations were taking ages (due to berlin I/O issues) and I had to restart Cuirass which caused them to be aborted
<mbakke>according to cbaines that can happen if the .drvs are garbage collected by the time Cuirass tries running those jobs
<civodul>zimoun: i'm not sure what you mean, but i think you're going too far :-)
<mothacehe>the only reason a job can be cancelled is that the derivations disappeared at Cuirass restart
<mothacehe>so cbaines is right I guess
<mothacehe>having the drv GC rooted would fix it I guess
<mothacehe>while increasing our space issues probably :(
<divoplade>Hello! There's something I can't explain with cuirass ^.^ I request that my packages be built for some architectures, but look at, for instance: i686-linux succeeded, but the other architectures are "pending"!
<divoplade>That's weird to me.
<mothacehe>divoplade: you need a machine that is able to perform native builds on those architectures for that
<mothacehe>you can either setup the daemon to offload on such machines
<mothacehe>or setup transparent emulation with QEMU binfmt service
<mothacehe>or do not build for those architectures if you don't need them :)
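The "transparent emulation with QEMU binfmt" option mothacehe mentions corresponds to a service along these lines; a sketch based on Guix's qemu-binfmt-service-type, with platforms chosen for illustration:

```scheme
;; Register QEMU user-mode emulators via binfmt_misc so the build
;; daemon can transparently run, e.g., ARM builds on an x86_64 host.
(use-modules (gnu services virtualization))

(service qemu-binfmt-service-type
         (qemu-binfmt-configuration
          (platforms (lookup-qemu-platforms "arm" "aarch64"))))
```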
<divoplade>mothacehe, my machine is natively x86_64, i686 is emulated! That's what I find strange.
<mothacehe>yes building i686 on x86_64 is possible by default
<divoplade>Ha so I need to use the qemu-binfmt-service-type with x86_64 too then?
<mothacehe>no x86_64 should work on x86_64 :p
<divoplade>That's what I thought!
<mothacehe>don't know why it doesn't start
<divoplade>I messed up things badly many times with cuirass config :(
<divoplade>I have done many rm -rfs
<mothacehe>yeah, it's quite hard to have it correctly setup
<mothacehe>but you're almost there!
<divoplade>I'm afraid the rm -rfs have left pieces of half corrupted DBs somewhere
<mothacehe>possible, you can try to wipe /var/lib/cuirass/*, limit builds to the architectures you're interested in and retry
<divoplade>I'm hoping on "eventual convergence", and it seems to work: I have at least 2 successful builds (although it can't find the log file x)
<apteryx>mothacehe: great work regarding this new image API/accompanying blog post!
<mothacehe>thanks apteryx :) Once 1.2 will be out I'll try to bring more improvements and update the doc/cookbook maybe.
<apteryx>sounds good!
<zimoun>civodul: I feel that the coordinator is reinventing the wheel about scheduling over a heterogeneous infra.
<shoxy>Hi, everyone! Does someone know a download link for the uncompressed iso? My hosting provider can attach this iso to my server, so I can install guix there. Of course, I could serve it myself somewhere, but this doesn't seem like a long-term solution. Thank you very much, and also for the beautiful OS ;)
<apteryx>mothacehe: I didn't read back too deeply, but what happens to Cuirass database when the machine reboots (since it's now on a tmpfs, IIUC) ?
<mothacehe>apteryx: lost for now
<mothacehe>I'm making periodical backups
<mothacehe>but that's really a broken approach
<apteryx>OK! Yeah, you probably cannot guarantee database integrity with backups unless we were using Btrfs snapshots
<jlicht>is there some QOS-like thing for IO? So you can reserve some IO for Cuirass?
<apteryx>sounds risky :-). Does the risk provide big gains?
<mothacehe>yes gains are huge, no more 504 errors, consistent query duration time
<mbakke>perhaps we can simply install an SSD on Berlin for the databases?
<cbaines>sqlite has backup support through .backup, so backups can be a thing
<mothacehe>mbakke: I was considering deploying Cuirass on one of the host machines, but this is really tricky
<mothacehe>and SSD on berlin could be a cheap fix
<cbaines>I thought the cuirass database was previously on an SSD?
<mothacehe>cbaines: yes periodical SQLite backup with vacuuming would be slightly better indeed
<apteryx>so the database had contention issues (too many queries made for the slow spinning disks medium?)
<mothacehe>cbaines: it's on the same drive as the store
<mothacehe>not an SSD I guess
<civodul>zimoun: the coordinator implements something that looks like "work stealing", which i think is appropriate here
<mothacehe>apteryx: the problem is not really the frequency/quantity of the requests
<mothacehe>it's more that on page fault, a query can take ages
<mothacehe>blocking a worker and causing worker starvation
<mothacehe>but perf'ing the whole thing could bring a better comprehension
<civodul>mothacehe: re sqlite performance, gc is still running (!); it would be good to check how it performs once gc is over
<divoplade>Guess who's been doing a good old rm -rf /var/lib/* x)
<mothacehe>civodul: I've observed inconsistent query durations and hangs on SQL queries while the gc wasn't running
<civodul>shoxy: links straight to the iso (not iso.xz, if that's what you meant)
<civodul>too late
<civodul>mothacehe: oh really?
<civodul>but that's also with many processes accessing it, right?
<mothacehe>yup, 3 days ago after gc'ing the situation was fine, then the installation tests started
<civodul>presumably that would no longer be the case if we switch to the coordinator
<mothacehe>and it went wrong
<civodul>oh well, i think those now run locally on berlin, now?
<civodul>i think apteryx changed that recently
<mothacehe>yes they do
<civodul>well, that's again a significant hit for i/o
<mothacehe>(and yes the coordinator would solve many of our issues)
<mothacehe>yes, i've been using iotop
<mothacehe>and system tests were causing a lot of pressure
<civodul>i forgot why tests were made to run locally, but we should consider revisiting that
<civodul>(it's also not great that we're hardcoding scheduling decisions in code...)
<apteryx>mothacehe: OK, thanks for the information. Unfortunately my knowledge about SQL(ite) is very thin, so I'll need to read on that 'page fault' topic.
<apteryx>oh, you meant page fault at the memory management level (kernel?)
<apteryx>that's weird. Seems something SQLite should be engineered to cope with well.
<apteryx>but if the undelying IO is dead slow, yes, I can see how that'd be a problem.
<mothacehe>yup turns out it's quite hard to have it behave correctly
<mothacehe>I have been tweaking most PRAGMA without success
<mothacehe>tried to force SQLite to mmap the whole database and disabled synchronisation
<mothacehe>which I hoped would be identical to the current tmpfs hack but with some disk backup
<mothacehe>without success
<civodul>cbaines and other db folks would say: just use postgres
<civodul>i guess they have a point?
<mothacehe>we were using SQL really badly
<mothacehe>and postgres wouldn't have helped
<mothacehe>we also had N readers and N writers and sqlite doesn't support this approach really well
<cbaines>PostgreSQL and Sqlite are pretty similar
<cbaines>as mothacehe says, the writing is one thing that's pretty different, as well as PostgreSQL having a client/server architecture
<mothacehe>but having Cuirass working really nice with a tmpfs backed database gives me some hope :)
<cbaines>It sounds like getting the database on to its own SSD is one direction that may pay off
<cbaines>does the relevant machine have any SSD's currently attached?
<mothacehe>we could ask rekado about it?
<civodul>i don't think it does
<mothacehe>new stuff at!
<civodul>mothacehe: woohoo!
<zimoun>cbaines: what is sqitch for in the coordinator?
<cbaines>zimoun, it helps with managing changes to the database schema
<civodul>hey lfam!
<lfam>I noticed the source code for the latest release (3.4) of x265 seems to have gone off the net. We still have it cached on, though
<lfam>I'm looking for a new source now
<zimoun>cbaines: changing from sqlite to postgres?
<cbaines>zimoun, no, just things like adding new tables, adding new columns, that kind of thing. Sqitch can support multiple "engines", so it works with both Sqlite and PostgreSQL. I have planned to allow using either for the coordinator, but Sqlite is the only one supported so far.
<apteryx>civodul: what was changed recently was the image generation no longer being offloaded, because that'd lead to transferring large (~700 MiB IIRC) images over the network (at least it did for me when using the childhurd service).
<lfam>Also, looking at the CI job for linux-libre@5.9.1, I see that the i686-linux build has failed
<zimoun>cbaines: thanks.
<apteryx>which probably isn't really practical for most users (my DSL connection uploads at 1 MiB per second, for example...)
<lfam>Hm, I also notice the aarch64 build of linux-libre-headers@5.9.1 failed to even set up the environment
<lfam>"while setting up the build environment: executing `/gnu/store/x3gq648qnfnla7nppyfjvj62s2i8y7rl-guile-3.0.2/bin/guile': No such file or directory"
<apteryx>civodul: is the keyboard layout working correctly for you in GDM? I remember this series: that aimed to address it, but it still won't work for me (see:
<lfam>I'm not sure what to make of this
<mbakke>apteryx: 700MiB is not that large... offloading a build of 'ungoogled-chromium' to a newly-GC'd node will send over something like 3 GiB, yet I don't think we should disable offloading for that package.
<mbakke>though I suppose it does not change as much as the installation tests.
<zimoun>cbaines: each agent has to run something, right? I mean the “guix-build-coordinator“ should be installed on each agent, right?
<apteryx>mbakke: 700 MiB may not be that large, but if you use childhurd to build stuff you'll want a much bigger image, in which case it may transfer GiBs.
<civodul>apteryx: i use en_US, so i don't know, but we can check in a VM i guess
<mbakke>apteryx: ah yes, transferring a single-use huge ISO image is a bit wasteful
<apteryx>civodul: yeah, en_US is what it defaults to, for anyone it seems :-)
<civodul>apteryx: but see also
<civodul>perhaps you specified an invalid layout and didn't notice?
<apteryx>I'm not sure why that image couldn't be a sparse image, containing just the data rather than the allocated space.
<cbaines>zimoun, yeah, there's one process "guix-build-coordinator" that's the coordinator bit. The agents each run "guix-build-coordinator-agent". Both those scripts are in the guix-build-coordinator package currently.
<apteryx>civodul: Oh perhaps! I was relying on the fact the system would scream at me if I did something obviously wrong in my keyboard-layout definition (I seem to recall it would)
<apteryx>let me check
<civodul>i think it screams, but only if the 1st argument is wrong
<mothacehe>apteryx: hurd images are now qcow2 so they don't get as big as the allocated space
<apteryx>OK, that's good! Was that always the case? Perhaps that 700-ish MiB was the actual size of the data
<mbakke>lfam: AIUI the 'no such file or directory' is because qemu-binfmt has been updated and the old 'guile' it was using got garbage-collected, and guix-daemon has not been restarted to get the new binfmt chroot.
<mothacehe>No, at some point it was a raw image; barebones Hurd images are now around 375MB
<zimoun>cbaines: I am taking a look at the code. Well, trying to understand all the recent discussions. :-)
<lfam>I see, mbakke. Do you have advice about what I should do here? Should I ignore the result? Wait for the daemon to be restarted?
<apteryx>OK! That's good. I'm still of the opinion that images are best generated locally, for bandwidth considerations (you have to transfer the whole contents of it... then generate an equally large image from it and transfer it back over the network). Now that genimage is used, it's quite cheap to generate locally too.
<isengaara>For the Online Guix Days I plan to do a talk about porting Guix to PowerPC/Power9 so that I can run the Guix System on an RYF-certified Talos II.
<cbaines>zimoun, cool, just let me know if you have any questions. The little documentation there is can be seen here
<apteryx>civodul: the keyboard layout used is defined as just "(keyboard-layout (keyboard-layout "jp"))". This gives me a en_US layout on the first GDM login.
<civodul>isengaara: nice!
<apteryx>But after logging in the layout is Japanese, and it sticks if I lock the screen.
<civodul>isengaara: there's a couple of people who've been looking at POWER9 here, on and off
<isengaara>Japan has US keyboard layout with Hiragana added
<civodul>zimoun, roptat: ↑ what's the procedure to submit a talk? :-)
<isengaara>There is already a port for 32-bit powerpc
<civodul>ah yes, that's efraim
<civodul>well, efraim is not the port, just the author
<civodul>apteryx: could you check the logs of the various keyboard-layout drvs?
<isengaara>So I could start with that one. The Talos II is able to run 32-bit software.
<civodul>there's also been work for ppc64
<civodul>i don't have pointers tho
<civodul>but if you haven't started, you should definitely find out
<mbakke>lfam: dunno... perhaps we could patch guix-deploy to always restart guix-daemon (or a select set of services)?
<mbakke>lfam: I restarted that derivation manually for now.
<lfam>Thanks mbakke
<apteryx>mbakke: I think it's part of the bigger problem of auto-restarting services that can be restarted without bringing down the whole shebang.
<apteryx>there must be an open issue for that
<mbakke>lfam: do you have the derivation for the full kernel build, not just the headers?
<apteryx>some essential services could be marked as 'not auto-restartable', but otherwise they'd be restarted?
<lfam>mbakke: I can't find it on the page :/ The pagination seems broken
<lfam>I will look again
<mbakke>lfam: if you have the kernel-updates branch locally, try running 'guix build -s aarch64-linux -d linux-libre'
<apteryx>civodul: is this the drv I should be looking at?
*lfam tries it
<apteryx>it is at least produced successfully (there's a resulting text file).
<lfam>mbakke: I think it's this: /gnu/store/sw4l3hvlxx35nlv4jh7pfm0z1c8wvvjj-linux-libre-5.9.1.drv
<civodul>apteryx: i think so; you should also check Xorg.*.log
<apteryx>mbakke: I just checked, there doesn't seem to be an issue tracking the auto-restarting of (already running yet upgraded) services.
<civodul>mbakke: i think we should add a way to ask "guix deploy" to restart certain services
<apteryx>Why not have Shepherd restart any upgraded services that are not known to be problematic? Isn't this done on other distributions when updating services?
*apteryx checks
<civodul>apteryx: shepherd cannot know what's "problematic" for the user on that day
<civodul>we can only restart services not currently running
<civodul>the rest is up to the user to decide
<mbakke>lfam: seems like a build job for that derivation is already in progress
<roptat>civodul, in the article we ask people to write to guix-devel, with a short description of the talk content
<sneek>Welcome back roptat, you have 1 message!
<sneek>roptat, jlicht says: I have some PoCs for features that imho would be interesting for the home-manager, but I would like to get your input on how to flesh them out, if possible.
<roptat>jlicht, sure
<zimoun>civodul, roptat: short, meaning 10 lines or more.
<apteryx>civodul: unattended-upgrades on Debian works this way (services are restarted automatically; the user can opt out by defining a blacklist of services to leave untouched):
<lfam>mbakke: I'm curious, are you using SRT for anything? I spent some time playing with it but that's all
<dustyweb>hi civodul !
<mbakke>lfam: no, I just noticed it was spewing errors in an out-of-tree ffmpeg user ... I don't think those patches fixed it though :/
*mbakke has to go
<apteryx>civodul: ah! my system config was lacking a `set-xorg-configuration' service; that must be why! I'm validating this.
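A sketch of the fix apteryx alludes to, following the Guix manual's keyboard-layout documentation: declare the layout once and hand it to Xorg (and hence GDM) via set-xorg-configuration. Field names match current Guix as I recall them; treat this as a sketch rather than a verified configuration:

```scheme
;; Excerpt of an operating-system declaration: without the
;; set-xorg-configuration service, GDM falls back to en_US.
(operating-system
  ;; ...other fields elided...
  (keyboard-layout (keyboard-layout "jp"))
  (services
   (cons (set-xorg-configuration
          (xorg-configuration
           (keyboard-layout keyboard-layout)))
         %desktop-services)))
```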
<PotentialUser-20>Hello folks, I am currently trying to set up the Slim display manager with exwm as window manager. I assume Slim needs a .desktop file to load the window manager, but I don't know where it should go. The guix documentation says to put it in the .guix-profile directory but it seems to be read-only. Any tips?
<mroh>PotentialUser-20: try installing exwm in the system profile. Slim should find the .desktop file then. read-only is ok, I guess.
<ebn>Hi! Second day on Guix: what is the preferred way to install software, globally by adding packages to my system config, or just with guix install? I previously used Nix, and I had all my software globally installed while having some environments (shells) for different projects, compilers, tooling, etc.
<vagrantc>it's really up to you
<ebn>decisions, the bane of my existence ;)
<vagrantc>i do find being able to install and manage packages without needing root access preferable, so try to keep the system profile minimal
<apteryx>ebn: I'd recommend installing the packages you need in your own profile. You can use a manifest file, which will allow you to finely control what the profile contains and recreate it easily.
<ebn>ok, thanks for the advice!
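A minimal sketch of the manifest approach apteryx describes, as documented in the Guix manual (the package names here are arbitrary examples, not from the conversation):

```scheme
;; manifest.scm: declare the packages for a profile so that the
;; profile can be recreated from this one file.
(specifications->manifest
 (list "emacs"
       "git"
       "ripgrep"))
```

Running `guix package -m manifest.scm` then installs exactly that set of packages into the profile, replacing whatever was there before.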
<helaoban>hello guix, I'm unable to build some common packages on emulated hardware on my system. For example running 'guix build --system=aarch64-linux ghc' fails with 'while setting up the build environment: executing `/gnu/store/x3gq648qnfnla7nppyfjvj62s2i8y7rl-guile-3.0.2/bin/guile': No such file or directory'.
<helaoban>it looks like it has something to do with not being able to find guile. any tips?
<vits-test>helaoban: today lfam faced that error. Try restarting the daemon or rebooting (the VM?)
<vits-test>* IDK
<rekado>re SSD on Berlin: I think the root disk is in fact an SSD.
<rekado>so… let’s move the db back onto the root file system.
<rekado>I moved the caches onto the external storage because we ran out of space on the root file system. That’s no longer a problem but would become one if all the caches are moved to the root fs.
<rekado>the external storage array consists of SAS disks; SSDs would have been prohibitively expensive.
<helaoban>vits-test: that seems to have worked. thanks!
<mfg>does guix have a function for replacing services in a list of services like %base-services? or do i first have to delete the service from the list and then add my version of it?
<vits-test>mfg: modify-services? Though the examples I'd seen work on every service of a given service-type (so fonts on all ttys will be altered, for example).
<mfg>ah, so if for example I only want 2 mingetty-service-types instead of the default 6, is that impossible?
<vits-test>better 3.
<mfg>i have never used the second one :D
<vits-test>mfg: I use a wrapper for `login` to autologin any user on any tty. But since I started using it, the fonts sometimes get messed up somehow; text looks like "ЫАin: |-1".
<mfg>is this inside of the tty then?
<mfg>if my terminal gets messed up `stty sane` works most of the time
<mfg>and thanks for modify-services, this seems like what I want to use; I had tried it before, but I guess I used it wrong and forgot about it...
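A sketch of one way to end up with fewer mingetty instances than %base-services provides: remove all of them with modify-services' delete clause, then add back only the ttys you want. This assumes the current mingetty-service-type and mingetty-configuration API from (gnu services base); treat it as a sketch, not a tested config:

```scheme
;; Replace the six default mingetty services with just two,
;; for use in an operating-system's (services ...) field.
(append
 (list (service mingetty-service-type
                (mingetty-configuration (tty "tty1")))
       (service mingetty-service-type
                (mingetty-configuration (tty "tty2"))))
 (modify-services %base-services
   ;; Drop every mingetty instance from the base services.
   (delete mingetty-service-type)))
```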
<divoplade>Save the planet, stop writing C++ :)
<mfg>doesn't that also count for 90% of all programming languages out there?
<roptat>save the planet, stop using computers
<mfg>true :|
<roptat>but yeah, C++ is particularly painful
<rekado>I think we can just stop after ‘stop’
<zzappie>wow, I just got a hash mismatch on a guix package during system reconfigure
*zzappie well, I guess it is a problem for tomorrow. Good night guix
<apteryx>civodul: woohoo, the gdm keyboard layout bug was just that, a mis-configuration on my part.
<zimoun>rekado: cp227.tcx is in texlive-kpathsea but I am not sure it appears in the texlive-union. Is it?