IRC channel logs



<roptat>Oh, that's wonderful too :)
<civodul>roptat: so i don't get it
<roptat>Me neither
<roptat>I need to sleep, I'll have a look tomorrow
<civodul>good night!
<civodul>roptat: i pushed a hack that appears to work
*civodul -> zZz
<Dynamicmetaflow>Has anyone tried to create a website that acts as an interface for guix commands?
<Dynamicmetaflow>I'm trying to create a form where a user selects different pre-defined packages and at the end would generate a profile-manifest and then guix would spin up a virtual machine with the packages selected
<quiliro>Dynamicmetaflow: cool
<ison[m]>Dynamicmetaflow: I'm sure there's ways to do that in almost any language, however if you can afford to choose what software you use I'd say the simplest solution would be to make the website itself in Guile. Because then you have full access to all of Guix's modules inside your server code. Guile ships with some basic web server functionality, but I would also suggest you look at Artanis. It's a complete web framework for Guile.
<Dynamicmetaflow>quiliro: hola
<Dynamicmetaflow>ison[m]: Yes, I have flexibility to choose the software to use. So you recommend to use a combination of it being written in Guile and use Artanis as a web framework to achieve this?
<ison[m]>Dynamicmetaflow: It wouldn't be a combination, it would be entirely Guile since that's what Artanis is written in. So basically your web server and package manager would speak the same language and can both be controlled directly from the same code.
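(A minimal sketch of the all-Guile approach ison[m] describes, using Guile's built-in (web server) module rather than Artanis; the port and response text are illustrative, not anything from the discussion.)

```scheme
;; Minimal Guile web server sketch.  Run with `guile server.scm`
;; and visit http://localhost:8080.  Port and reply are examples.
(use-modules (web server))

(define (handler request body)
  (values '((content-type . (text/plain)))
          "Hello from Guile!"))

(run-server handler 'http '(#:port 8080))
```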
<Dynamicmetaflow>ison[m]: Thank you for the clarification! I will spend some time and see what I can come up with
<quiliro>Dynamicmetaflow: what ison[m] says sounds very good to me... study artanis and you'll be able to build the web application for creating system configuration files
<Dynamicmetaflow>Yes, that sounds very good; that's what I'm going to do.
<Dynamicmetaflow>ison[m]: Thank you very much for the support. I'm going to dedicate some time studying it and report back what I've learned.
<quiliro>you'll have all the power of guix because everything would be in Guile
<Dynamicmetaflow>Yes, that would be cool. My goal is to be able to set up computers for non-profits
<Dynamicmetaflow>and provide software for them that way, and I thought that using guix/guile would be the best way to do it
<quiliro>i would like to install a program from source. is it easier to make a guix package or to build the executable outside of guix? it is compiled with qmake, but i don't see qmake in guix
<quiliro>i have been communicating with the kurso-de-esperanto developer
<quiliro>he has sent me the source code for testing it with a new version (with qt5)
<quiliro>(the old version was with qt4)
<ison[m]>quiliro: It's usually easier to get things working in guix by making a package. Especially since store paths can change and your build might eventually link to dead library paths. Building it through guix ensures it always links to the right places.
<quiliro>ison[m]: it is a temporary test...
<quiliro>ison[m]: i will eventually make the package for guix....what i am not sure is what is easiest now
<quiliro>especially since i do not know if qmake is available on guix
<nckx>quiliro: A package is always easiest.
<nckx>qmake is just a part of qt, which will be your first input ;-)
<quiliro>will you guide me?
*nckx → 😴
<quiliro>i will need make and qmake
<nckx>make is an ‘implicit’ input of the gnu-build-system, you can take it for granted (just like gcc, glibc, coreutils, &c.).
<nckx>Good luck!
<quiliro>trivial-build-system or gnu-build-system?
<quiliro>sneek: later tell Minall: Please let me know when you join the chat.
<rvgn>mbakke How's core-updates going?
<apteryx>lispmacs: maybe just wait until a Rust binary substitute becomes available
<quiliro>'sudo apt install build-essential qtbase5-dev qt5-default libphonon4qt5-dev qtmultimedia5-dev': these are the dependencies for a previous version of the package
<apteryx>lispmacs: or alternatively you could set up an offload machine that is beefier and have it build Rust for your underpowered netbook
<quiliro>how should i construct the package?
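(Roughly what nckx's advice works out to: a hypothetical package skeleton where qmake comes from the qtbase input and replaces the configure phase. The name, version, source, synopsis and license are placeholders, not the real package's values.)

```scheme
;; Hypothetical skeleton; every field value here is a placeholder,
;; and the phase list is only a sketch of the qmake-based build.
(define-public kurso-de-esperanto
  (package
    (name "kurso-de-esperanto")
    (version "0.0")
    (source #f)                         ; real origin goes here
    (build-system gnu-build-system)
    (arguments
     '(#:phases
       (modify-phases %standard-phases
         ;; qmake generates the Makefile instead of ./configure.
         (replace 'configure
           (lambda _ (invoke "qmake"))))))
    (inputs `(("qtbase" ,qtbase)))
    (synopsis "Esperanto course (placeholder)")
    (description "Hypothetical sketch of a package definition.")
    (license #f)))                      ; fill in the real license
```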
<Minall>Hello guix!
<sneek>Minall, you have 1 message.
<sneek>Minall, quiliro says: Please let me know when you join the chat.
<Minall>quiliro: How are you!!
<quiliro>Minall: good... I'm doing well! And you?
<gnupablo>Do I need to download Gnome to use it after installation?
<samplet>gnupablo: Did you install Guix System (the full operating system)? How did you install it?
<gnupablo>samplet: I did not! Just wanna know.
<samplet>Okay. It should be downloaded and installed during installation.
<gnupablo>samplet: Need to be sure.
<samplet>As long as you select it (or write it in your initial config), it will be downloaded and installed during installation.
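(The "write it in your initial config" part can be sketched like this; only the services field is shown, and the elided fields are marked. Modern configs use gnome-desktop-service-type from the desktop service module.)

```scheme
;; Fragment of a system config.scm; all other operating-system
;; fields (bootloader, file-systems, users, ...) are elided.
(use-modules (gnu))
(use-service-modules desktop)

(operating-system
  ;; ...bootloader, file-systems, users, etc. go here...
  (services (cons (service gnome-desktop-service-type)
                  %desktop-services)))
```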
<roptat>I think I know what happened, we're missing a / at the beginning of that expression
<rvgn>Hello Guix!
<rvgn>Can anyone help me, please?
<Gamayun_>rvgn: Hm... gpa works without issue for me. I did recently start a clean ~/.gnupg to get rid of old config files and cruft from previous versions. On a hunch, you could check if there are any gpa or gpgme config files in .gnupg that might be causing trouble.
<rvgn>Gamayun_‎ I see. I'll check now.
<rvgn>Gamayun_‎ I emptied ".gnupg" folder in home directory. Still same error.
<rvgn>Gamayun_‎ I emptied ".gnupg" folder in home directory. Still getting same errors. :(
<mattplm>Finally I can connect. ison[m] I saw your message in the logs. I sourced /etc/profile in my .bash_profile and yes there are things related to guix in $XDG_DATA_DIRS
<ison[m]>mattplm: So you're using the full Guix System distro? That's interesting. Do you have any fonts installed in your system declaration? And do you have fontconfig installed (do you see any output when you type fc-match for instance)?
<mattplm>Yep running GuixSD. When I type fc-match I get one font "Nimbus Sans L" "Regular"
<efraim>rekado: ant-bootstrap on core-updates builds fine on i686-linux
<efraim>rvgn: anything relevant in .config or elsewhere that maybe should be cleared?
<rvgn>efraim are you refering to gpa or libvirt?
<efraim>rvgn: gpa
<rvgn>efraim There is no file or folder in ".config" related to gnupg or gpa.
<efraim>i'm going to try './pre-inst-env guix environment --pure --ad-hoc gpa -- gpa'
<rvgn>In fact, I have never run gpa successfully even once. So no changes in settings could have been made?
<rvgn>efraim Okay. Thanks for trying.
<efraim>ok, that got me errors
<efraim>ok, i sent an email with the relevant error messages, i'll explore a bit more
<mattplm>ison[m] Ok so I installed font-dejavu and now everything works. Isn't that supposed to be installed with icecat? I'm more familiar with debian and apt, which install all the dependencies when you pull a package, so maybe I misunderstood how guix works.
<rvgn>efraim Thank you.
<efraim>'guix gc -R /gnu/store/ayni13xfwfm144b7c8cym1kmjznigag3-gpa-0.10.0 | grep gnupg' gave no results
<efraim>'./pre-inst-env guix environment --pure --ad-hoc gnupg gpa -- gpa' was enough to get it to work for me
<efraim>i'll update the bug report
<rvgn>efraim I see. So are you gonna patch the bug?
<efraim>rvgn: I'll see what's possible
<rvgn>efraim Cool! Thanks!
<efraim>what version of gnupg do you have installed?
<rvgn>Whatever is in the master branch with the latest commit. I just guix pulled a few hours ago.
<rvgn>efraim Wait a sec. I have not installed "gnupg" separately.
<rvgn>Some apps had the dependency though.
<rvgn>efraim Do I have to install gnupg separately? It should have been bundled with gpa because of dependency right?
<efraim>it seems gpa builds even if gnupg isn't an input
<rvgn>efraim Ouch! So gpa package has missing dependency?
<efraim>perhaps, it should be enough to wrap the binary with gnupg's path but I want to see what the other options are first
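(What "wrap the binary with gnupg's path" could look like as a build phase in the gpa package definition; this is a hypothetical sketch, not the fix that was actually committed, and the phase name is made up.)

```scheme
;; Hypothetical phase: wrap the installed gpa binary so that
;; gnupg's bin/ directory is always on its PATH at run time.
(add-after 'install 'wrap-gpa
  (lambda* (#:key inputs outputs #:allow-other-keys)
    (let ((out   (assoc-ref outputs "out"))
          (gnupg (assoc-ref inputs "gnupg")))
      (wrap-program (string-append out "/bin/gpa")
        `("PATH" ":" prefix (,(string-append gnupg "/bin"))))
      #t)))
```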
<rekado>efraim: it seems to be a problem with JamVM. I posted an update on the bug.
*rvgn has not slept yet. So gonna take a nap. 😴
<quiliro>Is it needed to run 'sudo -E guix system reconfigure config.scm' or 'sudo guix system reconfigure config.scm' only?
<grumbel>Trying to download a https:// url with youtube-dl gives me a "urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed" error, am I missing some package/env-var or is this a bug?
<quiliro>grumbel: if you send me an url, i can test too
<quiliro>tell me what version of youtube-dl you have
<quiliro>grumbel: then you can compare
<grumbel>Error happens with every https:// url, e.g. youtube-dl
<grumbel>youtube-dl version is 2019.06.21, this is on Ubuntu 19.04 with guix on top
<quiliro>is your date correct? date --utc
<quiliro>downloading correctly
<rekado>update on ant-bootstrap: I reduced the fix to something really bizarre: I added a long comment in a GNU Classpath file, which fixes the problem.
<rekado>if the comment is too short it’s not working.
<quiliro>$ youtube-dl --version
<grumbel>I fixed it now with: "export SSL_CERT_FILE=/gnu/store/yxpbyhy2024bj98hq87m1lagc4azzc9w-ca-certificate-bundle/etc/ssl/certs/ca-certificates.crt"
<quiliro>grumbel: :-)
<rekado>grumbel: I suggest not to set any variables to explicit /gnu/store locations.
<grumbel>yeah, that's the brute force workaround
<rekado>grumbel: instead install nss-certs into your profile and set the variables relative to your profile as suggested in the manual.
<quiliro>rekado: you mean guix install nss-certs
<rekado>on a Guix System this would not be necessary, but when using Guix on a foreign distro you need to follow the steps outlined in the Application Setup section of the manual.
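(Those steps from the manual's Application Setup / X.509 Certificates sections look roughly like this on a foreign distro; the paths assume the default per-user profile location.)

```shell
# Install the certificate bundle into the user profile, then point
# the TLS-related variables at the profile (per the Guix manual).
guix install nss-certs
export SSL_CERT_DIR="$HOME/.guix-profile/etc/ssl/certs"
export SSL_CERT_FILE="$HOME/.guix-profile/etc/ssl/certs/ca-certificates.crt"
export GIT_SSL_CAINFO="$SSL_CERT_FILE"
```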
<quiliro>rekado: ?
<grumbel>rekado: ok, thanks, that does the trick
<quiliro>so...should i use sudo -E or just sudo for reconfigure
<rekado>either should work fine now
<rekado>This is really cool:
<quiliro>on your link it is impossible to enter: Incorrect code entered
<quiliro>OSA has implemented a process that requires you to enter the letters and/or numbers below before you can download this article.
<quiliro>in the captcha
<quiliro>impossible for me to write this: íĂüøæ manually....copying will not be accepted
<quiliro>rekado: ^
<quiliro>what is it anyway?
<rekado>bleh, here on the institute network we get access to the paper without hassle.
<rekado>here’s a description:
<roptat>wow, that's really impressive!
<roptat>btw, I only had to enter ascii characters on that page to get access to the paper
<roptat>and for some reason it worked even without cookies or javascript...
<quiliro> (pdf here:
<ArneBab>In emacs gnutls-trustfiles, the entry is /gnu/store/qn1ax1fkj16x280m1rv7mcimfmn9l2pf-bash-4.4.23/bin/bash: python: Kommando nicht gefunden. (= command not found). Is a dependency missing here?
<dutchie> what's going on here? Why does it say there are packages to be upgraded (to the same versions, which i understand means that some input got updated) and then do nothing?
<efraim>java on core-updates on i686-linux fails at classpath-devel
<ArneBab>quiliro: wow, that’s just cool! I had not considered that deep learning can be implemented in optics, though it’s obvious now that I saw it.
<rekado>efraim: do you have a log?
<rekado>I wonder if it’s the same problem.
<quiliro>ArneBab: it was on the comments that of the last link rekado sent
<roptat>rekado, if it's just a comment, could it point to a compiler issue?
***amiloradovsky1 is now known as amiloradovsky
<efraim>rekado: I can get a log from bayfront
<efraim>bah, no i can't, it's too big
<efraim>/var/log/guix/drvs/p7/hcbp75dq79rmb863mq6yv75s4v01y1-classpath-0.99-1.e7c13ee0c.drv.gz for classpath-devel
<ison[m]>mattplm: The only other ideas I can think of are to check your icecat font settings for Latin (under language & appearance click Advanced), and also to possibly try installing some new fonts. There's been several people reporting similar font issues in icecat so you're not the only one.
<rekado>roptat: which compiler though?
<rekado>roptat: we see the same behaviour with gcc-4.9 or gcc-5.
<rekado>java compiler? There’s only jikes.
<rekado>I’m thinking it’s some optimization in JamVM.
<roptat>I'd say gcc, because you only modify C code by adding a comment, no?
<rekado>but if it’s a compiler bug then we have it in all versions of it.
<rekado>I find that hard to believe.
<rekado>especially since only the Java bootstrap is affected.
<roptat>mh... is there any difference in the output of gcc?
<rekado>difference compared to what?
<roptat>(I mean with and without the big comment)
<rekado>oh, I didn’t check.
<rekado>let’s see
<roptat>or the output of the compilation process until the error?
<rekado>I’m comparing lib/classpath/ between the variant with comment and the one without.
<rekado>they differ.
<rekado>they differ a lot according to diffoscope.
<rekado>but I don’t know if this ever was reproducible.
<efraim>your patch is for classpath-bootstrap?
<efraim>i got excited for a second, then I saw that ant-bootstrap was the one that failed on master for i686 and it doesn't make that one magically not fail
<roptat>rekado, can you send them to me? I'd like to have a look
<rekado>ant-bootstrap only fails because VMFile.exists returns true, even when files don’t exist.
<rekado>VMFile.exists is implemented in GNU Classpath.
<rekado>but it is mediated somehow by the JVM, so JamVM is involved here.
<rekado>on the C side of things (in Classpath) the return value is correct.
<rekado>JamVM is responsible for JNI, which is how the C stuff crosses over to the Java side.
<rekado>so it’s really not ant-bootstrap’s fault.
<rekado>it’s GNU Classpath + JamVM that display this faulty behaviour.
<roptat>could it be the JNI stuff is reading a source file and fails to parse it correctly somehow?
<rekado>Thankfully I really don’t know enough about this.
<rekado>we’re currently doing some tests with ungoogled-chromium to see if we should rather buy better SSDs or more RAM.
<rekado>the build already uses 32GB RAM.
<rekado>how do normal people build that browser?
<Gamayun_>I guess they don't...
<roptat>rekado, by using only one processor core
<efraim>newsboat jumped on the rust bandwagon :(
<rekado>roptat: but … so am I, no? Our package definition disables parallel building.
<roptat>ouch :/
<rekado>efraim: I want an implementation of rust on the Guile VM. Then I wouldn’t be bothered any more :
<jonsger>rekado: maybe they're using real workstations :P
<rain1>has there been any progress on the cargo networking problem?
<rain1>where rust things were hard to package because cargo was trying to download stuff
<lfam>Is anyone else having trouble connecting to <>?
<rekado>lfam: works fine for me.
<rekado>both http and https
<lfam>I can load <> but just <> and the manual pages don't load for me
<rekado>logs. is on bayfront.
<rekado> is on berlin
<rekado>what does resolve to?
<lfam>Hm, <> also doesn't work
<lfam>It resolves to on my end
<lfam>, that is
<Dynamicmetaflow>Hello #guix!
<roptat>lfam, what kind of message error do you see?
<roptat>error message*
*civodul prepares to reinstall hydra-slave{1,3} (armhf-linux build machines)
<roptat>(the ip looks correct)
<efraim>I would need everything in Cargo.lock, right?
<roptat>civodul, what hack did you use to make the pdf files work again on the website?
<roptat>I was thinking we were missing a / at the beginning of the regexp (^/[^/]+\.pdf$)
<lfam>roptat: The HTTP connection just times out
<roptat>weird, it works really well here
<rekado>lfam: what about ? Same server.
<lfam>rekado: That works fine for me
<civodul>roptat: see guix-maintenance.git commit 06b6e8bc718e4cb544da07946736bc6f71ac23f6
<rekado>lfam: hmm.
<roptat>civodul, oh
<lfam>I'm trying an SSH login just to see if the connection starts
<Dynamicmetaflow>Has anyone used the artanis web-framework? or used Guile for creating a website?
<rekado>Dynamicmetaflow: I’ve used haunt.
<rekado>(for a static website)
<rekado>Dynamicmetaflow: for other web things (as in mumi or cuirass) I’ve used Guile’s included (web …) modules.
<Dynamicmetaflow>Yesterday I asked #guix about how to create a form that someone would access from a website and have it generate a profile manifest, and ison[m] recommended artanis
<efraim>civodul: you didn't want to edit i2p to use with-directory-excursion?
<fxer>hi, how can ~/.guix-profile/manifest be applied to another system? `guix package -m manifest` returns error: Wrong number of argument.
<rekado>fxer: it’s not an input, it’s an output.
<lfam>Curl-ing the ipv4 address gives me " Failed to connect to port 80: Network is unreachable"
<fxer>rekado: oh, i guess i need to generate the input to use on another system then?
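(Right: the manifest you pass to -m is an input you write or generate yourself. A minimal sketch, with example package names rather than fxer's actual packages:)

```scheme
;; manifest.scm -- the package names below are examples only.
(specifications->manifest
 (list "emacs" "git" "icecat"))
```

Then on the other system: `guix package -m manifest.scm`.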
<lfam>Could it be blocked from the US or something?
<rekado>lfam: it really shouldn’t.
<lfam>Maybe my computer is just messed up
<rekado>let me check from fencepost
<lfam>rekado: It's just my computer
<lfam>It's working from another machine
<efraim>it works for me™
<lfam>false alarm
<samplet>On core-updates, I’m getting an error about re-exporting “AT_SYMLINK_NOFOLLOW” when building a “module-import-compiled” derivation (for a Shepherd configuration). Has anyone else seen the same problem?
<jonsger>btw nice achievement with roptat rekado civodul et. al :)
<rekado>replace rekado with bandali and it’s correct :)
<jlicht>rekado: I build ungoogled-chromium on a machine with 32GB RAM and a NVMe SSD, and it still takes forever
<rekado>jlicht: took me 28 mins on a server with a 4GHz CPU.
<davidl>I'm trying to fetch a git submodule in a package I'm defining but it fails with "fatal: not a git repository (or any of the parent directories): .git"
<bandali>rekado, <3
<rekado>jlicht: we’re now comparing CPUs again… ugh
<bandali>it was a team effort for sure
<jonsger>sorry bandali, rekado should get some honour for :)
<mbakke>rekado: How many cores? Note that the current ungoogled-chromium uses #:parallel-build? #f which is unfortunate.
<minall>Hello guix!
<rekado>mbakke: lscpu says 16 CPUs (one socket, 8 cores per socket, 2 threads per core)
<rekado>it’s an Intel Xeon Gold 6144 @ 3.50 GHz base frequency
<rekado>we were initially going for AMD CPUs at 2.00 GHz base frequency (2.55 boost) but with way more cores for more parallel builds.
<rekado>my eyes glaze over looking at all the possible combinations…
<rekado>libreoffice completed in 25mins
<mbakke>rekado: So you built it with --cores=16 on 32G RAM in 28 minutes?
<rekado>I didn’t pass “--cores”.
<rekado>the machine has 187G RAM, but the build consumed ~36G RAM peak.
<mbakke>rekado: Right, nice.
<rekado>disk performance seems not to be the bottleneck. /tmp was on spinning rust, RAID5.
<rekado>we could try again with /tmp on SSD, but I doubt it’s going to be much faster.
<rekado>seems CPU-bound.
<mbakke>For 187 GiB RAM I would use "TMPDIR=/dev/shm".
<rekado>lemme try
<rekado>building chromium again…
<mbakke>But build processes are rarely IO bound, I do it mostly to save the SSD...
<rekado>the unpack phase goes by more quickly…
<lfam>Well it may just be my setup but for some reason when I tether through my mobile phone I can't access
<lfam>A lot of my time online is spent like this so I'll need to figure out a resolution
<rekado>Wow. Slower CPUs with 192 cores, 3T of RAM, building in /dev/shm: libreoffice finishes in 6 minutes.
<jonsger>rekado: in 6 minutes, holy...
<civodul>efraim: re i2pd, dunno, maybe i overlooked that, but feel free to edit it if you think it's a good idea!
<civodul>sometimes i tend to get lost with all these reviews
<Dynamicmetaflow>rekado: Wait, libreoffice builds in 6 minutes with the hardware you mentioned?
<civodul>i just hope i don't contradict myself ;-)
<rekado>Dynamicmetaflow: yes.
<vagrantc>sounds suspiciously like substitute availability might be a little easier to come by in the near future...
<civodul>rekado: seriously?! woow
<Dynamicmetaflow>rekado: Is that your own personal hardware or is it some server etc?
<rekado>certainly not personal hardware :)
<jonsger>vagrantc: at least for amd64
<rekado>we have a few new servers here at the institute and I’m using them to get a feel for what kind of performance to expect from hardware that we have yet to buy.
<Dynamicmetaflow>Wonder how much a rig like that might cost
<rekado>Dynamicmetaflow: ~91kEUR for the 192 cores + 3TB RAM server.
<Dynamicmetaflow>Looking in the near future to hopefully spin up VMs for non-profits and such using Guix, and I hadn't thought about the time it would take to build things
<rekado>Dynamicmetaflow: but that’s our special price. We get like 60% off.
<Dynamicmetaflow>That's a little steep, damn,
<Dynamicmetaflow>Ok well, at least I know the option is available if we ever need it
<Dynamicmetaflow>rekado: Thanks for sharing that information
<rekado>looks like chromium is just a bad outlier because we had to disable parallel builds. Other builds seem to benefit a lot from having a virtually unlimited number of cores, more so than increasing single core performance.
<rekado>the new build farm nodes won’t have 192 cores; we’re looking at 2x24 cores per node. We probably won’t reach 6mins for libreoffice with them, but I bet we can do better than 25mins.
<vagrantc>no promises, but i might get my hands on a couple apm mustang boards with 16GB of ram that could be used for guix ... although i probably don't have the bandwidth to host them
<rekado>vagrantc: neat!
<vagrantc>for aarch64
<rekado>vagrantc: I think we should open up to the idea of paying for hosting of these machines.
<rekado>so far our limitation has not been available funds for buying machines but hosting them.
<civodul>rekado: in a previous life i looked at build scalability:
<Dynamicmetaflow>Are the 2x24 cores per node less expensive? Also what hardware are they running, curious for the future
<vagrantc>i wonder if the synquacer boards are available yet ... 24 cores, up to 64GB of ram
<rekado>Dynamicmetaflow: they are much less expensive, but much of the difference is from using less RAM (128GB instead of 3T). The official price from Dell is 15kEUR per server. If you go single CPU, 32 cores, 64GB RAM it’s below 8kEUR.
<Dynamicmetaflow>Hmm that's alot more reasonable
<Dynamicmetaflow>rekado: thanks for sharing
<rekado>no problem!
<rekado>there are probably cheaper options than Dell. Supermicro is pretty cheap.
<rekado>We go with Dell because the institute has negotiated really good discounts.
<Dynamicmetaflow>That's great! Although if I were spending 15k or so on hardware I would be interested in checking out the systems from raptor,
<Dynamicmetaflow>For Coreboot / security reasons
<rekado>looks like building with TMPDIR=/dev/shm is slower than building directly on SSD.
<rekado>libreoffice with /tmp on SSD takes 5m48sec, on /dev/shm it’s 6m8sec.
<civodul>surprising indeed
<rekado>the SSDs are connected via a caching RAID card, but I didn’t think it would be faster than /dev/shm
<civodul>when there's a lot of RAM, most data doesn't actually reach the disk anyway
<jonsger>rekado: I think programs like libreoffice with a lot of templated C++ tend to be compute bound
<mbakke>rekado: When the build farm can handle ungoogled-chromium without running out of memory all the time, we should certainly enable parallel builds again :)
<Dynamicmetaflow>Does anyone have examples of websites that were built with Guile and possibly use artanis? Or does someone know if is built with guile, and is the source available?
<rekado>mbakke: the problem is that I can’t guarantee that *only* ungoogled-chromium would be built on a given node.
<rekado>memory is certainly enough for ungoogled-chromium, but I don’t know if it would still be the case if we build 8 or 16 things at the same time.
<mbakke>Right. Maybe we need smarter offloading hooks.
<minall>quiliro: How are you!
<mbakke>Currently, all the nodes run only one job at a time, right?
<mbakke>If we are increasing max-jobs, it's good to decrease --cores at the same time.
<rekado>no, different nodes get different numbers of max jobs
<rekado>gotta check maintenance.git
<mbakke>rekado: So the problem is that the nodes get multiple jobs that all use --cores=$max.
<mbakke>I think reducing --cores on the build machine daemons to use $half will solve many performance problems.
<mbakke>(--cores=max and --max-jobs=1 is the guix-daemon default)
<rekado>ooh, ungoogled-chromium finished in 6m11secs. With 96 cores (as we’ll have in the future), /tmp on SSD.
<rekado>it uses 103GB of RAM peak.
<vagrantc>what a waste of ~2.9TB of ram!
<mbakke>I've noticed the offloading hook does not forward --cores, is that a bug or a "known limitation" ? :P
<rekado>vagrantc: we bought that one for R. R needs every GB of it :)
<rekado>mbakke: let’s call it a bug.
<civodul>mbakke: it's a feature
<civodul>to me, you'd configure the build machine with the right --cores and --max-jobs
<civodul>so the head node can be oblivious to these details
<civodul>then again, the limitation is that max-jobs is per-session, it's not global
<mbakke>I limit --cores on all my build machines, and then tune --max-jobs and the offload configuration so that load never exceeds $amount_of_cores.
<civodul>'guix offload' also checks the load before sending a build over
<mbakke>I've tuned it to not send new jobs when the load is >75% :)
<mbakke>civodul: The problem on berlin is that a node can get multiple jobs before it starts getting any load, and then you'll suddenly have ungoogled-chromium and libreoffice both trying to use all cores.
<mbakke>rekado: I think limiting --cores on the Berlin builders to roughly $physical_cores / 2, and set max-jobs to 2 or 3, will greatly improve the scheduling efficiency.
<mbakke>That's what I do anyway, and never had a problem with chromium or mariadb, apart from those I cause myself :P
<rekado>mbakke: I have no preference here. It’s fine by me if you go ahead and change that in machines.scm.
<civodul>mbakke: that can happen, but how likely is that?
<civodul>i think at some point we switch from looking at the load over the last 15mn to looking at the load over the last 5mn or something
<civodul>because you could have the opposite problem: resources that'd be underused
<mbakke>civodul: max-jobs 3 solves under-utilization in the scenario described above... then you'll have two jobs each using $half cores each, and the Guix scheduler will wait until the load drops below a configurable threshold before sending the third.
<mbakke>Occasionally, you'll get three jobs each using $half cores, but that's certainly better than two using $all.
<mbakke>rekado: What are the hardware specs of the current builder? In order to implement this scheme, the configuration will need to know the amount of cores on each builder.
<civodul>note that <build-machine> also has its own notion of 'parallel-builds', which is strictly enforced
<civodul>there's no shortage of parameters to tweak ;-)
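(For reference, the parameters being tweaked live in /etc/guix/machines.scm on the head node; an illustrative entry along the lines of mbakke's scheme, where the host name, user and host key are placeholders:)

```scheme
;; Illustrative offload entry; name, user and host-key are
;; placeholders, and parallel-builds follows mbakke's suggestion.
(build-machine
  (name "")             ; placeholder host
  (system "x86_64-linux")
  (user "offload")
  (host-key "ssh-ed25519 AAAA...")       ; placeholder key
  (parallel-builds 3))
```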
<rekado>mbakke: different nodes have different CPUs and RAM here.
<rekado> has 8 cores at 2.3GHz max, 64GB RAM.
<rekado>others like .141 have the same CPU but only 16GB RAM.
<rekado>I think most of them have the same CPU.
<rekado>the one node with more cores is not currently in use :-/
<mbakke>Maybe it's enough to limit the 16GB RAM machines for now.
<mbakke>Maybe `guix deploy` could gather some information about the target system, and we could use that for the guix-daemon parameters? :-)
<rekado>I’ll make a list of the 16GB nodes
<mbakke>rekado: Thanks!
<rekado>how many builds would we like to have on the 16GB nodes?
<rekado>just one?
<mbakke>rekado: Try three jobs with --cores=4.
<mbakke>Guix will not send it the third job if it has a high load.
<rekado>I need to reconfigure these nodes then to override “cores”.
<rekado>how many parallel builds on the 64GB nodes then?
<mbakke>rekado: indeed :/
<mbakke>rekado: With 8 cores each having 8GB RAM, maybe the default is OK.
<mbakke>I think 3-4 GB RAM/core is a kind of "sweet spot" for a build machine. Berlin has two extremes :P
<mbakke>Maybe 6 GB actually, considering chromium.. That reminds me that I need to configure those Zabbix screens :P
<civodul>what's the package that corresponds to 'nss-certs' on Debian?
<mbakke>civodul: 'ca-certificates'
<civodul>thanks :-)
<rekado>mbakke: is that with hyper threading or physical cores?
<mbakke>rekado: I treat HT cores the same as physical cores on builders.
<rekado>for the new 1U servers we’ll have 4GB per physical core (32 cores, 128GB RAM); for the 2U servers we’ll have about ~3GB per physical core (2x24 cores, 172GB RAM)
<mbakke>That sounds awesome :-)
<rekado>RAM info is now in maintenance.git
<rekado>we have some custom modifications to that file on berlin, which I’ll merge in next.
<civodul>impressive specs!
<rekado>the current numbers say: 6x 2U and 28x 1U.
<civodul>crazy stuff
<rekado>I also noticed why download speeds for berlin are so disappointing: we have 2x 10G network ports, but a) only one of them is connected and b) it’s connected to a 1G switch…
<civodul>we're gonna have to have a party in Berlin
<civodul>now we know :-)
<rekado>what I really need is a way to configure bonding declaratively.
<rekado>even if it’s just a bit of Scheme over some NetworkManager invocation.
<rekado>for the build nodes that would also be really handy as we don’t have enough money for 10G switches, so we want to bond all their 4(?) network interfaces and connect them to separate 1G switches with dual 10G uplink.
<rekado>if someone is looking for a little useful project I can recommend improving the networking system services.
<mbakke>I do not have bonding hardware, but would be happy to help configure bonding on Berlin.
<mbakke>I think all the cool kids use 'libteam' these days.
<rekado>mbakke: I’d like to do this in the operating-system configuration.
<rekado>don’t want to do this manually on all those nodes.
<mbakke>rekado: why, of course :-)
<mbakke>It would be good to have "clustered" system tests... multiple VMs connected to the same network(s).
<mbakke>But I think it's possible to do a virtual bonded interface on a single machine, at least with OpenvSwitch... Will try it out.
<mbakke>Fun fact: Open vSwitch does an awesome job at LACP (bonding), but there is nearly no documentation for it.
<jlicht>welp, I'm a bit out of my depth regarding us being able to use an unpatched network manager with support for vpn plugins in Guix system. Could someone perhaps help out by clarifying the exact issue I am trying to present? Either my ability to explain how guix works or my expressiveness in English seems to be lacking a bit :/
<samplet>jlicht: IIUC, you might need to reiterate that the plugin directory will change depending on which plugins are included, so in order to avoid recompiling NM every time you change the set of plugins, you need to be able to pass the plugin directory to the daemon at runtime.
<quiliro>minall: hello
<quiliro>minall: how are you?
<minall>quiliro: I'm heading to my home in some minutes, I can connect at 2:30 - 3
<minall>I made some contributions to the fsf
<minall>also I downloaded the linux kernel; it is downloading now. When I get home, I'll load it on my pc too
<minall>And I found this:
<quiliro>we'll connect at 15:00
<minall>See you later!!
<minall>that page is very interesting
<minall>quiliro: check it out!
<quiliro>Reverse Engineering Software...not hardware
<quiliro>one may reverse engineer the driver, which is software
<quiliro>lets check the document
<quiliro>see you later
<Dynamicmetaflow>Is there a way to have one master guix installation that runs on a server and have virtual machines that have guix installed but use the /gnu/store from the master guix installation?
<Dynamicmetaflow>Could guix publish be used in the example I gave above?
<civodul>roptat: did your recent bug reports stem from reading ? :-)
<civodul>i find this kind of discussion rather insightful
<civodul>it shows what the pain points are
<roptat>civodul, yes
<civodul>though i don't think "guix package -f foo" produces no output, as the person wrote
<civodul>i think it throws an error that's maybe hard to grasp
<roptat>no, there's really no error message
<roptat>touch test.scm; guix package -f test.scm returns nothing and status 0
<civodul>oh, fun
<Dynamicmetaflow>In case anyone was interested in my prior message: I found a thread on the mailing list that talks more about leveraging a GuixSD host's store in VM images
<bavier>I so often hear "still many packages missing", but most people don't bother to enumerate them
<str1ngs>roptat: is that because the package symbol is not at the end of test.scm?
<roptat>str1ngs, yes, that happens whenever the file returns nothing
<roptat>although guix build -f test.scm prints a difficult to understand error message about #<unspecified>
<roptat>at least there's something :p
<str1ngs>roptat: just my opinion here. I think -f might want a package selector flag. such that you can select the package and the file can then have more then one package in it. just my two cents
<str1ngs>so something like guix pakcage -f ./foo.scm --use my-package
<str1ngs>minus my typos of course :P
<str1ngs>I used long option for the example
<roptat>I don't know how all that is implemented, but it could be as simple as loading the file as a library, like -L
<roptat>so rather guix package -f foo.scm my-package :)
<roptat>but I don't know if we really want to change that
<str1ngs>yeah sometimes changing is not good. but also might help to put -f and -l flags on part
<str1ngs>since that is annoying at times. when do I use -l, when do I use -f
<str1ngs>on par*
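[For context, a sketch of the shape `guix package -f` expects, assuming the stock `hello` package from `(gnu packages base)`: the file's last expression must evaluate to a package object. A file that doesn't — an empty file, or one ending in a `define` — is what produced the silent no-op described above.]

```scheme
;; test.scm — sketch of a file that `guix package -f test.scm` accepts.
(use-modules (gnu packages base))

;; The final expression must evaluate to a package; ending the file
;; with only definitions (or leaving it empty) yields no package at
;; all, which at the time meant no output and exit status 0.
hello
```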
<roptat>if something is in propagated-inputs of an input, is it in %build-inputs?
<mbakke>roptat: yes
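[A hypothetical pseudocode sketch of what that answer means — package names are illustrative and the boilerplate fields are elided: when an input's definition propagates another package, both end up visible in `%build-inputs` of the dependent build.]

```scheme
;; Pseudocode sketch with old-style labeled input lists.
(define libbar (package (name "libbar") ...))

(define libfoo
  (package
    (name "libfoo")
    ;; libbar is propagated: anything that takes libfoo as an
    ;; input also gets libbar.
    (propagated-inputs `(("libbar" ,libbar)))
    ...))

(define app
  (package
    (name "app")
    (inputs `(("libfoo" ,libfoo)))
    ;; During app's build, %build-inputs has entries for both
    ;; "libfoo" and "libbar".
    ...))
```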
<quiliro>how to start reverse engineering drivers? especially video
<civodul>roptat: i still find it amazing that people would not report such issues
<roptat>me too
<civodul>i mean, if you're writing package definitions, then you're no longer really a passerby
<civodul>also, i was reminded today of how hard it is to communicate with our users
<civodul>a colleague of mine was running the daemon from 0.15.0, which would talk to
<civodul>so obviously, they were no longer getting substitutes
<civodul>but they hadn't realized what was going on
<civodul>we came to the conclusion that we should have an "Upgrading" section next to "Installing" in the manual
<bavier>does the daemon not emit a warning if the substitute server cannot be reached?
<bavier>oh, well, I suppose does still respond, hmm
<civodul>bavier: yes, it still responds
<civodul>i thought it would ease transition...
<bavier>civodul: seems reasonable, but probably no good way to distinguish between "responds" and "responds but doesn't have recent substitutes"
<roptat>rekado, there's a huge difference in sym.Java_java_io_VMFile_exists between your two files, in terms of assembly code
<roptat>the modified version (with an additional comment) is way smaller
<roptat>mh... did you use a different optimisation level?
<rekado>roptat: let me double check
<rekado>roptat: aren’t they the same size? Both 56K?
<roptat>but they are also very different
<rekado>is it just padded?
<rekado>yes, I saw that.
<rekado>without comment is /gnu/store/wjvv9g23vqpx4zp7vpnkpbzb65dj74ql-classpath-0.93
<roptat>without the comment: and with it:
<rekado>with comment is /gnu/store/wwkvcr0x04ilbppipid97b5i8qfjffpw-classpath-0.93
<roptat>yeah, that's what you sent me
<rekado>I just rebuilt them
<rekado>same hash.
<roptat>mh... that's very weird, how can a comment change so much code?
<rekado>well, no, I didn’t rebuild them. It’s what “guix build” returns for the same change.
<roptat>btw, is completely blank
<rekado>looks like a temporary failure. Connection to Debbugs times out.
<rekado>it’s about time I overhaul mumi to just fetch all emails and store them locally.
<rekado>(help welcome)
<roptat>you're commenting the next line!
<rekado>just added text
<roptat>mh... but the next line is commented out
<rekado>no code changes.
<roptat>.* matches the last \n
<rekado>oh… let me check
<rekado>I’m a sleepless idiot.
<rekado>I’m confused though: why does the size of the comment matter then?
<rekado>or is it just coincidence?
<roptat>another idea my colleague had is that there is line number / character position embedded in the file and that's somehow used by the JNI wrapper
<rekado>I expected something fancy in the JNIEXPORT or JNICALL macros, but I couldn’t find anything.
<rekado>JNIEXPORT is only used as a tag at build time to make sure that all JNI methods actually have implementations.
<rekado>(because some of these declarations are generated automatically)
<roptat>well, I don't know what's going on...
<roptat>what if you add a \n at the end of your patch, to prevent the next line from being commented out?
<rekado>thank you for noticing the effect of my comment, though
<rekado>commenting that line could have some actual effect
<roptat>it happened to me once or twice :)
<rekado>to me as well! But I guess I really shouldn’t do this when I’m frustrated and it’s late.
<rekado>I’ll build again with a trailing \n to see sanity restored
<rekado>with the \n the ant-bootstrap build fails.
<rekado>so, no magic after all.
<rekado>commenting that line would have the effect of introducing a memory leak, no?
<roptat>I think so
<rekado>so, I rebuilt again with the replacement being “\n//”, i.e. just commenting the next line. And it works.
<rekado>I must have made a mistake when editing the replacement which gave me the wrong and insane impression that the length of the comment matters.
<rekado>I’m relieved that this is not the case, but also a little angry that this fooled me :-/
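[A hedged sketch of the pitfall above — the file name and pattern are illustrative, not the actual patch: `substitute*` replaces the matched text in place, so a pattern ending in `.*` eats the rest of the line, and a replacement that starts with `\n//` ends the current line and leaves `//` glued onto the start of the following source line, commenting it out.]

```scheme
;; Pseudocode sketch; file name and regexp are illustrative.
(substitute* "java_io_VMFile.c"
  (("\\(\\*env\\)->ReleaseStringUTFChars.*")
   ;; "\n//" terminates this line and deliberately comments out
   ;; the next one; a replacement without its own "\n" handling
   ;; is how a line can get commented by accident.
   "\n//"))
```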
<rekado>I’ll look up the definition of ReleaseStringUTFChars to see why removing it might fix things.
<rekado>yay, that’s where it’s at: Classpath doesn’t define it. It’s defined in the JVM.
<rekado>so classpath doesn’t do anything wrong. It must be a problem in how JamVM is compiled.
<rekado>that method is defined in jamvm-1.5.1/src/jni.c
<dwagenk_com[m]>Hey there! I've had some conversations with roptat and others a few days ago.
<roptat>what was it about?
<dwagenk_com[m]>I'm still trying to understand where the edge of what can be configured with a system declaration is
<dwagenk_com[m]>I reread today, and it mentions (in the section "The self-reproducing live USB") using guix system disk-image to create a copy of the whole OS incl. e.g. PGP keys and email, so some stuff from the user's $HOME dir
<dwagenk_com[m]>Will each user profile be included in a disk-image (or v, or whatever is used to play around with guix system)?
<roptat>I'm not entirely sure how you would do that tbh
<dwagenk_com[m]>or is ambrevar referring to a setup where his PGP keys and email storage are also defined in Guile code?
<rekado>ReleaseStringUTFChars really just calls “free” on the chars.
<rekado>nothing magical
<dwagenk_com[m]>I mean, functional package management in itself is a great step forward from what I'm used to, with my systems cluttering up over time under traditional package management approaches.
<dwagenk_com[m]>I've worked with embedded linux (yocto/openembedded) in the past and it's possible (and common) there to also configure the users (which in an embedded device are a little different from general purpose computer users, more like software roles, comparable to the guixbuild users).
<roptat>I don't think config.scm is meant to modify /home or /root directly, but you can probably use a special-files-service-type
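[A sketch of that suggestion — the target path and file name here are made up: special-files-service-type, which Guix itself uses to provide /bin/sh and /usr/bin/env, can be extended to place files at arbitrary fixed paths outside the store.]

```scheme
;; Sketch: extend special-files-service-type with a list of
;; (target-path file) pairs; path and file are illustrative.
(simple-service 'root-gitconfig special-files-service-type
  `(("/root/.gitconfig" ,(local-file "gitconfig"))))
```

This would go in the `services` field of the operating-system declaration; note it installs system-wide files at boot, not per-user $HOME content.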
<dwagenk_com[m]>and combining that (configuring users $HOME) and the guix way of package management would be really great!
<roptat>I need to go to bed now, see you!
<dwagenk_com[m]>see you
<dwagenk_com[m]>will look into your read-only home, to understand a little more of what you're doing there
<civodul>is Savannah down?
<bavier>civodul: I've gotten "failed to connect" for about the last 30 minutes
<rekado>debbugs too
<bavier>no, sorry, about 90 minutes
<rekado>and as well…?
<rekado> as well
<rekado> works though :)
<rekado>just in time, eh?
<rekado>so, with the “fixed” classpath-bootstrap I can get right up to the build of the first icedtea, but it fails during the configure stage while “checking if the VM and compiler work together”. Aborts with Illegal instruction :-/
<rvgn>Hello Guix!