IRC channel logs

2020-06-22.log


<Kozo>Is it OK to ask an emacs question here since most people use it?
<lfam>Sure kozo
<Kozo>I am writing in CL using Emacs + Slime. Sometimes I have to close emacs and open it to clear its loaded buffer as it remembers previous values from C-c C-k. Is there a way to clear it on each compile?
<nckx>raghav-gururajan: With some luck I am starting the last wpewe build (now with #:tests? #f).
<jonsger>mbakke: that's crazy. But why does chromium take sooo much longer than icecat?
<mbakke>jonsger: I think the Chromium codebase is much larger than IceCat/Firefox, and it makes heavy use of C++ templates, to the point where it can only build with a bleeding-edge Clang :/
<jonsger>ah those c++ templates are pretty nice for debugging (sarcasm off) ^^
<mbakke>up until version 80 chromium had a flag called "jumbo_build" that drastically improved build times by compiling multiple targets in one go, but they removed it, almost doubling the number of compilation steps :/
<mbakke>for my current chromium 83 build, there are 39100 steps, and with jumbo_build it was ~23k
*mbakke looks at numbers flying by ... 9815/39100
<leoprikler>You know that you have reached Enterprise Quality Coding, when merely unpacking your code takes up to two minutes and longer.
<jonsger>mbakke: do you have a publish server in front of your xeon server?
<NieDzejkob>mbakke: what's the deal with #41828? ludo LGTM'd, and one of the patches was applied, but the other wasn't
***familia_ is now known as familia
<mbakke>jonsger: I do, but it rarely has substitutes for 'master', so I'm hesitant to recommend it as a substitute server to others
<mbakke>NieDzejkob: ah, I had a change of heart wrt the other patch, but forgot to update the issue, thanks for the reminder
*mbakke can't sleep because of heat
<bdju>looks like next (browser) fails to build
***terpri__ is now known as terpri
<raingloom>hey, how come $MANPATH is not set in --ad-hoc environments?
<vagrantc>i've had the experience that installing man sets it ... if i remember correctly
<vagrantc>e.g. guix environment --ad-hoc man PACKAGE1 PACKAGE2
<vagrantc>python worked that way, too
<raingloom>vagrantc: but if $PATH and others are set, why not $MANPATH? it's weird and confusing...
<vagrantc>raingloom: just saying what i recall observing, i hear you
<raingloom>vagrantc: it looks like this works: guix environment --ad-hoc socat man-db -- man socat
<vagrantc>raingloom: i guess man-db package, not man ... but yeah, basically what i mentioned
<apteryx>raingloom: the search-path specification definitions are usually made on the packages that make use of those paths. In the case of manpages, that's man-db (which is the man reader). So unless you install man-db in a profile, MANPATH is not set.
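What that looks like in practice: search paths are declared on the consuming package via native-search-paths. A minimal sketch, assuming man-db-like values (they are illustrative, not copied from the actual Guix definition):

    (use-modules (guix packages)
                 (gnu packages man))   ; for man-db

    ;; Sketch: the package that reads the files declares which variable to
    ;; set and which profile sub-directories should feed into it.
    (define my-man-db/manpath          ; hypothetical variant
      (package
        (inherit man-db)
        (native-search-paths
         (list (search-path-specification
                (variable "MANPATH")
                (files '("share/man")))))))
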
<raingloom>apteryx: i see. i guess that makes sense, since `guix environment` doesn't know that man is installed in the calling environment. still, this is quite suboptimal. this is where a union file system would be better than this search path trickery.
*raingloom crawls back into the Plan 9 VM it's tinkering with
<apteryx>raingloom: there's already a union fs view of the profile, but you still need to get the tools to find them, which requires environment variable unless you use the FHS.
<apteryx>perhaps you mean that there should be a mechanism to set variables based on the presence of files in this union... that could work but would set a plethora of things that might not be relevant, which seems dirty.
<raingloom>apteryx: or the host environment could contain some kind of hint for what variables should be updated
<raingloom>but yeah, this seems to get tricky.
<raingloom>maaaybe if the whole system could be an overlay file system. i'll see if i can build something like that on Plan 9. but it might be way too much effort on Linux.
<apteryx>raingloom: I encourage you to think about it :-), environment variables have limits and can't be replaced for the running session, which is annoying at times.
<NieDzejkob>I was thinking about this lately
<NieDzejkob>we could ship a loadable bash extension that uses inotify or signals to reload the environment variables the moment the profile file changes
<NieDzejkob>also, perhaps environments should also take into the account your active profiles to check which variables should be set?
<NieDzejkob>(of course, unless --pure is passed)
<apteryx>NieDzejkob: regarding "merging" profile variables, I believe this is an old bug
<apteryx>see #20255
<apteryx>Ludovic had a patch but it was rejected by the main commenter, so it stalled. Perhaps we should relaunch the discussion.
*apteryx zzzz
***terpri__ is now known as terpri
***nikita_ is now known as nikita`
<civodul>Hello Guix!
<efraim>hi!
<civodul>wazup, comrades?
*civodul tries Guile 3.0.3 for 'guix pull'
<efraim>what's the commit message for fixing ("foo", foo) to ("foo" ,foo)
*efraim is touching the rust crates again
<civodul>heh
<civodul>"move unquote next to the expression it applies to"?
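For anyone following along, the change being discussed has this shape (the crate name is made up); both forms read identically, the fix is purely about placement of the unquote:

    ;; Before: the comma visually sticks to the string...
    (inputs
     `(("rust-foo", rust-foo)))

    ;; After: ...but it always unquotes the following expression, so the
    ;; conventional style is to write it next to that expression.
    (inputs
     `(("rust-foo" ,rust-foo)))
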
<cbaines>Does anyone know about binutils? Specifically about what as -march=armv7-a means and why it might not work
<cbaines>I'm looking at the issue reported here https://issues.guix.info/issue/34872
<efraim>are you looking at go?
<cbaines>Yeah
<efraim>I have a hack for aarch64 go locally which I don't really like
<efraim>basically we're building go and go packages for aarch64 with #:system armhf-linux
<civodul>"as --help" lists supported CPUs for -march, and armv7-a is not one of them
<efraim>my hack cross builds from armv7 to armv8 for aarch64
<civodul>wait, i'm running it on x86_64
<efraim>when really we should be doing that in the latest go compiler
<efraim>then again, the hack works well enough to make most of the packages which currently fail to build to actually build
<efraim>looks like I didn't save the part generalizing it
<efraim>basically for aarch64 unset GOBIN and set GOARCH to arm64
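A rough sketch of that kind of hack, written as an extra build phase on a hypothetical aarch64-only variant of go-1.13 (this is not efraim's actual patch):

    (use-modules (guix packages) (guix utils)
                 (gnu packages golang))

    ;; Hypothetical: build natively for arm64 instead of targeting armhf.
    (define go-1.13/arm64
      (package
        (inherit go-1.13)
        (arguments
         (substitute-keyword-arguments (package-arguments go-1.13)
           ((#:phases phases)
            `(modify-phases ,phases
               (add-before 'build 'set-arm64-environment
                 (lambda _
                   (unsetenv "GOBIN")
                   (setenv "GOARCH" "arm64")
                   #t))))))))
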
<bdju> http://ix.io/2pQh here's my build log for a failed build of the "next" browser
<rekado_>civodul: congrats on releasing Guile 3.0.3! I’m looking forward to the impact the new compiler may have on guix pull.
***rekado_ is now known as rekado
<cbaines>efraim, thanks for the information, I see the relevant change now http://git.savannah.gnu.org/cgit/guix.git/commit/?id=2ab321ca37d1c00c1540d78d587226d3d487b2d4
<cbaines>can you remember why that change was made?
<efraim>go-1.4 doesn't natively support aarch64, just i686, x86_64, armhf and ppc
<efraim>by building on aarch64 but targeting armhf we could add support
<cbaines>right, that's useful to know
<cbaines>maybe go 1.13 supports aarch64... I'll give building that a go
<efraim>the problem is the bootstrapping, we need to bootstrap go with either go-1.4, using an armhf build, or using gccgo
<cbaines>maybe it's fine to build go-1.4 for armhf, but building go-1.13 for aarch64 when applicable might fix the issue I'm seeing
<efraim>Makes sense
<efraim>Would it work to specify #:system and feed it go-1.4-for-armhf
<civodul>cbaines: i'm looking at https://debbugs.gnu.org/cgi/bugreport.cgi?bug=40525#19
<civodul>did you try to mitigate the problem in the GDS?
<cbaines>civodul, no, I think I just did the libgc version change
<cbaines>I can look at trying to split the work across multiple inferior processes if that's useful though
<raghav_gururajan>Hello Guix!
<raghav_gururajan>During build, I get "meson.build:990:0: ERROR: Program(s) ['check-version.py'] not found or not executable".
<raghav_gururajan>What does that mean? Is there a package that provides the 'check-version.py' script?
<civodul>hey raghav_gururajan
<civodul>cbaines: dunno, i was trying to see if we still need that libgc7 variant
<civodul>the fundamental problem here is that "guix lint -c derivation" finishes with a 2GiB heap
<civodul>so we should definitely clear caches periodically
<cbaines>is the "mmap(PROT_NONE) failed" error related to excessive memory use, or is there an actual bug in libgc or the way it's used in Guile that is more likely with lots of garbage to collect?
<civodul>it's not clear to me if it's a bug, but what's clear is that "guix lint -c derivation" consumes a whole lot of memory and that's not good
<civodul>so the mmap(PROT_NONE) issue could be a bug that shows up under severe memory pressure
<civodul>but i'd rather not have severe memory pressure in the first place :-)
<MSavoritias>Why wouldn't I want to invoke guix pull with --bootstrap? Shouldn't we want to build always from small reproducible pieces? At least that's my understanding
<MSavoritias>Please correct me if I'm wrong
<civodul>hi MSavoritias
<civodul>MSavoritias: this option is for developers
<civodul>all (or the vast majority) of packages in Guix are reproducible
<civodul>you don't need to restrict yourself to a small package set
<civodul>and all of them are built from relatively small binary seeds: https://guix.gnu.org/blog/2020/guix-further-reduces-bootstrap-seed-to-25/
<MSavoritias>So there is just no need for it, Other than building the distribution?
<civodul>no need for --bootstrap?
<MSavoritias>yeah
<civodul>yeah, it's at best occasionally useful for Guix developers
<MSavoritias>why is it useful for developers? Checking that you can build everything from one seed?
<civodul>initially it would allow you to test "guix pull" without rebuilding the world
<civodul>that's all
<civodul>but nowadays i think it doesn't help much
<civodul>we could just as well remove it
<MSavoritias>makes sense. Thank you for the answers
<civodul>yw!
<efraim>I was never sure what it was for
<efraim>cbaines: I'm taking another look at changing go-1.13 on aarch64 so it builds for aarch64 and not for armhf
<cbaines>efraim, cool :)
<cbaines>I'm looking at this too
<efraim>I thought I'd have to change more, just "fixing" the #:system keyword in go-1.13 still pulls in go-1.4 built for armhf
<cbaines>I'm currently building go@1.14 for aarch64
<efraim>I guess the devil is in the details of making go-1.13 build correctly for aarch64
<cbaines>I tried building 1.13, but I got a test failure
<cbaines>so I'm trying with 1.14
<cbaines>I think I've just got a different test failure though: fatal error: linux/errno.h: No such file or directory
<efraim>I'm not picking up a lot of substitutes so I'm back at building go-bootstrap and binutils-gold first
<efraim>Also I think the plan was to merge binutils-gold back into binutils and change binutils-final to stay the same
<cbaines>Ah, I think go 1.14.4 is already being worked on
<cbaines> https://issues.guix.info/41695
<bricewge>Hello Guix
<bricewge>I have a package without a LICENSE file in its source, does a package need to contains such a file?
<bricewge>If so how do I deal with packaging it when such a file isn't in the source?
<cbaines>bricewge, is the applicable license clear? There might be something at the top of the source files.
<rekado>bricewge: it doesn’t need that file, but how do you know what license applies?
<bricewge>cbaines: It's a simple expat
<rekado>actually, having copyright headers in the source files is the *preferred* way of specifying the license
<bricewge>The license is specified in copyright.h, this file is referenced by most (all?) other files
<bricewge>rekado: Thanks I thought a license file was required to inform the system user about the license
<bricewge>But only the 'package-license' field is required IIUC.
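In other words, the licensing information only has to end up in the package record itself. A minimal sketch, assuming an Expat-licensed project (every field here is a placeholder):

    (use-modules (guix packages)
                 (guix build-system gnu)
                 ((guix licenses) #:prefix license:))

    (define-public foo                  ; hypothetical package
      (package
        (name "foo")
        (version "1.0")
        (source #f)                     ; a real origin goes here
        (build-system gnu-build-system)
        (synopsis "Example")
        (description "Example whose license is stated in its source headers.")
        (home-page "https://example.org")
        (license license:expat)))       ; this is what Guix cares about
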
<mbakke>how can I make patches available in a gexp? I tried using local-file, but get:
<mbakke>error: #f: 'local-file' is a macro and cannot be used like this
<cbaines>you'll need to ungexp local-file, but I'd expect that to work
<cbaines>can you provide some context?
<mbakke>cbaines: https://paste.debian.net/1153303/
*mbakke works on ungoogled-chromium update, breaking the dependency on debians patches
<cbaines>ah, right, I don't think this is a gexp issue, but a general macro issue
<cbaines>Reading the docstring thing for local-file, it says: This is the declarative counterpart of the 'interned-file'
<cbaines>So maybe interned-file from (guix store) is what you can use
<cbaines>That's a monadic function though, so I'm not sure how you'd use that in the context of the gexp... it might work though
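Outside of a gexp, interned-file would be used roughly like this (file name made up; whether it composes nicely with mbakke's gexp is exactly the open question here):

    (use-modules (guix store) (guix monads))

    ;; interned-file is monadic, so it has to be run in the store monad.
    (with-store store
      (run-with-store store
        (interned-file "patches/foo.patch")))
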
<jlicht>hullo guix!
<MSavoritias>hi
<MSavoritias>So turns out I tried to update the system with --rounds=3 and it failed. Do I file bugs for this error?
<MSavoritias>it said that the output differs from different round
<MSavoritias>the question is though that this happened with guix-package-cache.drv
<MSavoritias>is guix package cache even supposed to have the same output everytime?
<crab1>hiya, is guix system yet working on pinebook pro?
<cbaines>This blog post suggests so https://joyofsource.com/guix-system-on-the-pinebook-pro.html
<mbakke>woah, I can use local-file by referring to the absolute file names as "patches/foo.patch" (without gnu/packages prefix)
<mbakke>eeeh relative, not absolute
<apteryx>mbakke: doesn't work at the REPL though
<mbakke>actually, I can use the absolute file name too with (local-file (search-patch ...)), but not combined with (map local-file (search-patches ...)), hmm
*mbakke is guile noob
<civodul>mbakke: yeah pretty cool, no? :-)
<civodul>i wonder if we could move towards that style actually
<civodul>that may help mitigate the stat storm
<mbakke>civodul: can we make (map local-file (search-patches "...")) work too? :-)
<mbakke>I don't understand why that gives "error: #f: 'local-file' is a macro and cannot be used like this", but (local-file (search-patch "...")) works
<civodul>ah no, we can't :-)
<civodul>the thing is, 'local-file' captures the dirname at macro-expansion time
<civodul>so that at run-time it can resolve file names relative to the source file
<mbakke>ah, I see, pretty cool that it is smart enough to determine that it would have failed at runtime
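Concretely (file names made up):

    (use-modules (guix gexp)      ; local-file
                 (gnu packages))  ; search-patch, search-patches

    ;; A literal relative name works: the macro captures the directory of
    ;; the source file at expansion time.
    (define patch (local-file "aux-files/foo.patch"))

    ;; An absolute name computed at run time works too.
    (define patch* (local-file (search-patch "foo.patch")))

    ;; But a macro cannot be passed around like a procedure, hence:
    ;; (map local-file (search-patches "foo.patch" "bar.patch"))
    ;; => error: 'local-file' is a macro and cannot be used like this
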
<civodul>we should profile translate-texi-manuals in (guix self) because it's awfully slow
<civodul>(the Guile part)
<apteryx>+1
<apteryx>civodul: in my opinion local-file hurts hackability by not working in a REPL context
<apteryx>so I wouldn't like migrating to it instead of the current patch finding mechanism.
<civodul>apteryx: yeah, that's annoying
<civodul>maybe it wouldn't help much reducing stat calls too
<civodul>anyway, not gonna happen in the foreseeable future, no worries :-)
<cbaines>civodul, I'm just reading through your email about sqlite issues, do you know if there's any long writes happening, or writes happening inside long transactions?
<civodul>cbaines: i've identified a terrible source of slowness in deduplication, which happens while the database is opened
<civodul>fixing that should reduce the time spent with the database open
<civodul>what did you mean by "long writes"?
<cbaines>Any UPDATE or INSERT that takes a while
<cbaines>I think if you have a transaction, as soon as you do an INSERT or UPDATE, no other connection will be able to write until you COMMIT or ROLLBACK
<cbaines>Is the deduplication code in the daemon, or (guix store database)?
<cbaines>I'm also reading about WAL performance considerations: https://www.sqlite.org/wal.html#performance_considerations
<cbaines>Do you know how large the SQLite database file is, and how large the WAL file is?
<mothacehe>cbaines: https://paste.debian.net/1153319/
<cbaines>thanks, 213M for the WAL doesn't seem unreasonable, I don't really have many points of comparison
<cbaines>bayfront probably isn't in good shape... its WAL file is 3.3G in size, 500M bigger than the 2.8G database!
<civodul>cbaines: it's (guix store database) calling (guix store deduplicate)
<civodul>cbaines: /var/guix/db/db.sqlite on berlin weighs in at... 12G
<civodul>the WAL is 213M
<cbaines>thanks
<civodul>ah mothacehe was faster
<mothacehe>civodul: the first example of your mail is from the image-root derivation that was also added recently
<civodul>yes, and i think that one excercises the deduplication performance issue very well
<civodul>because there are lots of files in there
<civodul>why do we create a directory that contains a copy of the store actually?
<cbaines>register-items in (guix store database) uses a transaction, so once the first change to the database is made in that transaction, all other writes will block (as far as I'm aware)
<mothacehe>I see. I'd like to also have a look maybe after finding the Shepherd deadlocking issue.
<civodul>cbaines: oh, and the change is made right before deduplication
<civodul>so that means that all writes are blocked until we're done deduplicating?
<cbaines>I'd guess so
<mothacehe>civodul: because creating a partition image requires a raw directory (with mke2fs for instance). I think I also tried with a directory only containing symlinks but it didn't do the trick, can't remember why.
<cbaines>does register-items need to happen in a transaction? Or would it be OK to just register things individually?
<cbaines>(and by "in a transaction", I mean does it need to use call-with-retrying-transaction)
<civodul>mothacehe: perhaps we could do both in one derivation, such that there's never a derivation that produces such a big directory
<civodul>cbaines: not sure, call-with-retrying-transaction was added a few days ago
<mothacehe>yes but then the files will have guixbuilder user & group
<mothacehe>instead of root:root when using two distinct derivations
<civodul>cbaines: see the comment of 8971f626f2e69987bea729307adb93a0be243234
*raghav_gururajan edits gtk+ and whole bunch of packages rebuild. *sigh*
<mothacehe>unless we could override them from mke2fs command line, hmmm
<civodul>mothacehe: does ownership matter? can't we specify UIDs/GIDs for use by mke2fs?
<civodul>yeah
<cbaines>civodul, OK, I see, before it was using call-with-transaction, so I think the behaviour would be the same
<mothacehe>civodul: ok, I'll try that then :)
<cbaines>I'm wondering if the transaction could wrap sqlite-register, rather than wrapping the inside of register-items
<cbaines>that would mean the deduplication happens outside the transaction at least
<civodul>well, there's the call to 'path-id' there
<civodul>i think it has to be in the transaction
<cbaines>why, it's just querying the database, right?
<civodul>yes
<cbaines>I guess if the presence of the id was being used to do something, then you might want to use a transaction to prevent it from being deleted... but absence seems to be the only thing that matters in the code currently
<civodul>hmm i was thinking we want to avoid TOCTTOU issues
<civodul>but call-with-transaction doesn't help with that
<civodul>right
<apteryx>raghav_gururajan: if it's just for testing, make your own my-gtk+ package and use it with what you're trying to fix
<cbaines>Yeah, TOCTTOU would be a good argument for a transaction, but the value is not being used in this case
<cbaines>Anyway, just wrapping path-id and sqlite-register in a transaction would still be an improvement
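The shape of that change, as pseudocode; every name below is a stand-in for illustration, not the actual (guix store database) API:

    ;; Idea: only the INSERT/SELECT work sits between BEGIN and COMMIT;
    ;; the expensive deduplication runs after the transaction, so it no
    ;; longer holds the write lock while hashing files.
    (define (register-item! db item)
      (exec db "begin immediate;")              ; hypothetical helper
      (unless (item-registered? db item)        ; hypothetical: the path-id check
        (insert-item-and-references! db item))  ; hypothetical: sqlite-register
      (exec db "commit;")
      (deduplicate-files! item))                ; hypothetical: now outside the lock
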
<cbaines>The commit which introduced the transaction to register-items (a4678c6ba18d8dbd79d931f80426eebf61be7ebe) does give a reason (which is great :D ), saying it's to: prevent broken intermediate states from being visible.
<cbaines>register-items say the items must be ordered "leaves first", assuming leaf packages are the ones with nothing referencing them, I can't see how that would work...
<raghav_gururajan>apteryx, Thanks! That's a good idea.
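Such a throwaway variant can be as small as this (the name and the tweak are made up):

    (use-modules (guix packages) (guix utils)
                 (gnu packages gtk))

    ;; Hypothetical local variant: only the thing under test changes, so
    ;; everything else keeps using the stock gtk+ and its substitutes.
    (define my-gtk+
      (package
        (inherit gtk+)
        (name "my-gtk+")
        (arguments
         (substitute-keyword-arguments (package-arguments gtk+)
           ((#:configure-flags flags ''())
            `(cons "--enable-debug=yes" ,flags))))))   ; made-up flag
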
<cbaines>Looking at the code, as it records the references as it goes along, you'd have to start at the store items where all the items they reference are already registered
<civodul>cbaines: in practice this code is only used by "guix offload"
<civodul>and "guix offload" is fed in topological order
<civodul>so maybe it works by chance
<civodul>and wouldn't work in a broader context
<nixfreak>OK I'm getting the error /tmp/tmp.9JLClbjPCM/rustup-init: error while loading shared libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory. I researched the file service variable `(("/bin/sh" ,(file-append bash "/bin/sh"))) from that example, and I have /lib in my path now, but it still doesn't work because it's a shared object; using `(("/lib" ,(file-append gcc "/lib"))) doesn't seem to work either. What else can I do?
<cbaines>civodul, ok, will this only be used for a handful of store items then at any one time?
<civodul>currently yes, it's used by finalize-store-file
<civodul>so just for the outputs of one derivation
<civodul>thus typically between 1 and 3 store items
<cbaines>nixfreak, I don't really follow your message. Are you trying to use a program not built through a Guix package on a system running Guix as an OS?
<nixfreak>yeah
<nixfreak>A lot of these scripts to install languages and tools like nim-lang, rust, and so on use bash scripts that tie into specific dependencies like gcc
<nixfreak>so for instance curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
<civodul>in general binaries are not portable across GNU/Linux distros
<cbaines>civodul, cool, good to know. This is the first time I've looked at this code, so I'm a bit hesitant, but it seems like register-items shouldn't need to be one whole transaction
<civodul>nixfreak: so the command above (which is risky) may work on some distros, and not on others
<nixfreak>thats correct
<nixfreak>do I need to use extra-special variable to symlink some libraries maybe ?
<cbaines> https://rustup.rs/ tells me: You appear to be running Windows 32-bit
<civodul>:-)
<civodul>based on User-Agent
<cbaines>Needless to say, it's wrong...
<nixfreak>yeah I have used this on 3 other systems
<civodul>cbaines: i've pushed my changes to deduplicate
<civodul>do you want to give the transaction thing a try?
<nixfreak>but I want to get away from arch, osx using brew, ubuntu
<civodul>nixfreak: the nicer solution would be to build these things from source
<civodul>but as a stop-gap measure, you can probably create an FHS environment good enough to run these binaries
<civodul>i guess you'll need /lib for ld-linux.so, at least
<nixfreak>FHS ?
<civodul>and then it really depends on what assumptions they made when creating that binary
<civodul>FHS is "file system hierarchy standard"
<civodul>meaning the "standard" hierarchy with /bin, /lib, etc.
<nixfreak>ok
<nixfreak>so building would be better then
<civodul>definitely
<nixfreak>ok
<civodul>running pre-built binaries is unsafe
<civodul>(unless you have a recipe to rebuild them by yourself)
<nixfreak>ok
<cbaines>Guix also has packages for plenty different rust versions... maybe even too many
<civodul>heh
<nixfreak>don't have nightly
<nixfreak>otherwise yes they do
<civodul>we were discussing --with-source for nightly the other day (was it you?)
<civodul>would that work?
<cbaines>providing the nightly rust would be something a Guix channel might be a good fit for as well
<nixfreak>wasn't me, but I'm sure that would work, just need cargo to update
<nixfreak>ok so is that ready then rust-nightly --with-source ?
<nixfreak>so I still have libgcc issues when building nixfreak@GNU-ChaOs ~/build/rustc-nightly-src$ python3.8 x.py build
<nixfreak>/home/nixfreak/build/rustc-nightly-src/build/x86_64-unknown-linux-gnu/stage0/bin/cargo: error while loading shared libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory
<mothacehe>civodul: managed to remove the "image-root" derivation from the ISO build. Will push when the tests are done.
<mothacehe>Then I need to do the same thing for raw disk-images.
<civodul>mothacehe: woow, that was super fast, nice!
<cbaines>civodul, going back to register-items, I don't know how to test it, but I can give writing a patch a go.
<civodul>cbaines: "make check TESTS=tests/store-database.scm" runs basic tests
<civodul>but testing for contention is gonna be tricky
<civodul>i plan to update the 'guix' package soon and deploy it on berlin
<cbaines>cool, I'll have a look at writing a patch.
<cbaines>Out of interest, do you have an idea of how long the deduplication can take roughly?
***asdkfadslfjasd is now known as cacsr
<civodul>cbaines: it's a function of the number of files in the store item
<civodul>now it should be linear
<civodul>before the change i pushed it was roughly exponential i think?
<civodul>in the number of sub-directories at least
<cbaines>cool, well hopefully your change will help :)
<Bryophyllum>Hello Guix o/
<jeko>Hello Guix \o
<sneek>Welcome back jeko, you have 2 messages!
<sneek>jeko, nckx says: The Web UI is here: https://git.savannah.gnu.org/cgit/guix/maintenance.git/tree/hydra/goggles.scm. Actual logging is (or was) done by ZNC i.e. the bayfront-log user in here.
<sneek>jeko, nckx says: The Web UI is here: https://git.savannah.gnu.org/cgit/guix/maintenance.git/tree/hydra/goggles.scm. Actual logging is (or was) done by ZNC i.e. the bayfront-log user in here.
<jeko>haha
<efraim>I keep on timing out on tests when trying to build my go-1.13 on aarch64
<NieDzejkob>Would it be reasonable to have a channel that accesses the network to generate the `package' records? (I'm thinking about rust nightlies...)
<jeko>is there a way to have guix install to retrieve binaries ready to install (instead of packages to build) ?
<NieDzejkob>substitutes
<jeko>NieDzejkob: then I will avoid the build step in guix install ?
<vagrantc>jeko: if substitutes are available, yes.
<vagrantc>jeko: but sometimes the substitutes haven't been built yet, and guix will fall back to building locally.
<jeko>vagrantc: ok, I can use substitute with --substitute-urls=ci.guix.gnu.org option for example ?
<nckx>Thanks, sneek! Botsnack!
<nckx>Thanks, sneek! Botsnack!
<vagrantc> https://ci.guix.gnu.org is the default out of the box ... but you may have to configure guix to use it ... https://guix.gnu.org/manual/en/html_node/Substitute-Server-Authorization.html#Substitute-Server-Authorization
<nckx>raghav_gururajan: Well don't go editing gtk+, I just finished building wpewebkit successfully :-/
<vagrantc>heh, that # was totally useless :)
<vagrantc>jeko: if you have entries in /etc/guix/acl (if i remember correctly) then it might have already been done.
<leoprikler>is there any package in Guix using/testing EGL?
<lispmacs[work]>where are system shepherd config file(s) stored in Guix?
<mbakke>lispmacs[work]: I usually extract it from 'ps -p1 u' :P
<lispmacs[work]>ps -p1 u
<nckx>raghav_gururajan: https://paste.debian.net/plain/1153342 and available on g.t.gr (but this workflow and indentation give me a headache, can we please use git pull requests or at least patches?)
<leoprikler>sneek is not botsnacking? What happened?
<nckx>sneek: botsnack?
<sneek>:)
<nckx>Sneek is just picky…
<sneek>So noted.
<nckx>…and sassy.
<MSavoritias>i tried to invoke guix pull with --rounds=3 option to see the determinism of the build but it seems to fail at guix-package-cache.drv
<MSavoritias>is this expected? should I just disable it for guix pull?
<MSavoritias>or is it a bug
<nckx>MSavoritias: Do you mean that it fails to build the cache, or that it errors out because it's not reproducible?
<nckx>The cache should be as reproducible as anything else so non-determinism is definitely a bug worth reporting. A build failure might be, if you can reliably trigger it.
<raghav_gururajan>nckx: Sorry for the trouble! :( and thanks a lot for building it.
<nckx>Can you reproduce your previous gtk+ so you can at least continue using my substitute? Otherwise, I can start one last iteration but then I'll be mostly off-line until Friday.
<nckx>And I'd like it to be in the form of a single git commit @ your disroot repo that I can just pull, not a pastebinned tarball of patches.
<raghav_gururajan>nckx: No worries! I built them myself.
<raghav_gururajan>nckx: Sure thing!
<MSavoritias> https://paste.debian.net/1153350 it basically says this
<MSavoritias>forgot about the build log.
<MSavoritias>i will post that one too
<nckx>raghav_gururajan: You built wpewebkit yourself?
<raghav_gururajan>nckx: No no, the bunch of packages that depended on gtk+.
<MSavoritias>the build log says basically nothing, just that it's generating the package cache and then it fails
<nckx>MSavoritias: You should be able to ‘guix pull --rounds=2 --keep-failed’ to keep the offending copy in the store (as /gnu/store/…-check). You can then compare the two with diffoscope or so, or attach them to a bug report (no idea how big the cache is).
<nckx>raghav_gururajan: I thought wpewebkit depended on gtk+ (as in: the first time I built your wpewebkit package, it first built gtk+), but maybe I'm mistaken.
<jeko>vagrantc: thank you for your help, I will try !
<raghav_gururajan>nckx: wpewebkit only needs libwpe instead of libgtk
<raghav_gururajan>Later Folks!
*raghav_gururajan --> zzZ
<nckx>raghav_gururajan: So gtk-doc (now disabled anyway) doesn't imply gtk+?
<nckx>raghav_gururajan: G'night.
<raghav_gururajan>gtk-doc doesn't depend on gtk+, IIRC.
<nckx>raghav_gururajan: but gstreamer does, and wpewebkit depends on gstreamer. So you'll need to use gtk+ as it was at 3e032569d0dfe0b2f5b184b65f5534532abda5f3 to use my substitute.
<MSavoritias>the folder of the build directory seems to be empty
<nckx>raghav_gururajan: Pull this to get a substitute for wpewebkit: https://github.com/nckx/geeks/tree/wpewebkit-substitute (ignore the fact that it's on GH; unrelated reasons).
<nckx>MSavoritias: Both the -check and the non-check? :-/
*nckx starts a guix pull --rounds… too.
<MSavoritias>my bad. i found them.
<MSavoritias>i couldn't get the message at first..
<MSavoritias>they are different in size, turns out
<MSavoritias>im going to run them through diffoscope
***terpri__ is now known as terpri
<MSavoritias>i got the output but the file is 20.3 MiB
<MSavoritias>do you want me to limit the output?
<nckx>MSavoritias: I'm not sure to what. How big are the 2 cache files themselves, combined? ‘guix pull --rounds=2 --keep-output’ doesn't want to build anything here and ‘--check’ isn't recognised.
<nckx>I wouldn't bother with a 20 MiB diff.
<MSavoritias>5 mb each
<MSavoritias>it working for me with the --check-failed you said
<MSavoritias>worked*
<MSavoritias>i tried to limit the maximum size of the report of diffoscope but it always came out as 20.3 MiB
<MSavoritias>dont know why
<nckx>I meant ‘keep-failed’ above, not ‘keep-output’. I assume you did too 🙂
<MSavoritias>ah ok :P yeah i did
<nckx>MSavoritias: I'm guessing you're looking at hex representation of binary files, plus ANSI colour codes and formatting. If the diff contains basically the contents of each file -/+ (so 10 MiB), as hex (each byte → 2 bytes), it can easily exceed 20 MiB.
<nckx>No point in staring at that.
<MSavoritias>yeah. i didn't know if it was correct but the diff is basically random numbers
<MSavoritias>with colors
<nckx>MSavoritias: Just open a bug report, but don't bother including the two outputs, it should be ‘reproducible’ anyway and it probably won't be solved by staring at the raw files.
<nckx>And thanks!
<MSavoritias>ok. thank you for the guidance, it was educational
<apteryx>chromium creates lots of IO activity from some thread call ThreadPoolForeg, slowing down my machine (visible from iotop). Is this happening to someone else? Seems to occur mostly when using sound (video playback, video conference)
<lfam>apteryx: Do you have swap?
<apteryx>not on this machine, but it has 2.1 GiB available (1.8 GiB free), so it shouldn't matter
<lfam>Okay
<lfam>Can you tell what files it's accessing?
<nckx>civodul: How is the dmitri/sergei tunnel started now? It seems not to be running.
<apteryx>The activity came to a halt now, but I could see this in iotop: chromium --type=utility --field-trial-handle=13676461428315526438,444005121~c-apm-in-audio-service --shared-files=v8_snapshot_data:100 [ThreadPoolForeg
<nckx>I say seems because I'm not sure what to look for, but ‘sudo guix offload test’ fails to connect to localhost which is a pretty big hint.
<nckx>And there used to be ssh processes running in civodul's screen which are gone now.
<Bryophyllum>I managed to boot Guix on my production machine! There was a hiccup caused by my system firmware not honoring the boot display device setting, but it was solved by enabling the legacy boot compatibility mode, in which the said setting is honored. In short, on the output to which my AMD GPU is connected, a static screen with early boot messages was displayed, and on the other one, to which my iGPU is connected, the ncurses installer screen, which, on the other hand, would freeze upon selecting "Install using the shell based process".
<apteryx>lfam: OK, managed to find out what files it's trying to use
<apteryx>many files, but one example: [pid 4577] openat(AT_FDCWD, "/home/maxim/.config/chromium/Default/Cookies-journal", O_RDWR|O_CREAT|O_NOFOLLOW|O_CLOEXEC, 0600) = 176
<lfam>Interesting
<lfam>What does that file look like?
<lfam>Seems like a case where it should be using a cache directory (ideally on tmpfs) rather than ~/.config
<lfam>I guess that cookies are not really considered cache anymore
<apteryx>also cache files are regularly accessed [pid 4693] openat(AT_FDCWD, "/home/maxim/.cache/chromium/Default/Cache/d52380a3b4d4c644_0", O_RDWR|O_CREAT|O_EXCL, 0600) = 188
<lfam>I'm surprised it was bad enough to slow your computer down
<lfam>Is it a spinning disk?
<lfam>Or maybe a slow filesystem? Sometimes I still get slowdowns with btrfs although only when writing large amounts of data
<lfam>The good thing about Chromium is you can be sure they know about the issue and will fix it if it affects many users. The benefits of telemetry
<lfam>It's weird that it seems related to media playback
<apteryx>yes, just playing a music video on youtube seems to trigger it
<apteryx>but only after it's warmed up quite a bit (when it's freshly restarted it doesn't exhibit this problem, I think)
<apteryx>I'm on Btrfs, that could have to do with the way it's managing its cache
<apteryx>I could try to disable CoW on its cache directory to see if it helps
<mbakke>apteryx: is your file system pretty full (90+%)?
<mbakke>I've had performance issues with btrfs on full file systems
<apteryx>far from it: /dev/mapper/cryptroot 932G 88G 844G 10% /
<mbakke>that seems like it should work
<apteryx>10% of use
<apteryx>I've restarted Chromium, so far so good
<apteryx>I'll try reading up about this chromium --type=utility --field-trial-handle=13676461428315526438,444005121~c-apm-in-audio-service ... process when I have more time.
<apteryx>it was bringing down my system to its knees, I could barely use it (IO storm).
<mbakke>did you check RAM usage? maybe it's leaking memory
<apteryx>mbakke: yeah, it seemed OK with 2 GiB free
<apteryx>and my OOM killer (earlyoom) didn't log kills of chromium recently
<efraim>good news, icecat builds again for me
<mbakke>bad news, Magit with TRAMP just locked up my emacs :P
<mbakke>ooh, killing the SSH process brought it back to life
<MSavoritias>I am trying to close my bug report by sending an email to number-done@debbugs.gnu.org but for some reason the email can't be sent. It keeps saying that TLS is required but wasn't provided by debbugs
<MSavoritias>is this the correct way to close bug reports?
<lfam>Yes, but it sounds like something is broken with either debbugs or your email
<MSavoritias>i was able to send the email to open the bug report though. So how does that make sense?
<MSavoritias>hmm
<lfam>Which one are you trying to close?
<lfam>And what do you use to send email?
<MSavoritias>im trying to close #42008
<MSavoritias>I'm using claws mail and posteo
<MSavoritias>I have the TLS and everything set up
<MSavoritias>what is weird is that I didn't have a problem sending email anywhere before
<lfam>And claws mail shows you an error?
<lfam>I was able to close it by sending mail. I'd guess there is a bug in claws-mail or maybe it was a transient network issue
<MSavoritias>maybe. Thanks for closing
<lfam>It could also be a problem with debbugs but I figure we would have heard of it before
<MSavoritias>I will see if it happens with the other email. Otherwise it may be the client
<MSavoritias>I don't think so either. Maybe posteo doesn't like something. I will see with the other email
<efraim>well, if I skip cgo then go-1.13 builds natively for aarch64
<ATuin>trying to run nix-channel --update i get the following: while setting up the build environment: executing '/gnu/store/pwcp239kjf7lnj5i4lkdzcfcxwcfyk72-bash-minimal-5.0.16/bin/bash': No such file or directory
<ATuin>any idea what's going on?
<cbaines>ATuin, does /gnu/store/pwcp239kjf7lnj5i4lkdzcfcxwcfyk72-bash-minimal-5.0.16 exist on your system?
<ATuin>yes, it does
<ATuin>wondering if it's a problem with the nix-daemon
<ATuin>i will try to reconfigure the system
<cbaines>Maybe try restarting the nix-daemon... I haven't used it myself though, so I don't really know what could be wrong
<civodul>nckx: with ~root/ssh-tunnels.sh, i've restarted them now
<ATuin>cbaines: i'm reconfiguring the system which will update nix-daemon also, let's see
<ATuin>restarting the daemon produced the same error
<ATuin>seems that my nix installation is really messed up somehow :D
<civodul>ATuin: you're using the Nix service on Guix System?
<drakonis>civodul: you might want to update the nix package soon enough
<civodul>me? why? :-)
<drakonis>i dunno
<drakonis>anyone
<drakonis>nix with flakes should arrive soon
<civodul>ah nice
<civodul>i don't use Nix, but there seem to be good ideas with Flakes
<dissoc>how do I output to console when building a package for debugging purposes?
<dissoc>is there an intended function to use?
<NieDzejkob>you can just use display or format, just like you would in an ordinary guile program
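For instance, inside a package's modify-phases form, a throwaway phase like this prints into the build log (phase name and output are arbitrary):

    ;; Hypothetical debugging phase: dump the working directory and the
    ;; inputs alist, then list the unpacked source.
    (add-after 'unpack 'debug-print
      (lambda* (#:key inputs #:allow-other-keys)
        (format #t "current directory: ~a~%" (getcwd))
        (format #t "inputs: ~s~%" inputs)
        (invoke "ls" "-la")
        #t))
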
<NieDzejkob>rekado: I just noticed that CI picked up quite a lot of rebuilds with your commit updating R. the guix refresh output suggests that it should go on staging, is R an exception?
<civodul>NieDzejkob: i don't know if that's the reasoning here, but i think many R package builds are very short
<NieDzejkob>ah, makes sense then.
<ATuin>anyone using nix on top of guix?