IRC channel logs


back to list of logs

<bdju>ryanprior: any plans to submit that "countdown" package to the official guix repo?
<lfam>Welcome Yar53
<Yar54>Ah I got disconnected
<Yar54>Did anybody have an answer to my question?
<Yar54>This thing: 5500m,
<Yar54>Wait not that
<Yar54>Question(s) I have: 1) I'm going to get a laptop with an AMD Rx5600 or 5500m, will amdgpu work? 2) I'm going to get a laptop with an ax1650, will drivers work?
<ryanprior>bdju: I have not made a concrete plan yet but let me take a look at it and see if there's any reason not to do that today.
<ryanprior>bdju: have you tested it, did it work for you?
<Yar54>Haven't tested it, I didn't get the laptop yet
<bdju>ryanprior: you helped me to run it in an environment and it worked, but I wasn't sure how to use it system-wide, so I didn't do much with it
<bdju>ryanprior: it's not working for me anymore
<ryanprior>Hmm, those errors superficially seem unrelated
<bdju>hm... I tried in a different shell and it works
<bdju>oh, I think I was in the wrong directory. I had a "guix" and "guix-packages" in the same dir and got lazy with the tab complete
<ryanprior>bdju: just sent a patch upstream for countdown, should have a bug number for you to watch in a moment
<apteryx>lfam: I'm looking at your QEMU patch
<apteryx>will push a slightly modified version soon if it passes. I rebased the info patch on 5.2.0, adapted for meson. It's simpler, although I'll try to convince upstream again to make it easy to build an info manual.
<pkill9>has anyone worked on/looked into packaging the tor browser?
<pkill9>it's a patched firefox
<pkill9>maybe the patches could apply to icecat
<lfam>It's been discussed several times before
<lfam>Thanks apteryx!
<ryanprior>pkill9: the Tor Browser project doesn't approve packages or builds they don't make themselves, because they're worried about a broken build damaging their brand.
<ryanprior>So it would have to be called onioncat or something
<pkill9>it should be called the ogre browser
<pkill9>because like onions, ogres have layers
<ryanprior>ship it
<lfam>It's not just about the brand ryanprior, although that is the legal mechanism that's available to them
<ryanprior>Apologies, I regret the oversimplification
<lfam>They are worried about something called "tor browser" being distributed that may not work the way it's supposed to
<lfam>But yeah, we could call it something else
<apteryx>lfam: was it building fine for you? I'm getting: make: *** No rule to make target 'multiboot.bin', needed by 'all'. Stop.
<apteryx>I think it may be due to using a different build directory than usual; I'm throwing to the mix as well.
<apteryx>so the build now happens in the 'b/qemu' subdirectory (a la debian)
<apteryx>seems I can hack the pc-bios/optionrom/Makefile VPATH like this: VPATH = $(SRC_DIR):$(TOPSRC_DIR)/pc-bios
<lfam>apteryx: Yes, it worked for me and let me do things like `guix system vm-image`
<apteryx>good. I'll apply the workaround I just found and test
<apteryx>thanks for the info
<narispo>lfam: they were looking into starting to use GNU Guix as part of their reproducible builds toolchain AFAICT
<lfam>apteryx: There's a substitute for what you get when you apply my patch on commit 68a9e933c7e319085541ba8e8e0e932ade36d8a4
<lfam>Just FYI (tried it on my old branch)
<apteryx>I was surprised to get a substitute for the source derivation
<apteryx>I guess you were building on the CI directly? :-)
<lfam>It's got the capacity and it saves me a lot of time
<apteryx>Makes sense.
<lfam>Right now, the only job runnings are slow aarch64 emulated builds
<lfam>I mean, the only jobs running
<apteryx>This page is really handy
<lfam>It's fun to look at
<apteryx>QEMU seems to build now
<apteryx>I guess I should write a news entry to mention the guix-support? field is removed from the qemu-binfmt-configuration record as it's not needed anymore
<lfam>Yeah, it's a good idea
<lfam>There is some facility for deprecating service-y things but it's too late for that now
***iyzsong-- is now known as iyzsong-w
<lle-bout`>hello! :-D
***lle-bout` is now known as lle-bout
<raghavgururajan>lle-bout`, Hello
<lle-bout>raghavgururajan: how are you doing? :-)
<raghavgururajan>lle-bout, I am not bad, How about you?
<lle-bout>raghavgururajan: awe well super great!
<lle-bout>what are you hacking on?
<lle-bout>no new CVEs so I am looking at fixing pybitmessage still, some obscure python2-sip issue, then still need to go through the backlog of unhandled CVEs but a bit unmotivated for it now, will take on it soon
<lle-bout>I see lots of exciting announcements today in the mailing list!
<raghavgururajan>lle-bout: I have been re-working patches #42958, so that it can be applied on current master.
<lle-bout>GNU Mes and Guile Netlink releases :-D
<lle-bout>raghavgururajan: super! I would really like to hear about the ins and outs of a GNOME upgrade, really looking forward to using latest GNOME improvements with Wayland on my laptop
<raghavgururajan>Would you be able to retry now on those you tried last time, but couldn't apply cleanly on current c-u?
<raghavgururajan>lle-bout, ^
<lle-bout>Okay! I will do!
<lle-bout>Right now actually :-D
<lle-bout>raghavgururajan: they're a bit troublesome to apply for me but ehh aa
<raghavgururajan>I feel you.
<raghavgururajan>The delay has caused some complications.
<apteryx>bah. I don't like meson much.
<lle-bout>Even with git-send-email, I am not sure how to select like v2 or v3 of patches
<lle-bout>apteryx: what are you running into?
<raghavgururajan>lle-bout: Btw, latest patches are in last three emails of that thread.
<lle-bout>raghavgururajan: did you duplicate some patches in each of the emails or..
<raghavgururajan>No, no duplicates.
<lle-bout>raghavgururajan: okay :-)
<apteryx>lle-bout: custom_target, and the output directory not being that of the build directory as I would have expected
<apteryx>upstream suggestion: create a wrapper for the tool that cds into the build directory. meh.
<lle-bout>apteryx: custom_target is for what exactly?
<apteryx>it's a basic block to define targets, takes inputs and produces outputs
<apteryx>similar to Makefile targets
<lle-bout>raghavgururajan: yay this works quite well: for i in {1..11}; do curl$i | git am -s; done
<lle-bout>apteryx: I see
<raghavgururajan>lle-bout: great!
<lle-bout>apteryx: I am not sure I understand the issue you are facing exactly
<lle-bout>raghavgururajan: also for the other 68 patches it worked awesome!
<lle-bout>raghavgururajan: here we go, all applied! wonderful! can test now :-D
<lle-bout>raghavgururajan: - I get this during 'make'
<lle-bout>it compiles, but warnings
<lle-bout>maybe missing uses?
<lle-bout>I am trying to make from scratch also
<raghavgururajan>Oh my. Probably missing use-modules. Yeah.
<lle-bout>raghavgururajan: I need to find a way to determine all the packages to rebuild that were affected by your patches
<lle-bout>raghavgururajan: then you think I should try running GNOME in a VM? Are the services etc. also taken care of or not yet?
<raghavgururajan>> lle-bout‎: raghavgururajan: I need to find a way to determine all the packages to rebuild that were affected by your patches
<raghavgururajan>In the first message of #42958, Danny has mentioned it.
<lle-bout>raghavgururajan: ahh!! cool!
<raghavgururajan>> lle-bout‎: raghavgururajan: then you think I should try running GNOME in a VM? Are the services etc. also taken care of or not yet?
<raghavgururajan>This is one of few things from wip-desktop. More to come once we get this # out of the way. Then, as a whole GNOME will be super new and awesome.
<lle-bout>raghavgururajan: alright! so for now scope is just packages, then services?
<lle-bout>3.36+ means I can run tiling pop-os thing :-D
<raghavgururajan>Yeah, that plus a few packages. More patches for packages are in wip-desktop. Once we are done with this # and it's applied to c-u, with the help of Danny I will be rebasing wip-desktop <-> c-u, so that I can work on the remaining patches.
<lle-bout>raghavgururajan: alright :-D
<lle-bout>raghavgururajan: and what is the reason we are trying to merge this with c-u first?
<raghavgururajan>lle-bout: Since the patches in wip-desktop are bulky (many changes in a single commit), the idea is that we take a few changes out of wip-desktop in a batch, split/clean them and merge to c-u, which eventually goes into master.
<raghavgururajan>One such batch is #42958.
<lle-bout>raghavgururajan: okay I understand now!
<lle-bout>so raghavgururajan
<lle-bout>./pre-inst-env guix build -k -e "(@@ (gnu packages gtk) gtk+-2)" -e "(@@ (gnu packages gtk) gtk+)" -e "(@@ (gnu packages glib) glib)" -e "(@@ (gnu packages gnome) yelp-xsl)" -e "(@@ (gnu packages freedesktop) wayland-protocols)" -e "(@@ (gnu packages freedesktop) wayland)" -e "(@@ (gnu packages gnome) json-glib)" -e "(@@ (gnu packages gtk) gtk-doc)" -e "(@@ (gnu packages gtk) gtk+)" -e "(@@ (gnu packages gtk) gtk+-2)" -e "(@@
<lle-bout>(gnu packages gtk) at-spi2-atk)" -e "(@@ (gnu packages gtk) at-spi2-core)" -e "(@@ (gnu packages gtk) atkmm)" -e "(@@ (gnu packages gtk) atk)" -e "(@@ (gnu packages gnome) libgsf)" -e "(@@ (gnu packages gnome) vala)" -e "(@@ (gnu packages gtk) gdk-pixbuf+svg)" -e "(@@ (gnu packages gtk) gdk-pixbuf)" -e "(@@ (gnu packages gtk) pango)" -e "(@@ (gnu packages glib) gobject-introspection)" -e "(@@ (gnu packages gtk) cairo)" -e
<lle-bout>"(@@ (gnu packages glib) glib-with-documentation)" -e "(@@ (gnu packages gnome) yelp-xsl)"
<lle-bout>raghavgururajan: - I created some command like this to test things accurately, since package specifications may not reveal all packages that were altered
<lle-bout>right now it fails because of some missing use-modules in some
<lle-bout>raghavgururajan: pango fails with
<lle-bout>raghavgururajan: I think it's because glib-or-gtk? field has no value
<lle-bout>it should be #t I believe?
<raghavgururajan>Ah yes, it's #t.
<raghavgururajan>Must have mistakenly closed the file before saving it.
<lle-bout>raghavgururajan: also if you don't know about "git add -i", I discovered the other day and it allows you to stage individual hunks which is really handy for creating nice commit history
<raghavgururajan>Ah I see.
<raghavgururajan>Thanks for the tip.
<lle-bout>raghavgururajan: you basically do: git add -i, then p, then the number associated to the file affected by potential changes, then you press y or n for hunks to stage, or when you're finished you say q
<lle-bout>after the number, also press enter to start staging hunks of that file
<lle-bout>raghavgururajan: by the way I am trying to fix the use-modules as well, will send an uncommitted patch if you want.. then
<raghavgururajan>Thanks lle-bout.
<lle-bout>raghavgururajan: it's compiling for me now, waiting!
<lle-bout>aannnd.. the gzip connection reuse errors strike back aaa
<lle-bout>we should merge master into core-updates somehow
<raghavgururajan>If anything else doesn't go well on testing, let me know.
<raghavgururajan>If there are big issues, I might have to re-work some of those patches again.
<raghavgururajan>But I hope not.
<raghavgururajan>I'll catch you in the morning lle-bout.
<raghavgururajan>Night o/
<lle-bout>raghavgururajan: see you :-D - tests are running will let you know!
<h_art9ine>Request: Can someone provide comprehensive details about GNU Hurd and GNUstep on GUIX.
<h_art9ine>I mean current status of the technologies.
<h_art9ine>People are in dire need of this info.
<lle-bout>raghavgururajan: it's bootstrapping Rust now!
<marusich>lle-bout, "guix pull" is still running.
<lle-bout>marusich: heh
<marusich>I'm going to bed as it is 1:30 am here, but hopefully it will be done by tomorrow morning.
<lle-bout>marusich: alright! :-D
<lle-bout>good night!
<marusich>keep sending positive thoughts to my little power9 machine
<Zero-ghost[m]>nckx: Hey there, I added this room to the matrix community, you can enable the flair for that in this room if you like
<cbaines>good morning efraim :)
<lle-bout>hello :-D
<cbaines>I was thinking again about computing channel instance derivations, and whether it's necessary to do that with code running on the system you're computing the derivation for
<cbaines>or to put it another way, do you need to run aarch64-linux code to compute a channel instance derivation for aarch64-linux?
<cbaines>you don't for computing package derivations, but I'm unsure if the same holds for channel instance derivations
<lle-bout>"An issue was discovered in the Linux kernel through 5.11.6. fastrpc_internal_invoke in drivers/misc/fastrpc.c does not prevent user applications from sending kernel RPC messages" CVE-2021-28375 - hasnt made a release yet, guess it will come soon
<efraim>found a bug in the recent changes to cargo-build-system
<cbaines>lle-bout, one retroactive bit of review for the cyrus-sasl change, because the replacement package is exported (define-public), there's the possibility of some confusion as the name and version match the other cyrus-sasl package
<cbaines>e.g. ./pre-inst-env guix refresh -l cyrus-sasl
<cbaines>guix refresh: warning: ambiguous package specification `cyrus-sasl'
<lle-bout>cbaines: there's the same thing with unzip, is it really a problem?
<cbaines>lle-bout, indeed, cyrus-sasl, libcroco and unzip are the duplicates the guix-data-service is picking up on
<cbaines>I think it is a problem, just not a particularly big one
<lle-bout>cbaines: if we make the package hidden, the graft will just work and we're done?
<cbaines>lle-bout, I'm not that familiar with grafts, but I think just not exporting the replacement package would resolve the issues in these cases?
<lle-bout>cbaines: so (define instead of (define-public?
<cbaines>lle-bout, I think not exporting it (define rather than define-public) is slightly different from making it hidden
<cbaines>lle-bout, yeah, indeed
<lle-bout>what's hidden exactly?
<cbaines>lle-bout, hidden packages aren't meant to be shown to the user
<cbaines>you can make a package hidden by adding (hidden? . #t) to the package properties
<cbaines>I think that would also avoid the duplicate issues in this case, but it's cleaner in my opinion to just not export the replacement definitions
<cbaines>one day (hopefully soon!) I'll get around to tracking grafts in the Guix Data Service, and then I might understand what's going on
<lle-bout>cbaines: basically, there's a conflict when the package version remains the same
<cbaines>lle-bout, yeah, indeed, I see for some packages, the replacement has a different version (although the same number of characters is required I believe)
<leoprikler>In some cases it's for your security (see gcc), in others the consumer(s) are in a different file than the package itself (e.g. ffmpeg for stepmania).
<lle-bout>cbaines: made them all 3 private and pushed
<cbaines>lle-bout, great :)
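<ed. note>The two options discussed above (a plain `define` versus the `hidden?` property) look roughly like this in package-definition terms. This is a sketch only: the real cyrus-sasl package bodies are elided, and the `/fixed` suffix is just the usual Guix naming convention for graft replacements.

```scheme
;; Exported package, grafted with a fixed variant.  The `replacement'
;; field is thunked in Guix, so the forward reference is fine.
(define-public cyrus-sasl
  (package
    (name "cyrus-sasl")
    ;; ... rest of the definition elided ...
    (replacement cyrus-sasl/fixed)))

;; Option 1: plain `define' -- not exported, so tools like
;; `guix refresh -l cyrus-sasl' no longer see two packages with the
;; same name and version.
(define cyrus-sasl/fixed
  (package
    (inherit cyrus-sasl)
    ;; Option 2: keep `define-public' but hide it from the UI via
    ;; the package properties instead:
    (properties `((hidden? . #t)))))
```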
<cbaines>leoprikler, regarding the fractal patches, the v4 series looks to have a problem with rust-gtk-sys-0.10
<leoprikler>lemme see
<civodul>cbaines: re channel instance derivations, no, you don't need to run code on aarch64-linux to compute the aarch64-linux derivation of a set of channel instances
<civodul>i'm not sure the channel API exposes that, though
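<ed. note>For package derivations, at least, the public API makes the cross-system case explicit: the target system is just an argument, and no code for that system runs on the computing machine. A minimal sketch (requires a running guix-daemon, so untested here):

```scheme
(use-modules (guix store)        ;with-store
             (guix packages)     ;package-derivation
             (guix derivations)  ;derivation-file-name
             (gnu packages base)) ;hello

(with-store store
  ;; The third argument selects the target system; the derivation for
  ;; aarch64-linux is computed without running aarch64-linux code.
  (let ((drv (package-derivation store hello "aarch64-linux")))
    (format #t "~a~%" (derivation-file-name drv))))
```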
<cbaines>civodul, ah, that sounds exciting. When the Guix Data Service tries to compute channel instance derivations for other systems, it ends up using QEMU and fails if it's unavailable
<lle-bout>gn21[m]: 'cleanup guix', what does that mean exactly?
<lle-bout>gn21[m]: what packages do you have in your profile?
<gn21[m]>Sorry, but English is not my language. I made a guix gc
<lle-bout>gn21[m]: I see, well let me take a look
<lle-bout>gn21[m]: the python-sphobjinv package is broken it seems, I am going to fix it.
<gn21[m]><lle-bout "gn21: what packages do you have "> dino
<gn21[m]><lle-bout "gn21: the python-sphobjinv packa"> Thanks
<lle-bout>bavier[m]: hello! are you Eric Bavier? If so you forgot to include a patch when pushing your python-sphobjinv
<lle-bout>gn21[m]: fixed, guix pull and try again!
<leoprikler>Is there a fix to our HTTP situation in sight?
<lle-bout>leoprikler: what do you mean?
<leoprikler>Throw to key `bad-response' with args `("Bad Response-Line: ~s" ("ÝÖ\x91?\x0f"))'.
<lle-bout>leoprikler: if you guix pull and update guix-daemon it should be solved now
<leoprikler>meaning reconfigure until success?
<lle-bout>leoprikler: and herd restart guix-daemon
<lle-bout>leoprikler: use: while true; do guix system reconfigure /etc/config.scm && break; done
<rekado_>leoprikler: where is that “HTTP situation”? Something wrong with
<PurpleSym>There are multiple issues with HTTP right now. Another one is that --discover is completely unusable, because downloading more than one substitute from a raw `guix-publish` results in an eof error:
<civodul>leoprikler: please see <>; could you get the latest daemon and try again?
<civodul>PurpleSym: is there a way to reproduce it? like isolating the way 'guix publish'-that-triggers-EOF runs, and the revision of the guix-daemon that fails
<PurpleSym>civodul: The daemon on both sides was pulled this morning, commit d059485257bbe5b4f4d903b357ec99a3af2d4f39. It looks like it only happens when downloading multiple substitutes from the same auto-discovered guix-publish instance in a row.
<lle-bout>PurpleSym: the issue is around connection reuse so that makes sense
<lle-bout>cbaines: is this issue related to subsitute refactoring?
<PurpleSym>lle-bout: Yeah, in wireshark I see a RST packet sent by guix-publish, which guix-substitutes on the other end apparently does not handle?
<cbaines>lle-bout, I don't know of any issues which the refactoring has introduced, but I have been trying to follow along with some of the open issues
<leoprikler>lle-bout, civodul: Currently reconfiguring, if I don't complain about it again it'll have worked :)
<lle-bout>civodul: I agree with this issue I think: "Guix substituter gives up too easily, causing higher-level commands to fail catastrophically"
<lle-bout>we should have retry/resume policy
<lle-bout>also caching of artifacts that don't pass checksum checks? so they don't need to be downloaded twice if you fix the checksum in the source later (as you'd do during dev for git repos)
<cbaines>lle-bout, caching of artifacts that don't pass checksum checks is difficult if you phrase it as somehow storing the outputs of failed fixed output derivations
<lle-bout>cbaines: there could be two steps, one derivation for download and one derivation for checksum check
<lle-bout>but I don't think this is ideal that it relies on derivations for caching network things
<cbaines>lle-bout, I'm unsure how that first derivation would work, given it wouldn't be a fixed output derivation, but would also require network access
<lle-bout>cbaines: alternatively it could work similarly to the --keep-failed option?
<lle-bout>but instead of keeping build directory it could keep downloaded artifacts that didnt make it into the store?
***chrislck_ is now known as chrislck
<bavier[m]>lle-bout: omg, argh!
<lle-bout>bavier[m]: you need to add it to gnu/ and of course stage/commit the patch, I removed the patches field and used python-certifi in the interim
<pkill9>rekado_: what was that tool for creating a tag-based filesystem interface
<pkill9>it was a userspace filesystem
<pkill9>that let you add tags to files and make queries
<pkill9>ah found it, tmsu
<civodul>lle-bout: it looks like you forgot to remove the now unnecessary python-sphobjinv patch from the tree
<civodul>PurpleSym: could you report a bug, providing complete command outputs?
<lle-bout>civodul: when I looked it up it didnt exist
<lle-bout>did you find it?
<leoprikler>So after running reconfigure and restarting guix-daemon I still get "Bad Read-Header-Line header: #<eof>"
<lle-bout>leoprikler: guix pull as well?
<lle-bout>hash -r?
<leoprikler>I did guix pull before reconfiguring
<lle-bout>unrelated: is there a way to debug execve failures? I have a strange failure where an execve fails with ENOENT (No such file or directory) but judging from ldd all dynamic links are fine
<SanchayanMaity>*sorry typed by mistake
<lle-bout>cbaines: perhaps it's possible to have a lint pass for patches included in GNU Guix but unused in any package?
<leoprikler>so the difference is I no longer seem to get garbage headers, but I can still get EOF.
<lle-bout>leoprikler: progress!
<cbaines>lle-bout, possibly, although currently, lint warnings are attached to packages, and I'm not sure which package those warnings would be for
<leoprikler>indeed 🙂️
<bavier[m]>I was thinking the other way around: warn of patches used in packages but not declared in the makefiles!
<lle-bout>bavier[m]: for that I think a more general way to test "guix pull" easily in an isolated fashion would work
<lle-bout>hmm it wont test packages so maybe not
<lle-bout>maybe "search-patches" could be done at evaluation time instead
<lle-bout>I mean compilation time
<rekado_>lle-bout: “make as-derivation”
<lle-bout>rekado_: ah cool! didnt know!
<lle-bout>rekado_: we should really have a pre-push checklist somewhere.. with all tooling available for that
<civodul>cbaines: just got the "bad read-header-line" error by running a daemon from master
<civodul>cbaines: i'm unsure about commits that fiddle with cached connections, such as 7b812f7c84c43455cdd68a0e51b6ded018afcc8e and subsequent commits
<cbaines>civodul, is this an instance of the problem that was related to gzip substitutes?
<civodul>cbaines: in particular, in 20c08a8a45d0f137ead7c05e720456b2aea44402, call-with-connection-error-handling doesn't do the same as call-with-cached-connection used to do
<civodul>it doesn't handle bad-header and all appropriately
<civodul>that's not good
<civodul>cbaines: no it's not related to gzip substitutes
<civodul>it happens when reusing a stale connection
<civodul>(did we discuss this patch series?)
<civodul>errors related to cached connections have to be handled separately from "normal" networking errors
<civodul>because they typically don't happen normally
<cbaines>civodul, at which point in the flow do you think things could be going wrong, when narinfo's are being fetched, or the actual nars?
<cbaines>http-multiple-get now has inbuilt error handling, including some stuff which looks like it handles bad-header errors (this was added in 205833b72c5517915a47a50dbe28e7024dc74e57)
*civodul reports bug
<civodul>cbaines: the actual nars
<cbaines>and while http-multiple-get isn't aware of connection caching, if a bad-header error occurs, that connection shouldn't be reused (as it'll be closed)
<gn21[m]>lle-bout: Now everything is all right. Thank you.
<civodul>cbaines: honestly, i think we should go back to explicit cached connection handling
<civodul>the reason is that it requires special care
<civodul>bad-header in http-multiple-get doesn't need to be caught because it shouldn't happen
<civodul>so we should let it through when it does, because there's a real issue
<civodul>*but*, if we know we're reusing a connection, then we must catch all these things, reopen a connection when that happens, and retry
<civodul>and that really depends on the call site
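<ed. note>The call-site pattern described above can be sketched as follows. This is illustrative only; the helper names (`open-connection-for-uri/cached` and friends) are hypothetical stand-ins, not the actual (guix scripts substitute) API.

```scheme
;; When (and only when) a connection came from the cache, treat
;; stale-connection errors as retriable: reopen and retry once.
;; A *fresh* connection failing the same way is a real error.
(define (call-with-cached-connection uri proc)
  (let ((port (open-connection-for-uri/cached uri))) ;hypothetical
    (catch #t
      (lambda ()
        (proc port))
      (lambda (key . args)
        (if (memq key '(bad-response bad-header gnutls-error))
            ;; Stale cached connection: open a fresh one and retry.
            (let ((fresh (open-connection-for-uri uri)))
              (proc fresh))
            ;; Anything else: re-raise for the caller to handle.
            (apply throw key args))))))
```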
<lle-bout>civodul: what do you think about using guile-curl there?
<lle-bout>moreover it supports HTTP2
<civodul>nope, thanks :-)
<lle-bout>civodul: why exactly?
<civodul>first, it'd be a rewrite
<civodul>second, we approach problems generally by moving more things to Scheme
<civodul>because that gives finer control over what happens, etc.
<civodul>that'd be a step backward in that regard
<lle-bout>okay, because the HTTP implementation in GNU Guile is quite ancient and low level; things like connection caching should be handled transparently and never even exposed to the user-facing API AIUI. also HTTP2 support speeds things up a *lot*
<civodul>ancient & low-level?
<cbaines>civodul, I think I see how the error handling has changed in process-substitution, I think I may have discounted some of the network related error handling there because no actual data is read
<lle-bout>civodul: low-level as in you must handle connection reuse manually somehow
<cbaines>civodul, but the response header will be, so that bit of the call-with-cached-connection/with-cached-connection error handling would have applied
<lle-bout>ancient because HTTP 1.1 support only
<civodul>cbaines: yeah
<civodul>now reported at
<lle-bout>Implementing HTTP libraries is huge work; I don't think it's a good idea to try and do it in Scheme considering we already have limited hands. on the other hand curl is fast and mature, and I don't think you would need the additional finer control of Scheme there since curl would just work and expose what you need through its C and therefore Guile API
<davexunit>bailing out to a C library is not good. may help in short-term, but bad long-term.
<lle-bout>I think bailing out to a C library for HTTP handling is good, C integration is what GNU Guile is made for, I don't see the problem here
<davexunit>replacing Scheme code with a C library is going to be a very hard sell in Guile land.
<davexunit>as a community we generally work towards the opposite goal.
<civodul>cbaines: would it be an option to revert this patch series? that's a bit brutal but we could then start afresh from a "known state"
<cbaines>civodul, it's an option, but that seems more work than just adding back error handling around fetching nars
<lle-bout>davexunit: nothing prevents writing a complete HTTP library in GNU Guile and switching back to it later, it just doesn't exist now, so
<lle-bout>But I think once this issue is sorted, HTTP 1.1 with TCP connection reuse may very well suffice performance-wise in GNU Guix for a good while
<roptat>what do you mean by huge speed up with HTTP 2.0?
<roptat>what's the difference?
<lle-bout>roptat: HTTP 2 could allow downloading all required substitutes in a single TCP connection in parallel; with HTTP 1.1 and connection reuse you are stuck with 1 substitute at a time (sequential)
<lle-bout>It is ideal for maximizing bandwidth usage
<roptat>so you don't have to send more than one header?
<lle-bout>roptat: you do, but HTTP 2 implements a multiplexing protocol to send multiple requests (queue-ing them up) in a single TCP connection (avoiding TCP handshake latency for parallel requests)
<lle-bout>HTTP 2 also features header compression (HPACK)
<lle-bout>but it's not very important for us (header compression)
<roptat>right, comparing the size of headers vs substitutes ^^
<lle-bout>substitutes are especially fit for this use case, since they can be numerous, introducing many requests
<roptat>you mean narinfo?
<lle-bout>roptat: yes don't know exactly what it is, I mean more generally the substitute fetching phase of GNU Guix
<roptat>from your description I feel like HTTP2 would be beneficial below a certain ratio of data size / request
<roptat>there are two parts: fetching substitute information (narinfo), and fetching substitutes themselves (binaries, in nar archives)
<roptat>the first one is obviously fast because there aren't too many data, but that's probably where you get the most speedup, though it's not really interesting
<roptat>for nar, I don't really understand how multiplexing could help, they're big files, and not so many requests per MB
<lle-bout>I think it is beneficial in every way, individual requests may be slow for various reasons like disk I/O, if you can queue many requests up, these bottlenecks disappear and the systems become fully utilized I/O wise up to their full potential
<roptat>or you hit the bandwidth limit, but ok :)
<roptat>I guess it might be beneficial for connections with high latency, you don't have to wait between substitutes
<cbaines>civodul, I thought I understood call-with-cached-connection, but then I got to the bit where it runs proc again, and then promptly raises an exception
<lle-bout>roptat: yes that also
<cbaines>civodul, which makes me unsure how this could act to retry a request, if it's just going to re-raise the original error anyway?
<lle-bout>roptat: but more generally, it means GNU Guix can send in one row all things it needs and the server can answer all at the same time also, so bandwidth is full as much as it can (and therefore faster..)
<lle-bout>roptat: dnf over at Fedora's implemented something even better, that is, fetching from multiple repo mirrors at the same time, which will come naturally to us when ipfs/gnunet support is complete
<lle-bout>ipfs is even better since it can download chunks of substitutes from different places
<roptat>help, I'm kinda sold on the idea of using http2 now
<lle-bout>Sometimes I felt like substituting over SSH was faster than HTTP
<lle-bout>roptat: for it to work we also need server-side support, so not sure how the substitute server currently serves them
<lle-bout>curl doesnt do server side AFAICT, we would need nghttp2 bindings or something there
<roptat>through nginx, which supports it I think
<lle-bout>roptat: and nginx reads from disk? not another server running HTTP 1.1?
<lle-bout>roptat: we could implement HTTP 2 in GNU Guile directly but that's quite complicated..
<roptat>it reads from guix publish, which uses guile, but it has a local cache
<lle-bout>roptat: guix publish is HTTP 1.1?
<roptat>right, it's written in guile
<lle-bout>HTTP2 I think is especially interesting for our various CI systems
<lle-bout>guix-build-coordinator-agent / cuirass
<roptat>could help with offloading too :)
<roptat>well, anything network related
<lle-bout>offloading is done over SSH?
<roptat>mh.. "kernel: nouveau 0000:03:00.0: gr: GPC0/TPC3/MP trap: global 00000000 [] warp 3c0001 [STACK_ERROR]" that's no good, is it?
<roptat>lle-bout, ah right, not http
<lle-bout>Basically having good network links between CI workers and coordinators stuff without HTTP2 is kind of wasting it
<cbaines>civodul, I hadn't picked up on the if and was reading the lines as happening sequentially, it makes more sense with the if
<lle-bout>HTTP 1.1 is really dumb when it comes to parallel file transfer
<civodul>cbaines: either it retries (in case of bad-header, gnutls-error, etc.), or it re-raises (some other error)
<cbaines>civodul, yeah, I'd forgotten about the if
<lle-bout>roptat: some demo of HTTP2 in action for many little image files:
<civodul>lle-bout: perhaps you've seen discussion about CPU-bound substitutions, connection reuse, etc.
<civodul>currently HTTP1 is far from being a bottleneck
<civodul>plus, GET responses for nars are as fast as can possibly be
<lle-bout>civodul: CPU-bound, that I had not seen
<civodul>lle-bout: for example
<civodul>i suggest looking into these areas first :-)
<roptat>civodul, although I don't think we decompress on the fly, do we?
<lle-bout>civodul: I see thanks, also wonder if we decompress in parallel? like pgzip?
<roptat>there's always some time between retrieval of the file and fetching the next substitute
<lle-bout>with HTTP2 (or even HTTP 1.1, just several TCP connections instead of a single one) we could probably decompress on the fly and queue as many requests as there are cores at a time
<lle-bout>so without parallel decompression of individual gzip/other blobs we still use multiple cores for decompression
<lle-bout>civodul: I really have to learn Scheme to help meaningfully there
<cbaines>civodul, I've sent a patch which I think restores some error handling in process-substitution; I haven't had a chance to test it locally yet
<civodul>roptat: there's no delay nowadays in between substitutes:
<roptat>civodul, oh very nice!
<roptat>mh... and that's not so recent, I wonder why I haven't noticed
<lle-bout>I noticed substitution got much faster since I first installed GNU Guix so definitely there's been lots of improvements even without HTTP2
<efraim>would it be faster to do substitution over a local network without any compression?
<lle-bout>efraim: to me it depends if the server has to compress on the fly or has already cached compressed artifacts
<lle-bout>often decompression is fast and compression is complex
<lle-bout>probably not worth doing zstd maximum compression ratio if you have lots of unutilized free bandwidth to spare
<lle-bout>(and you don't already have the compressed blob)
<lle-bout>storing compressed blobs for 3-4 different compression formats doesn't look worth it to me
<lle-bout>and spending CPU time on compression server-side every time (unless we have plenty available) doesn't look worth it either
<lle-bout>Intel/AMD hardware has gzip compression/decompression accelerators so I would use gzip with that and nothing else (server side)
<civodul>efraim: it depends on bandwidth and available CPU power, which typically vary over time, but overall zstd is cheap for the server and super cheap for the client, so usually a win
***rekado_ is now known as rekado
<lle-bout>civodul: do we store copies of compressed artifacts or it's all on the fly done at each request?
<rekado>lle-bout: there’s a cache
<lle-bout>okay, a cache, but doesn't the cache get filled up a bit too quick with duplication like this?
<efraim>I was thinking mostly between multiple machines on my home network
<lle-bout>efraim: there's no way gzip (minimum compression level) can hurt
<Aurora_v_kosmose>So all the bad responses I get when pulling now are due to guile-git using something newer than what debian has?
<lle-bout>Aurora_v_kosmose: hello, nope, actual stressful bug pending fix
<rekado>lle-bout: sorry, what’s the concern here? Space usage?
<lle-bout>rekado: yes
<rekado>lle-bout: why?
<Aurora_v_kosmose>lle-bout: Oh, that's unfortunate, but good to know it's not just me. In the meantime I'll keep manually pulling failing derivations with guix-build.
<lle-bout>rekado: if you have lots of fast SSDs to use as cache then it's probably fine
<lle-bout>whether it's at the end faster to offer only one single compression algorithm to all users and cache that or offer several and cache all of them (worse off caching diversity), that I don't know
<lle-bout>seems like server compute power is not a problem here
<lle-bout>(for GNU Guix)
<lle-bout>official servers
<lle-bout>Aurora_v_kosmose: "while true; do guix pull && break; done" may help
<Aurora_v_kosmose>Oh, just retrying pulls directly should work?
<lle-bout>Aurora_v_kosmose: yes the failures are non-deterministic, but not sure what the failures you are getting are
<lle-bout>maybe I am mistaken and you're not meeting the problem I think you are meeting
<lle-bout>Aurora_v_kosmose: yes so that is it, just retry until it works and wait for updates (still not fully fixed)
<lle-bout>Aurora_v_kosmose: follow
<Aurora_v_kosmose>Alright. Glad to know it's not just on my end, and is being worked on too.
<Aurora_v_kosmose>Thanks for the help.
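lle-bout's one-liner above retries forever; a bounded variant can be sketched as a small POSIX shell function (the `retry` helper name is made up):

```shell
# retry MAX CMD...: run CMD until it succeeds, giving up after MAX attempts.
retry() {
    max=$1; shift
    n=1
    while ! "$@"; do
        [ "$n" -ge "$max" ] && return 1   # out of attempts
        n=$((n + 1))
        sleep 1                           # small pause between attempts
    done
}
# e.g.: retry 5 guix pull
```

This only helps because the failures are non-deterministic, as noted above; a deterministic failure would just fail MAX times.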
<davidl>Anyone who has a working cuirass specification/configuration in the new format that uses custom/non-guix channels? I filed a bug report because I can't make mine work: #47164
<lle-bout>davidl: yes there's one here:
<davidl>lle-bout: those specs only use guix channels.
<lle-bout>davidl: okay sorry, then nope
<davidl>lle-bout: I need to use a config from my own channel/git repo and packages from my own channel/git-repo.
<davidl>lle-bout: no probs, thank you still. I was able to locate the documentation for the current specs here:
<davidl>but I think there's a bug in the cuirass code because non-guix channel specs haven't been tested.
<davidl>I assume* haven't been tested.
<raghavgururajan>lle-bout: Ah yes, Rust. I am wondering if it's due to lack of substitutes for c-u on ci, or somehow the patches trigger rebuilds of rust.
<raghavgururajan>(Hello Guix!)
<ennoausberlin_>Hello. Are there any efforts to update the tensorflow package to 2.1?
<lle-bout>raghavgururajan: hello! :-D
<lle-bout>Rust is now finished; on to the next things
<raghavgururajan>lle-bout: Wow! What beast are you running guix on?
***lukedashjr is now known as luke-jr
<lle-bout>ennoausberlin: you can search on
<lle-bout>apparently there isn't
<lle-bout>raghavgururajan: AMD Epyc 7nm
<ennoausberlin_>I am on mobile. I will check later. Thank you for any answers
<leoprikler>cbaines: I tried fixing this in v5, but v5 is split in patchwork again :(
<cbaines>leoprikler, hmm, it looks sort of split in my mail client as well
<cbaines>leoprikler, did you send the patches with a single git send-email command?
<leoprikler>no, I have to split them when there's more than 50
<rekado>hah, Guix System comes preinstalled on these laptops:
<cbaines>leoprikler, ah, I guess that's why they're being treated as multiple series
<cbaines>leoprikler, out of interest, why do you send them in batches?
<cbaines>leoprikler, also, git send-email has a --batch-size option, are you using that?
<cbaines>rekado, ooh, that's cool :D
<leoprikler>I didn't know about batch-size so I was manually partitioning them
<raghavgururajan>> lle-bout‎: raghavgururajan: AMD Epyc 7nm
<raghavgururajan>So Epic!!!! xD
<leoprikler>and batches of 10 seemed the logical thing to do at the time
<cbaines>leoprikler, OK, well, I haven't tried it myself, but I'd hope that the built in batching support would convince Patchwork that they're one series of patches
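cbaines' suggestion as a concrete invocation (the list address and patch directory are examples); `--batch-size` makes git send-email reconnect to the SMTP server every N messages, so large series don't need manual partitioning:

```shell
# Send the whole series in one command; git re-logs-in every 50 messages,
# optionally waiting a few seconds between sessions.
git send-email --batch-size=50 --relogin-delay=5 \
    --to=guix-patches@gnu.org outgoing/*.patch
```

Because it is still a single send-email run, the messages keep one In-Reply-To chain, which is presumably what lets Patchwork see them as one series.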
<lle-bout>raghavgururajan: gtk+@2 fails: FAIL:
<lle-bout>raghavgururajan: do you have IPv6 networking?
<lle-bout>I could give you access to my store so you can fetch the substitutes I built
<lle-bout>raghavgururajan: last one is glib-networking
<raghavgururajan>lle-bout: Let me check.
<raghavgururajan>> ‎lle-bout‎: I could give you access to my store so you can fetch the substitutes I built
<raghavgururajan>That would be great. Thanks a lot!
<raghavgururajan>Do you need my ssh key?
<leoprikler>cbaines: built-in batch support seems to work
<cbaines>leoprikler, cool, that sounds good :)
<lle-bout>raghavgururajan: I need nothing, just if you have IPv6 networking or not, otherwise you can run as root miredo (packaged in GNU Guix)
<raghavgururajan>lle-bout, I think I do. Is there a way to test via browser?
<lfam>Finally got (what I think is) a satisfactory patch submitted for <>
<lfam>Security-minded people may be eager to test :)
<lle-bout>raghavgururajan: try: ping6
<lle-bout>ping6 *
<raghavgururajan>with -c 3?
<raghavgururajan>PING (2606:cc0:10:415:f24d:a2ff:fe74:3eea): 56 data bytes
<raghavgururajan>ping6: sending packet: Network is unreachable
<lle-bout>raghavgururajan: then run: sudo guix environment --ad-hoc miredo -- miredo
<lle-bout>then retry
<raghavgururajan>Works now.
<lle-bout>raghavgururajan: great!
<lle-bout>let me setup guix-publish
<lfam> <> !!!
<lfam>"Unprivileged chroot()"
<efraim>lfam: have a subscriber link you can share?
<lfam>Could be great news for Guix
<lfam>I think that, if we are interested in using that code, somebody from Guix should join the conversation on the kernel-hardening ML
<raghavgururajan>lle-bout: Could you re-try the gtk+-2 build with #:parallel-tests? #f
***jpoiret1 is now known as jpoiret
***stefanc_diff_ is now known as stefanc_diff
***drakonis- is now known as drakonis
***xMopx- is now known as xMopx
<lle-bout>raghavgururajan: yes
<lle-bout>raghavgururajan: http://[2a01:e0a:2a2:1350:cd10:777c:7b57:3bb6]/
<lle-bout>raghavgururajan: same thing without parallel testing on gtk+@2
<raghavgururajan>lle-bout: I see. Can you send me your modified patches?
<raghavgururajan>I guess you fixed some of use-modules.
<raghavgururajan>I mean. All patches via format-patch.
<lle-bout>raghavgururajan: on the ipv6 http link above you have the signing key available and then you can use the address directly as substitute server
<lle-bout>raghavgururajan: I didnt modify any commit
<raghavgururajan>Ah I see okay.
<raghavgururajan>its guix import command right?
<raghavgururajan>guix archive --auth something
<lle-bout>raghavgururajan: are you on GNU Guix System?
<lle-bout>raghavgururajan: -- add something like this to your system configuration
<lle-bout>authorize is deprecated on GNU Guix System because it violates stateless config
<raghavgururajan>Ah okay.
<lle-bout>raghavgururajan: and you can add the ipv6 url on individual commands
<lle-bout>raghavgururajan: this way, you don't have my substitute server for your whole system..
<lle-bout>It would be nice to have more granularity when it comes to trusting substitute servers keys (like for what purpose exactly, temporary append-only store in a container you can discard modifications of maybe..)
<lle-bout>Also sandboxed GNU Guix channels..
<roptat>I'd like to send a message on weblate to our translators, hope this is ok?
<lle-bout>raghavgururajan: other things fail as well but tired now..
<roptat>at some point, I might write something for the cookbook about the translation process, what to take care of, etc
<lle-bout>raghavgururajan: you can reproduce results using the big command to test I sent earlier
<raghavgururajan>lle-bout: Thanks for your help. I'll fix them and get back to you.
***wielaard is now known as mjw
<Gooberpatrol66>Does anyone have btrfs subvolumes working on guix?
<pkill9>Gooberpatrol66: I was playing around with them the other day, not with guix installed in a btrfs subvolume
<pkill9>just in an image file which i wrote a btrfs filesystem to
<Gooberpatrol66>pkill9: did you generate a subvolume with config.scm?
<pkill9>with btrfs-progs
<pkill9>which has the btrfs command
<Gooberpatrol66>I'm not even sure if you're able to generate a subvolume with config.scm
<Gooberpatrol66>The manual doesn't explicitly state if you can
<jackhill>pkill9: what subvolume arrangement are you going for? I have my home volume on a different subvolume, but have /gnu/store on subvolid=5
<jackhill>I couldn't figure out how to get grub to load the kernel and initramfs from a different subvol (although with Guix, btrfs snapshot and rollback of the volume with /gnu/store is less compelling, so I decided I was fine with that ☺)
<nckx>Morning Guix.
*jackhill waves
<jackhill>pkill9: If I manually edited the paths in grub, I found that guix was happy to mount a different default subvol
<nckx>Gooberpatrol66: You can't create subvolumes in an operating-system, just like you can't mkfs. It's not in scope.
<nckx>Guix deploy should support it, if it doesn't already.
<Gooberpatrol66>ok, so i need to do it manually, and then mount it with config.scm
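A minimal sketch of that manual step (device and subvolume names are examples; the config.scm side is only described here, not shown in full):

```shell
# Create the subvolume by hand; an operating-system declaration cannot do
# this, just as it cannot mkfs.
sudo mount /dev/sda2 /mnt            # the btrfs filesystem itself
sudo btrfs subvolume create /mnt/@home

# In config.scm, the matching file-system entry then passes the mount
# option (options "subvol=@home") so Guix mounts that subvolume at /home.
```

The declarative part is only the mount: the file-system record's `options` string is handed to the kernel like any other mount option.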
<jackhill>it would be lovely if guix deploy/image supported a richer filesystem language. Something for the todo list I suppose.
<pkill9>jackhill: i have /home on a different partition
<pkill9>i'm not sure what the purpose of different subvolumes is
<atw>Building openjdk@14 is failing for me, and I see "Connection attempt failed: Connection refused" in the build log, which I think sounds a little suspicious. Have other people experienced this?
<jackhill>pkill9: cool. I meant to address my replies at Gooberpatrol66 as well. For me a different subvolume means I can keep independent snapshots and send/recv it for migration and backup
<pkill9>oh yea, i get it for snapshots, but, other than that i'm not sure
<Gooberpatrol66>most of guix is disposable because it can be generated from a single file, so that's a lot of data that you can exclude from your backup subvolume
<roptat>well, I'll send the message later this evening if nobody reacts :)
<nckx>roptat: Good idea :) Can't comment on the contents (I assume that if you wrote it, it's the way things are?), but don't really get ‘If it's not on weblate, what are you waiting for? :-)’ -- if you mean ‘ask them to add it’, you might want to explicitly point out that it's possible, and how.
<roptat>nckx, ha sure, thanks!
<roptat>I actually got the string freeze date wrong, we're planning it for the 12th, not the 11th, otherwise I think the rest is correct ^^
<roptat>I think it's just as easy as pushing a button for the users, you just have to visit the component and "add your language"
<nckx>I guess what I mean, then, is I don't understand the difference between ‘If your language is on weblate but not yet in the Guix repository’ and ‘If it's not on weblate, what are you waiting for?’.
<nckx>It's probably obvious if you know even the first thing about Weblate, but I'm an example of someone who just started clicking enough random things to start translating ☺
<nckx>So the first quote is, for example, the ‘nl’ situation? I've created a translation for it but it's not in guix.git.
<nckx>Is that correct?
<nckx>(It would be artwork or maintenance or whatever, not guix.git.)
<roptat>nckx, yes, correct
<roptat>but I can remove the sentence if that's confusing
<nckx>I think it could be confusing if you're not familiar with how Gettext works (and to a lesser extent Weblate) and why ‘adding a language’ is a multi-step process. But that might be obvious to ‘real’ translators of which I am not one.
<nckx>Just someone who got sucked into translating 😛
<apteryx>diffoscoping qemu has an ETA of 2h20. Hmm.
<bone-baboon>I have been using the --no-substitutes flag when installing software. Icecat has a dependency on rust. Rust has a dependency chain of previous rust versions. Why does rust require these previous versions?
<lfam>bone-baboon: They are the "bootstrap chain".
<lfam>In order to build Rust, you have to have the previous version of rust
<lfam>The chain goes for many versions
<bone-baboon>What happens when you get to the end of the bootstrap chain? Is rust part of the bootstrapping initiative at
<apteryx>bone-baboon: because newer versions of rust depend on features added in the last one
<apteryx>(previous one)
<lfam>bone-baboon: The chain goes back to the oldest version that can be built without Rust. I believe it's C
<rekado>bone-baboon: nobody representing the rust community participates in
<bone-baboon>rekado: lfam: apteryx: Thanks
<nckx>bone-baboon: At the end, you have the latest Rust compiler. At the beginning, you have mrustc, a Rust compiler in mrustc. ‘Bootstrapping’ as a concept or good practice has nothing to do with, which I guess you'd call an ‘awareness campaign’/extended hackathon to bootstrap more stuff.
<nckx>Lol. s/in mrustc/in C/.
<nckx>That sentence was not bootstrappable ☹
<apteryx>bone-baboon: note that on core-updates the time to build the full rust bootstrap has been halved, mostly by bootstrapping directly from a newer rust and getting rid of the tests of the intermediate rusts.
<bone-baboon>apteryx: So the tests are only done for the versions that are dependencies of programs?
<nckx>rekado: Is there no IPv6 at the MDC? Could it be arranged for berlin? Alternatively, is it possible for berlin to respond to ICMP pings from a whitelisted IP address?
<apteryx>right; the test suite on core-updates is run only for the current rust version, which is the only one made public.
<apteryx>lfam: had you tried running --check -K on qemu 5.2.0? It doesn't seem to be reproducible. ISTR it used to be.
***scs is now known as Guest75682
<lfam>No, I never check for reproducibility
<lfam>I see it as a "nice to have"
<apteryx>but I think we already had it, so it's more like a regression
<apteryx>let me confirm
<Yar54>Hello again! For anyone that's wondering, 3d accel works with AMD apparently, just not opencl but there's an experimental package for that
<lfam>Yeah, it's a regression. I just think it's really low-priority
<nckx>Thanks for letting us know, Yar54.
<lfam>That's great Yar54
<lfam>QEMU is one of those packages like linux-libre and chromium that should be updated despite minor regressions like that
<lfam>Well, it's not as serious as the kernel or a browser. But it still brings a steady stream of security fixes
<apteryx>still worth looking into and reporting
<Yar54>PS: is VirtualBox free? I still don't know
<lfam>You might have luck restricting the diffoscope to one directory at a time, apteryx
<lfam>Yar54: It depends how you look at it. If I remember correctly, it requires a nonfree compiler to build
<apteryx>lfam: yes I'm using diff -r then feeding one binary to diffoscope
<Noisytoot>nckx: mrustc is in C++, not C
<nckx>Noisytoot: You're absolutely right. It's in C++ but generates C, IIRC, right?
<Noisytoot>lfam: I can't recompress my filesystem to use zstd, as libreboot 20160907's GRUB doesn't support btrfs with zstd compression, and my /boot is on the same filesystem
<apteryx>Yar54: with which GPU is this?
<Yar54>Reddit lol
<Yar54>Someone told me
<nckx>Noisytoot: Could you chainload Guix's GRUB or are you using LUKS etc.?
<nckx>It supports zstd.
<rekado>nckx: I already asked the network people to enable ICMP
<Noisytoot>I'm not using GuixSD
<Noisytoot>*Guix System
<lfam>Sorry Noisytoot, I don't remember the context of previous discussion :)
<nckx>GuixSD needs to go in case studies of ‘how damn hard it is to rebrand things, years later’.
<Noisytoot>But where would I put the GRUB that I chainload?
<apteryx>lfam: 5.1.0 was indeed reproducible
<nckx>rekado: And did they respond?
<Noisytoot>I would have to create a new partition, as everything (including /boot) is encrypted
<nckx>Hence the LUKS question.
<nckx>And the ‘or’.
<Yar54>I heard you speaking about mrustc, is it because rustc still has the trademark policy?
*Noisytoot is using LUKS
<Yar54>I thought in 2021 they had a vote to get rid of it
<nckx>Yar54: No, because Rust can't be built without a binary Rust compiler.
<Yar54>oh ok
<nckx>Trademarks don't make software nonfree.
<nckx>‘GNU’ is a trademark.
<rekado>nckx: they did but I dropped the ball
<Yar54>Hyperbola said that Rust's trademark policy is as bad as Firefox's so they're moving to OpenBSD because Linux uses Rust sometimes
<rekado>nckx: I have just confirmed that we do indeed want to allow ICMP from the internet
<nckx>Oh, I read that as a parody but it probably isn't.
<rekado>(I don’t get this back and forth; I requested something and the only hold-up is that I didn’t restate that I indeed want what I asked for…)
<nckx>rekado: Great! Then I can set up a Hurricane Electric IPv6 tunnel, assuming the MDC doesn't have it natively.
<nckx>rekado: Maybe it's in their checklist; else I know people like that, unfortunately.
<rekado>I don’t know what’s up with ipv6; I don’t get why so many admins are *still* so conservative about it.
<nckx>It's horrible.
<nckx>(Not IPv6. The other thing.)
<lfam>My ISP, which is one of the biggest ISPs in the USA, still doesn't support ipv6
<Noisytoot>Both of my ISPs don't either
<nckx>Belgium's reasonably great at it, for once.
<Noisytoot>Virgin Media and EE
<lfam>The other big USA ISP has supported it for years
<lfam>Comcast supports it. I used them previously and it was great. But I switched away from them because their service is too expensive
<nckx>rekado: Oh, and thanks!
<mubarak>Hi guys
<pkill9>does anyone know how to use a daemon to execute a command that would act like calling it normally, i.e. the calling program gets the output of the command and can write to the opened program's stdin?
<pkill9>the usecase is executing a command from within a container, but with the command executed outside the container
<pkill9>but still connected to the shell or whatever that called the program
<nckx>Yar54: I think some people don't understand what trademarks are and aren't, and it leads to suboptimal decisions like that one.
<nckx>Oh, they left.
<nckx>pkill9: Is that what systemd-run does?
<pkill9>i don't know
<nckx>Or the equivalent of ‘ssh’.
<mubarak>yesterday I tried installing Guix 1.2.0 amd64, but I faced a problem in my /etc/config.scm after modifying desktop.scm. When I run "guix system init /mnt/etc/config.scm /mnt" I get an error like (many wrong-number-of-arguments errors in system init). I think that's what I get every time I run guix system init
<mubarak>here is my config.scm
<apteryx>lfam: upstream bug it as
<lfam>Thanks apteryx!
<pkill9>nckx: it works with ssh, but I don't want to use that because I want to restrict the commands it can run
<pkill9>actually ssh can probably do that
<nckx>It can.
<nckx>But restricting Unix commands is hard and fraught with peril.
<nckx>But then that would apply to any solution.
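The restriction nckx mentions is typically done with a forced command in sshd's authorized_keys (a config fragment; the key and script path here are made up):

```shell
# ~/.ssh/authorized_keys entry on the server. sshd ignores whatever command
# the client asked for and runs the forced one; the client's original request
# is still available to that script in $SSH_ORIGINAL_COMMAND for whitelisting.
command="/home/user/bin/allowed-cmd",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAAC3... user@host
```

The peril nckx alludes to is in the whitelisting script itself: validating $SSH_ORIGINAL_COMMAND safely is where most such setups go wrong.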
<lfam>mubarak: I tried using your config.scm, but it fails due to a typo in the user-accounts section
<mubarak>I remember in the past when I installed GuixSD 0.14, when you mount partitions, there is file-systems (cons in
<mubarak>(file-systems (cons (file-system (device (file-system-label "my-root")) (mount-point "/") (type "ext4"))
<lfam>mubarak: There is an extra parenthesis at the end of "(comment "Alice's brother"))"
<mubarak>and when mounting two partitions I add * to cons
<lfam>After fixing that, it fails due to an issue with how you are using keyboard-layout
<pkill9>it can't be that difficult
<mubarak>lfam: There was (password (crypt "alice" "$6$abc")) after that line and I removed it, and I added that extra ) when I removed the password
<lfam>I'm not exactly sure, because I haven't used keyboard-layout before, but I think you have to define it within operating-system. There is an example here:
<lfam>mubarak: When I added a keyboard-layout, like in the example, `guix system vm config.scm` did not crash
<nckx>If you're talking about SSH, it's not difficult at all.
<pkill9>yea i'm talking about ssh
<mubarak>lfam: I removed the keyboard-layout line (keyboard-layout (keyboard-layout "us" "altgr-intl")). I thought English (US) is the default in all OSes
<lfam>That's correct mubarak, but then you should also remove it from bootloader and xorg-configuration
<nckx>Yes, but altgr-intl isn't.
<nckx>If you don't rely on that, you can drop it.
<mubarak>lfam: in desktop.scm, keyboard-layout says it "provides dead keys for accented characters." and I don't need that, so I removed that line
<mubarak>so should I also remove " (keyboard-layout keyboard-layout))))" from services?
<mubarak>and from the bootloader?
<lfam>You can remove it from bootloader
<lfam>For services, remove the set-xorg-configuration expression
<mubarak>and the default keyboard layout will be used. right?
<lfam>I will check for you
<mubarak>lfam: what about the file-systems part, is it correct?
<lfam>Make sure that the filesystems exist and have the right labels
<nckx>Good night Guix.
<lfam>mubarak: I fixed those problems in your config.scm:
<lfam>I can't check that the file-systems section is correct. Only you can make sure about that
<lfam>You should also make sure the target device in bootloader is correct
<lfam>Also, you don't have to include the user-account comment if you don't want to use it
<lfam>But, it won't cause problems either way
<mubarak>Thank you lfam :)
<mubarak>I will give it a try. and I will come back (mostly tomorrow ) and tell you what happened
<lfam>Okay, good luck!
<apteryx>lfam: qemu 5.2.0 is now on master. thank you!
<lfam>Thanks for seeing it through
<apteryx>it was more work than I'd hoped, but I'm rather happy with the result
<apteryx>mostly because of my added bits such as the info manual and the static output
<lfam>It's a shame that we still have to patch the build system for that. I'm grateful you did the work
<pkill9>has anyone set up guix so no ~/.guix-profile is required?
<lfam>I think that, if you don't install any packages, you won't have a ~/.guix-profile
<lfam>I'm not sure if that's what you had in mind
<pkill9>i suppose you could use a different profile path
<pkill9>with --profile
<pkill9>and just export the variables
<pkill9>though the user's guix profile is hardcoded in many places
<pkill9>I'd prefer the /var/guix/profiles/per-user/<user>/guix-profile path to be hardcoded instead
<lfam>That directory is considered to be configurable
<pkill9>how can that be if it's hardcoded?
<lfam>When building Guix, you have to set --localstatedir=XXX, and the standard location is /var
<lfam>But, the default is something else
<pkill9>makes sense to use that as symlink then i guess
<lfam>What's wrong with "~/.guix-profile"?
<pkill9>tbh it's a small issue which would be simpler to work around in a different way; idk why i'm resorting to thinking of big changes to fix it
<pkill9>but also I like the idea of getting rid of it
<pkill9>to keep clear separation between OS and user data
<pkill9>and less problems arise if /home partition fails to mount or something
*lfam looks at ~/.config and ~/.local
<pkill9>but the small issue is that firejail can't see ~/.guix-profile, and doesn't let you whitelist symlinks
<pkill9>and my SSL_CERT_DIR and SSL_CERT_FILE are using the .guix-profile path instead of an absolute path
<pkill9>but it doesn't make sense because the ~/.guix-profile/etc/profile uses absolute store paths
<leoprikler>While you can probably work around ~/.guix-profile you will always encounter profiles as symlinks in Guix
<pkill9>it's not getting set at /etc/profile nor in ~/.profile.d
<leoprikler>btw. thanks for reminding me about the profile proposal I have yet to write up :(
<pkill9>nor in ~/.profile
<pkill9>I just need to know where it's setting the SSL env vars
<lfam>Can you clarify what you are asking?
<lfam>For me, the Guix-y SSL vars are found in ~/.guix-profile/etc/profile
<lfam>That's on Debian
<pkill9>i want to know where SSL_CERT_DIR is being set, because it's using ~/.guix-profile/etc/ssl instead of /gnu/store/xxx-profile/etc/ssl
<leoprikler>if you have some ssl stuff in ~/.guix-profile, that does get expanded as $HOME/.guix-profile
<leoprikler>because $GUIX_PROFILE is actually bound when expanding it
<leoprikler>so it uses that instead of absolute paths
<leoprikler>you can expand it once without binding $GUIX_PROFILE and it should use absolute paths
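leoprikler's point can be demonstrated with a mock of the `${GUIX_PROFILE:-…}` fallback pattern that Guix-generated profile files use (the store path here is invented):

```shell
# Fake etc/profile using the same fallback as the Guix-generated one:
mkdir -p /tmp/profile-demo
cat > /tmp/profile-demo/profile <<'EOF'
export SSL_CERT_DIR="${GUIX_PROFILE:-/gnu/store/abc123-profile}/etc/ssl/certs"
EOF

# With GUIX_PROFILE bound, the symlink path wins:
GUIX_PROFILE="$HOME/.guix-profile"
. /tmp/profile-demo/profile
echo "$SSL_CERT_DIR"    # symlink path: $HOME/.guix-profile/etc/ssl/certs

# Unset it first and the absolute store path is used instead:
unset GUIX_PROFILE SSL_CERT_DIR
. /tmp/profile-demo/profile
echo "$SSL_CERT_DIR"    # -> /gnu/store/abc123-profile/etc/ssl/certs
```

So sourcing the profile once with $GUIX_PROFILE unset is enough to get absolute /gnu/store paths into the environment that firejail inherits.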
<pkill9>so i need to modify /etc/profile
<pkill9>or get firejail to put a symlink to it in the sandbox
<leoprikler>I wouldn't head into that territory
<leoprikler>I would rather set up the firejail container in a way, that profiles get expanded explicitly without binding $GUIX_PROFILE
<leoprikler>though I don't really know how it's currently being invoked
<leoprikler>are you even invoking it directly?
<pkill9>it inherits the environment variables
<leoprikler>from where?
<davidpgil[m]>my guix pull is failing
<davidpgil[m]>says a depenency cannot be built
<pkill9>leoprikler: from the environment you call firejail in
<leoprikler>so you do manually invoke it then?
<davidpgil[m]>anyone have any ideas on how i can fix this?
<leoprikler>i.e. you can choose to just `source ~/.guix-profile/etc/profile` before
<pkill9>yea, i have a script for calling executables
<pkill9>I symlink to the script, which when called searches for the actual executable and runs it with firejail
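A sketch of such a dispatcher, under the assumption that the symlink directory is the first PATH entry (all paths and the firejail flags are illustrative, not pkill9's actual script):

```shell
# Write the dispatcher; symlink program names to it. When invoked through a
# symlink, it drops its own directory from PATH, finds the real binary, and
# execs it under firejail with stdin/stdout/arguments passed through.
mkdir -p /tmp/fj-demo/bin
cat > /tmp/fj-demo/bin/dispatch <<'EOF'
#!/bin/sh
cmd=${0##*/}
PATH=${PATH#*:}                        # drop the symlink dir (first PATH entry)
real=$(command -v -- "$cmd") || exit 127
exec firejail --quiet -- "$real" "$@"
EOF
chmod +x /tmp/fj-demo/bin/dispatch
# Usage: ln -s /tmp/fj-demo/bin/dispatch /tmp/fj-demo/bin/firefox
```

Dropping the symlink directory before the lookup is what prevents the dispatcher from finding (and endlessly re-executing) itself.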