IRC channel logs



***ChanServ sets mode: +o litharge
***litharge sets mode: +b *!*@2001:470:69fc:105::2:34a4
***havershayer[m] was kicked by litharge (You are banned from this channel (by nckx))
***litharge sets mode: -o litharge
<acrow>So, if we fail to reproduce ci knows to stop further builds? until the package is updated?
<nckx>No. It will rebuild each change to the package (or a transitive input). Reproducibility is not tracked.
<nckx>By the CI, that is.
<vagrantc>we can dream of a better future, though! :)
*nckx describes the world as it is, dreams cost extra.
*vagrantc rummages around in the couch cushions for spare change
<nckx>Nope. Just more guix lint.
<nckx>vagrantc: Is there a theme to R irreproducibility?
<acrow>Ah, so all these mismatched checksums could be caused by changes in transitive inputs? Although when there are many builds and none match; well, the source emerges as the culprit because the inputs are relatively stable.
<vagrantc>acrow: no, those are for a given guix commit.
<nckx>acrow: Not quite, I think, although I'm not sure what you mean.
<vagrantc>does guix publish publish the public key used to sign its packages, or do you have to publish that independently?
<cbaines>I believe it does
<vagrantc>e.g. wget
<nckx>vagrantc: It does by default, yes.
<nckx>There's a ‘welcome page’ at / and the key is at / by default.
<cbaines>I think it's meant to be available at / but maybe NGinx on doesn't reverse proxy requests at that path
<oriansj>unmatched-paren: I'll add that to my todo list, probably somewhere after solving the Linux bootstrap problem and finishing the GHC bootstrap in C
<vagrantc>i wonder if i could hide that somehow ...
<vagrantc>ah yes, put it behind a proxy
<acrow>Ok, I should've thought about it more. the derivation checksum is calculated from the checksums of the derivations of the inputs (so when an input changes, we have an entirely new derivation to build from) .. So, we are building derivations and there should be no variation in the output checksums. I was thinking about how changes in the inputs affect things. IIUC, derivations are reproducible, no?
<acrow>well, at least, they ought to be.
<oriansj>and unfortunately some absolutely are not
<nckx>acrow: The derivations ‘themselves’? Yes, indeed.
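The idea acrow describes can be sketched as a toy Merkle-style computation (illustrative only; this is not Guix's real derivation format, which hashes serialised .drv files): the derivation hash covers the hashes of its inputs, so bumping any input yields an entirely new derivation.

```shell
# Toy "derivation hash" (not Guix's actual scheme): hash the input
# hashes together, so the identity of a build covers its whole closure.
input_a=$(printf 'gcc-11.3.0' | sha256sum | cut -d' ' -f1)
input_b=$(printf 'glibc-2.33' | sha256sum | cut -d' ' -f1)
drv1=$(printf '%s\n%s\n' "$input_a" "$input_b" | sha256sum | cut -d' ' -f1)

# Bump one input; the derivation hash changes with it.
input_b=$(printf 'glibc-2.35' | sha256sum | cut -d' ' -f1)
drv2=$(printf '%s\n%s\n' "$input_a" "$input_b" | sha256sum | cut -d' ' -f1)

[ "$drv1" != "$drv2" ] && echo "new input, new derivation"
```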
<vagrantc>i want to set up a build garden (not big enough to be called a farm) and keep its signing keys a secret ...
<nckx>vagrantc: Why?
<vagrantc>nckx: if you can successfully download packages from it that match the signing keys of ci or bordeaux, you know it's at least marginally reproducible :)
<nckx>(FYI nginx does an acceptable job of proxying guix publish, if you have nowhere to start.)
<cbaines>vagrantc, narinfos actually include the full signing key in the signature, so you'd want to not sign nars entirely
<vagrantc>cbaines: ah.
<vagrantc>is that ... an option with guix publish?
<nckx>Is secrecy really required for that?
<vagrantc>i've never managed to get guix publish to start without a key
<nckx>I don't think so, hence my scepticism.
<oriansj>vagrantc: you need only hide the secret key, not the public key
<vagrantc>nckx: you could just not trust that key, true
<vagrantc>oriansj: in this case, i'd like to hide both, but that seems maybe a non-option
<oriansj>well, if you are willing to setup a proxy then it is a non-issue
<vagrantc>heh. or publish the private key as proof you should not trust the signing key :)
<oriansj>as you can just make the proxy return a rick-roll video anytime someone tries to get your private key
<nckx>oriansj: The key is in the .narinfos.
<vagrantc>but if it's embedded in the narinfo, as cbaines said, it's not so easy
<nckx>You can rewrite those to contain a roll, but that needs a pretty heavy level of proxy.
<nckx>Better off modifying guix publish directly to skip the middleman (and overhead).
<cbaines>I think unsigned narinfos are a thing, but I'm not sure what the support in guix publish for that is like
<oriansj>nckx: but then it'll be xkcd worthy
<vagrantc>i'm not much for dinner rolls, all for aikido rolls ...
<nckx>They are at the very least lightly tested. :)
<nckx>oriansj: Goals.
<vagrantc>yeah, ideally it'd just be unsigned narinfos ... that solves a lot of this needless complexity
<vagrantc>and secrecy
<cbaines>maybe have a look at tweaking guix publish if it doesn't support not signing things
<oriansj>vagrantc: well the signing keys aren't for secrecy so much as enabling trust
<vagrantc>oriansj: right, but this should be an untrustable server
<vagrantc>and if they're unsigned, that would be perfect
<oriansj>I wonder what passing: --private-key=/dev/null would do
<vagrantc>presuming that doesn't botch the hash matching the signed ones from ci and bordeaux
<vagrantc>cbaines: and huge thanks for the reproducibility stuff ... it really streamlined a lot of work i had been meaning to get around to
<cbaines>vagrantc, good good, I'm glad it's being used, and being useful :)
<nckx>It is!
<vagrantc>took me a while to get around to it, but it would have taken at least an order of magnitude longer to do the stuff i did earlier this month
<vagrantc>or more...
<vagrantc>nckx: the r issues in guix kind of look like ... although i suspect many of those are misfiled in debian
<vagrantc>and i thought those issues were fixed upstream
<nckx>apis_and_ipas: Nice nick.
<vagrantc>nckx: most of them kind of look like ordering issues ... but i'm not sure
<nckx>vagrantc: That server is either very slow or there's something wrong with my connection, but thanks!
<nckx>So it's generally in embedded data, not in bytecode?
<vagrantc>nckx: for an example
<vagrantc>the .rdb files mostly ... or .rdx files
<vagrantc>some look vaguely like they could be PIDs
*vagrantc goes on wild hunches sometimes
<nckx>I don't think so, but your guess is absolutely as good as mine.
<nckx>The .rdb differences are what puzzle me though.
<vagrantc>likewise, which is why i marked them "unidentified issue in .rdb" :)
*yewscion is back (gone for 01:00.57)
***aeka` is now known as aeka
<unmatched-paren>how can i check whether a package is in my profile or used as an input by one of the packages in my profile?
<unmatched-paren>i want to see whether i've gotten rid of elogind entirely
<lilyp>guix graph?
<unmatched-paren>can that graph a profile, too? nice!
<lilyp>you could also grep for elogind in the manifest
<unmatched-paren>oh, good idea
<unmatched-paren>although that wouldn't show inputs, only propagated-inputs
<unmatched-paren>I want to remove elogind completely
<lilyp>I'm pretty sure guix gc also has some options available to check what's referenced
<unmatched-paren>ah, it's pulse and polkit that reference it
<unmatched-paren>polkit is brought in by colord, and pulse by sdl...
<unmatched-paren>sdl by libvisual, and libvisual by... nothing?
<unmatched-paren>I wonder where colord is coming from though...
<unmatched-paren>ah, gtk
<unmatched-paren>and it all leads up to inkscape. which is necessary to convert the SVG grub background to png.
<unmatched-paren>seems overkill.
<foobarxyz>Hi, is there a convention as to where to output or how to name platform-dependent shared libraries, e.g. lib/amd64 for amd64 perhaps?
<unmatched-paren>can i just run pipewire from my sway config with `exec pipewire`?
<unmatched-paren>and `exec pipewire-pulse` of course
<lilyp>why the hate on elogind tho?
<maximed>foobarxyz: We usually just put them all in 'lib', without architecture-specific subdirectories.
<maximed>vagrantc: I would keep the narinfos signed (even if the authorised key is not authorised), to allow later for things like multisig systems -- e.g., client only accepts a substitute if signed by two different authorised keys or such
<maximed>* even if the authorised key -> even if the key
<foobarxyz>maximed I see, thanks. I suppose this is because there will be only one guix instance running at a time as far as archs are concerned, so we can't have both x86 and amd64 packages at the same time
<maximed>foobarxyz: You can have both i686 (as we name it in Guix) and x86_64 in the same ‘Guix instance’.
<maximed>Depending on what you're doing, exactly.
<maximed>foobarxyz: FWIW, if you have built a package 'glibc' twice, once for x86_64 and once for aarch64,
<maximed>then glibc-for-aarch64 is put in /gnu/store/HASH1-glibc-VERSION/lib and glibc-for-x86_64 in /gnu/store/HASH2-glibc-VERSION/lib
<maximed>where HASH1 != HASH2
<maximed>so not necessarily a problem (although installing two variants in the same profile probably won't work).
<maximed>Also, Guix supports cross-compiling arbitrary packages to any (supported) architecture: "guix build hello --target=aarch64-linux-gnu", which implies that you can have multiple architecture variants of the same thing on your system.
<foobarxyz>maximed: thanks, I'll have a look at the --target option and see how it behaves exactly
***tox is now known as vier-littleme
<sammm>simple question I hope, how can I have a shepherd service (using requirement array) depend on something like the docker-service-type (which isn't a shepherd service AFAICT)
<sammm>in short, my shepherd services which were running docker run failed to start as docker wasn't yet running.
<sammm>I lied, it looks like is the symbol I want to require
<arjan>has anyone encountered this at an initial home reconfigure? "guix home: error: mkdir: Permission denied"
<sneek>Welcome back arjan, you have 1 message!
<sneek>arjan, podiki[m] says: patch was sent to that should fix the problem
<arjan>the home configuration is minimal and does build successfully, but cannot be applied apparently
<stampirl>Hi #guix. I have a problem with my Guix SD. Docker stopped working and is spitting out this error ` Unknown runtime specified`
<mbakke>emacs tramp is asking for username for a host defined in ~/.ssh/config, what gives? It manages to resolve the alias and ProxyJump from the same file, but not User
<lilyp>mbakke: Is this new or did you just now try tramp and notice it for the first time?
<mbakke>lilyp: I've used Tramp occasionally for years, but not since I reinstalled my machine due to fatal btrfs corruption a few weeks back, perhaps I missed something? Also migrated to Guix Home, so many moving parts :P
<lilyp>there's also Emacs 27→28, which already broke some tramp thing
***wielaard is now known as mjw
<rekado_>nckx: the rdb/rdx differences in the case of r-affycompatible are likely due to the order of classes.
<rekado_>it’s only a problem for some packages
<rekado_>out of the 1000+ R packages we only have a handful that are not reproducible
<rekado_>r-affycompatible generates code from XML, and the order of nodes apparently is not deterministic
<lilyp>is there a git-format-patch option to generate several single-patch series on the fly?
<lilyp>e.g. my HEAD~5 contains five totally unrelated patches
<nckx>Good morning, Guix.
<nikola_>Hello there
<muradm>hi guix
<nckx>Hullo both.
<jpoiret>lilyp: nope
<jpoiret>a simple for loop should be sufficient though
<lilyp>but that's like super inelegant :(
<jpoiret>also, `git am` doesn't actually use the proper mbox format :(
<jpoiret>that or `b4 am` claims it doesn't for no reason
<z20>Hello all. Having some trouble getting the mpd service running. Should I manually create a config file?
<lilyp>try "sudo herd start mpd" first; on some systems mpd is started too soon, leading to race conditions
<nckx>‘Proper’ is the problem here :-p mbox is at best a family of formats, all slightly different, but all (AFAIK) poorly-defined.
<jpoiret>nckx: right, but it doesn't even escape From lines once (which both mboxo and mboxrd do)
<jpoiret>I haven't seen an mbox format that does that
<jpoiret>hmmm, the MLs mangling From: lines is annoying. How do committers deal with that/at all?
<nckx>jpoiret: Oh indeed it doesn't (I don't use mbox, I think ever).
<jpoiret>what's your workflow?
<nckx>git send-email up, mu4e save mail → git am -s down.
<jpoiret>how do you manage cover letters with send-email up and debbugs interfering?
<fps>how can i build an image for an armv7l system?
<fps>i tried as a first stab: guix system --target=armel-linux-gnu image config_without_kernel.scm
<nckx>jpoiret: Interfering how? As sender or receiver? I want to say in both cases: ‘I don't.’
<nckx>But that sounds unhelpful.
<z20>lilyp: Running that, it says that the service is disabled. For context, running mpd on its own says that there is no config file. Same story whether it's user as user or mpd as user in the service declaration.
<fps>i get an error though:
<nckx>fps: That is not a valid target. See ‘guix build --list-targets’.
<jpoiret>z20: did you only install the package?
<lilyp>z20: then "sudo herd enable mpd" first?
<nckx>fps: Is arm-linux-gnueabihf what you meant?
<jpoiret>nckx: I mean, when you're opening a bug with a patch-series, the cover letter has to be sent to guix-patches, and the patches to
<fps>nckx: ah ok. that was lifted from the docs though. thanks for the pointer to the --list-targets option
<nckx>I know little of 32-bit ARM, which is to say nothing.
<jpoiret>fps: where in the docs?
<nckx>fps: --target=armel-linux-gnu was lifted from docs? Ours?? Where???
<lilyp>note that to start mpd as a user vs. starting mpd from the system are distinct things
<fps>jpoiret: guix system image --help
<fps>gives armel-linux-gnu as example
<nckx>jpoiret: I send the ‘cover letter’ normally. ‘Hello friends, blah blah beep boop.’ I don't see a reason for me to use git for that.
<nckx>Again, I am extremely basic 😊
<lilyp>the former (to my knowledge) requires hand-coding of both the shepherd service and the config file
<jpoiret>ah, right, ok! i like using all the git features
<jpoiret>--base=auto and having the basic git log is cool
<lilyp>nckx: interestingly, I use --cover-letter and then add in the blurb with emacs
<jpoiret>i'll be trying to add range-diffs myself
<nckx>I compose in mu4e, wait for a bug number, then git send-email --to=.
<nckx>Is there git automation that would help me? I'd love suggestions.
<jpoiret>lilyp: right, but how do you manage needing to send the cover letter only to guix-patches and the others to another address
<lilyp>though I don't actually use git send-email, because that's broken with gmail
<nckx>It's not that I'm against git features, but I thought the 2-step manual(?) process negated them here.
<nckx>jpoiret: ☝
<jpoiret>nckx: well, `git format-patch --cover-letter` adds a cover letter. But i agree that with the debbugs handshake it's annoying to use
<nckx>git send-email --wait-for-debbugs-reply --then send-more-email ; ah, of course.
<jpoiret>I also don't understand why there isn't a --annotate-cover-letter-only argument for git send-email
<lilyp>jpoiret: git send-email --to=guix-patches cover-letter.mbox; git send-email --to=debbugs actual-patch-*.mbox
<nckx>All workflows are beautiful and valid but that one sure ain't one for me.
<jpoiret>what about `guix please-let-me-contribute HEAD~5`?
<nckx>man git-senpai
<lilyp>guix git send-email
<fps>gnueabihf is 32 bit arm? i guess i could try that..
<nckx>fps: I thought so.
<fps>nckx: i'm not an arm native either.. i'm just trying to build an image for my raspberry pi 4
<nckx>fps: <guix system image --help> Huh. I think that might be a bug? I'm as confused as you.
<nckx>I recently found a forgotten mips reference so it's not impossible.
<fps>bugs _do_ happen ;)
<jpoiret>I think we should remove the example and refer to --list-targets
<nckx>This is the only occurrence of armel-linux-gnu in the entire Guix tree.
<nckx>Bug: confirmed.
*nckx gets the glass & coaster for humane bug removal.
<fps>jpoiret: i agree. and also maybe try and make the error message more helpful if the target is not found
<nckx>Eh, is etc/completion/fish/ hand-maintained? That's… unfortunate.
<jpoiret>let's see what `guix build --target=i586-pc-gnu linux-libre` says :)
<jpoiret>yes, zsh is also lagging
<jpoiret>no --list-targets
<jpoiret>i was about to talk about that :p
<jpoiret>oh no
<lilyp>aren't all our completions hand-maintained?
<nckx>jpoiret: But it contains all(?) the --help strings :-/ I first assumed it was generated from --help output, but it's git-tracked.
<jpoiret>`guix build --target=i586-pc-gnu linux-libre` does not complain
<jpoiret>although it shouldn't be supported
<lilyp>we should probably add a feature to guix scripts to auto-generate them
<jpoiret>this involves a makefile: i'm out
<lilyp>if you're on x86, why should that not be supported?
<jpoiret>it would be cool to be able to autocomplete things like --target=
<lilyp>actually --target should "always" work
<nckx>jpoiret: I guess you're referring to a different bug? (Re: i586-pc-gnu)
<jpoiret>i586-gnu-pc is hurd
<nckx>although it shouldn't be supported → why?
<jpoiret>nckx: yes, although it's related, i was wondering if we had a different error message when packages weren't supported on a target vs. specifying an invalid target
<nckx>Supported how?
<nckx>It (probably) won't build, that's true.
<nckx>Or it will build an x86 kernel in no way relevant to ‘pc-gnu’.
<lilyp>It might actually build, you don't need linux to compile linux
<jpoiret>well, i586-pc-gnu isn't in the supported-targets of the package, no?
<nckx>But I don't see why it should complain?
<jpoiret>supported-systems 8
<nckx>What are supported-targets?
<jpoiret>at least `guix show linux-libre` tells me that
<nckx>They don't seem related?
<nckx>Are you saying we should make targets depend on systems? I think that's… risky.
<jpoiret>perhaps i'm misunderstanding something fundamental about cross-compilation, but isn't `supported-systems` describing which systems the package can be compiled for?
<nckx>Nix systems don't politely map to GNU triplets though, is the problem.
<nckx>E.g. arm-linux-gnueabihf vs armhf-linux (that's the main one, really.)
<nckx>Unplanned shutdown ☹ o/
<z20>lilyp: I initially only had the service, but I reconfigured to add mpd just now. Herd enabled and started mpd with same results; mpc says connection refused, mpd says no configuration file.
<lilyp>MPD should (if started from the service) always have a configuration file.
<lilyp>We generate one after all.
<dhruvin>What downsides are there to relying on language package managers (like npm or go get or cargo or cabal) to download/manage guix package dependencies? One I can think of is that users won't be able to patch/modify dependencies as easily. Am I missing something important here?
<fps>dhruvin: i guess not all of them offer the reproducibility that guix/nix offers
<dhruvin>Yes, you're right about reproducibility.
<lilyp>It's also technically infeasible. You can't download the internet if you don't have network access.
<jpoiret>nckx: as an example, `guix build --target=arm-linux-gnueabihf cpuid` does not compile, but guix build tries to build it, even though arm-linux-gnueabihf isn't a supported-system
<jpoiret>and it really does not compile
<jpoiret>ie it's missing a header file that is i586/x64-specific
<lilyp>jpoiret: I think supported-systems is a hint for CI, not a hint for guix
<jpoiret>maybe we should emit a warning then?
*nckx ret (I think my CPU fan just… died? feck.)
<fps>nckx: that always sucks ;)
<jpoiret>right, that'd explain it then
<nckx>jpoiret: The package might cross-compile fine, hypothetically.
<nckx>How big of an H you want to use there is up for discussion of course.
<nckx>But in general, ‘do what I ask’ is a quality of its own.
<nckx>Not knocking your suggestion; just weary of clever software.
<jpoiret>yeah, not being able to force cc would be bad too
<nckx>In case I was unclear: there's no such system as arm-linux-gnueabihf or arm-linux, so it would never be listed.
<jpoiret>i hate the system/triplet distinction
<jpoiret>doesn't help understanding cross-compilation :p
<lilyp>Well, we have cross-compilation and native cross-compilation :)
<nckx>I was hoping you'd suggest getting rid of them so I could agree without having to suggest it myself, because I think I've already done that and maybe it was stupid.
<nckx>I don't fundamentally understand why Nix systems deserve to exist at all except for backwards compatibility.
<nckx>Enlightenment welcome.
<nckx>lilyp: True.
<jpoiret>maybe because the daemon internally uses them
<lilyp>I think we'd have to implement --system=gnu-triplet for Guix 2
<nckx>(supported-target-matrix '((…) (…))) it is.
<nckx>‘It'll cross-compile to armhf on x86_64 but not to riscv64 on power.’
<nckx>jpoiret: Sure, but we own everything there, that could be fixed.
<lilyp>but at what cost? We'd have to write C++ code
<jpoiret>for sure. we just need someone to fiddle with the cpp code, is all
<nckx>By continuing to accept the deprecated (Nix) synonyms, but not emitting them, or whatever.
<nckx>jpoiret: …oh.
<nckx>lilyp: …ah.
<jpoiret>cue the usual discussion about rewriting the daemon in guile 🙃
<nckx>Y'all raise a good counterpoint to ‘why don't we just defeat the dragon’ being ‘because we'll have to fight a dragon’.
*nckx 'd forgot about the dragon.
<nckx>jpoiret: It's close to being its own Godwin's law at this point.
<lilyp>Well, I prefer the modern counterpoint to ‘why don't we just defeat the dragon’, but that doesn't apply here.
<dhruvin>lilyp: I'm suggesting to use npm and others to download dependencies from internet in source step, just like guix git-download and others do. Language package managers can download those packages and put them in some known place, where they're later used in build phases.
<nckx>My metaphor was utterly horrible. I could at least have gone with ‘appease the dragon’ and ‘we'd have to learn dragontongue’, but I did worse.
<lilyp>dhruvin: have fun hashing that
<jpoiret>dhruvin: what would that gain us though?
<lilyp>I think the dragon takes virgin sacrifices in either interpretation, so they'd be easy to appease...
<nckx>How would you separate packages that way?
<jpoiret>if you mean vendoring (bundling dependencies without the package manager managing them), that's a big nono
<dhruvin>We have numerous software that we're unable to add to guix because they have thousands of dependencies
<lilyp>yeah, so?
<nckx>I get the ease of ‘npm get foo’ (or whatever it's called), but then you end up with a directory with 1000 packages, which is… quite useless, you want exactly 1.
<lilyp>History of supply chain attacks on Guix:
<dhruvin>As long as we can compute and save the hash of whatever npm is supposed to download, we should be fine, right?
<jpoiret>Guix 2 changelog: `Guix now runs curl | bash for every package`
<dhruvin>Am I getting something wrong?
<nckx>dhruvin: But it downloads tonnes of packages.
<lilyp>dhruvin: I think your time is better invested trying to write an importer
<jpoiret>dhruvin: but then you lose the advantages of a package manager: a vetted and coherent repository of packages, with sane version management for dependencies
<bjc>guix isn't invulnerable to supply chain attacks, but it is less vulnerable
<jpoiret>the supply chain argument isn't the best argument because we use importers in any case
<z20>lilyp: Since the file message was from running `mpd' as a command, I suppose the clue is the connection refused message. I worked on that for a bit without success, but a more basic problem is why it's disabled and stopped on reboot
<lilyp>z20: as I pointed out, your mpd races against the network service, thus it gets disabled
<dhruvin>jpoiret: I get about not vetted packages.
<nckx>dhruvin: Say we patch a bug in package a, for which no fix is published upstream, how do we get it to affect all users of a without rewriting each user?
<bjc>sheperd will disable services if they restart too quickly
<jpoiret>nckx: that even gets worse when you use a language pm that lets packages specify which exact version (or even git commit) they'd like to use for their dependencies
<nckx>(In this example, you won't know which of the 1000 dependents is package a until it's too late.)
<dhruvin>nckx: re: patching a bug in one package: yes, that's something I hadn't thought of.
<nckx>*dependencies, me no typ gud.
<jpoiret>there are numerous go modules that depend on old arbitrary git commits, so people using them through the go ecosystem will never get those dependencies' updates (security updates included)
<nckx>jpoiret: I'd disregarded that for sanity but yes, good point. Multidimensional matrices are the best DAGs.
<jpoiret>this is terrible
<lilyp>who said a DAG must be acyclic?
<jpoiret>the issue is that developers want their program to Just Work™, but they're not thinking about the whole lifecycle of their product, and their dependencies
<jpoiret>many people in those "new languages" don't even understand that they should update their dependencies and test with them
<jpoiret>the problem is that the tooling encourages such "frozen" behavior
<jpoiret>if they were using packages from a package manager, updates to the dependencies would force them to test with the newer dependencies
<lilyp>tbf guix time-machine encourages frozen behaviour too
<jpoiret>lilyp: i wouldn't say it encourages, since it's harder to use than just guix itself, but yes
<nckx>I don't enjoy these discussions because they always risk coming off (or simply do, even) as old men yelling at the cloud, but at the same time I feel like the cloud isn't sending its best. So we end up performing this ☝ exact dance each time.
<lilyp>I read that as old men yelling at the butt.
<nckx>That's on you.
<nckx>I'd love for some compelling idea to come along that Guix just can't ignore.
<dhruvin>To give context: I'm looking for ways to ease the submission of lets say electron based apps to guix (ignoring their downsides). Since there are so many dependencies, and using an importer will yield that many guix packages.
<nckx>‘Download stuff to ~’ isn't that so far.
<lilyp>write the importer anyway
<jpoiret>dhruvin: well, apart from that, I'd suggest complaining to upstream that they have far too many dependencies :)
<lilyp>if we get typescript etc. running, things will improve for everyone
<nckx><ignoring their downsides> is the impedance mismatch, I think.
<lilyp>maybe 2030 we get vscode
<jpoiret>the issue is that you can't really avoid the dependency hell of those package managers, they have smaller libraries that do only one thing
<dhruvin>nckx: <ignoring downsides>: I didn't want to derail this conversation.
<lilyp>(after people have already moved on to the next horrible ide mixing rust, go and node just to spite us)
<nckx>I didn't mean to imply you were, dhruvin, but a good part of the conversation is people pointing out downsides.
<dhruvin>Alright. IIUC reproducibility, vetting, and seamless bug fixes done by guix favors `guix import`.
<nckx>You can take your npm directory with 1000 packages that form a ‘functioning’ whole, check it into my-electron-app-monorepo.git, and write a Guix ‘package’ that just extracts that, with a nice hash & everything, today.
<nckx>It's just not Guix packaging at that point.
<jpoiret>oh and don't take it personally either! I think we all at some point were eager to add something like you are, only to discover that it's just a bottomless pit of horror. Personally, I had that with the protonmail bridge
<nckx>If you say, ‘that's fine, I don't care about that’, that's fine. :)
<jpoiret>that made me reconsider which software I want to use, and now if a program is written in Rust/Go/npm it's highly unlikely i'll ever use it
<jpoiret>it's getting difficult though, with Rust infiltrating many projects
<lilyp>I still think GNOME folk embracing Rust with open arms was a mistake.
<jpoiret>see for an example with firefox
<bjc>i wish it were feasible to use rust without cargo
<lilyp>Let alone kernel devs
<jpoiret>bjc: It is!
<lilyp>Thankfully we're working on antioxidant.
<jpoiret>see antioxidant for a practical example
<nckx>Seeing green number go up on the antiox job has been the highlight of my days this week.
<lilyp>“A developer working on a function may suddenly discover the need to, say, left-pad a string with blanks.” Already liking that article.
<bjc>i know of antioxidant, but i'm not sure how practical it is currently
<lilyp>it's WIP, you won't see it in prod elsewhere
<janneke>it seems our gdb is broken?
<janneke>warning: Unable to find libthread_db matching inferior's thread library, thread debugging will not be available.
<jpoiret>the issue though is not cargo itself, but rather the practices that it encouraged
<jpoiret>antioxidant or not, rust packages upstream are still going to be using a never-ending dependency graph
<bjc>yes, they have a cultural problem, for sure
<bjc>cargo itself is a mess, though
<dhruvin>Is it too crude to say that software developers and system package managers are in tension over who maintains the dependencies installed on a user's machine?
<jpoiret>language-specific pms were a mistake tbh
<jpoiret>not at all, it's exactly this
<dhruvin>The situation got worse as new language level package managers came that allowed developers to create their own mini-world of dependencies disregarding how downstream will take care of that mess.
<jpoiret>see Python also, I have friends that use it for scientific computing, and they've all been told to use Anaconda, which is an additional distribution that's not either PyPI or a system distribution. It also includes proprietary software, that is on the front page. Updates don't go well, you don't get the latest packages, and it all interacts
<jpoiret>terribly with your own system-wide python
<bjc>cpan has been around forever and somehow didn't cause this level of problem
<vagrantc>cpan evolved in a very different development culture
<bjc>so at least some of this is simply cultural. cpan isn't special
<jpoiret>right, i personally think that it's the tools and the type of language that influenced that kind of culture
<dhruvin>jpoiret: I believe that many ml researchers only want a declarative, reproducible environment that they can share with peers (or downstream). Conda does that well I heard. Isn't that right?
<jpoiret>i bet they use docker for that :)
<dhruvin>(Since conda does manage system dependencies as well)
<jpoiret>i'm not familiar with conda
<jpoiret>dhruvin: oh it doesn't manage system dependencies
<dhruvin>you're right about them using docker :)
<lilyp>conda is actually just a virtualenv with its own package manager
<jpoiret>it packages non-python code too, for sure, but it doesn't interact with the system pm
<lilyp>well, so does pip :P
<jpoiret>docker doesn't contribute to reproducibility at all
<jpoiret>lilyp: right, but I think that it originated at a time when pip didn't do that well
<dhruvin>My bad. I meant it packages libraries written outside of python (c, c++, etc), that are then used by various python packages.
<lilyp>define "do well"
<bjc>docker *impedes* reproducibility
<jpoiret>well, wikipedia tells me that conda was born because pip didn't support proper dependency version constraints
<dhruvin>re: define well: This is very subjective. But conda manages everything they want a package manager to manage.
<dhruvin>By they I mean people who I talked to. Which may be a very biased subset of all ml researchers.
<jpoiret>from their point of view, it's perfect, just like how cargo is perfect for Rust developers
<bjc>a lot of rust developers find cargo wanting, tbh
<dhruvin>lilyp: This is definitely in the context of their work, as in research of software development.
<nckx>Perfect is a stretch, but ‘this thing that is *very* convenient, just ‘works’, and is enabled and sanctioned by the language & tools is actually bad and you should stop doing it’ is a very hard sell.
<singpolyma>bjc: if you have a docker image it's pretty reproducible... I understand major kernel changes can invalidate that, but the same image will do the same thing for a long time
<nckx>Ooh, IRC likes.
<jpoiret>singpolyma: an image is literally a hard drive dump
<singpolyma>jpoiret: yes
<jpoiret>that's not reproducible
<singpolyma>It takes "works on my machine" and makes it work on your machine too by making your machine *be* my machine
<nckx>How does that definition of reproducibility not apply to any file ever? This JPEG is reproducible, look, I cp it and the md5sums match.
<lilyp>Okay, I have to back-pedal a little on Conda, it seems to want to be a multi-language package manager:
<jpoiret>reproducible is like `here is my drawing, let me teach you how to understand and make the same drawing` vs. `let me just trace over that drawing`
<lilyp>"Package, dependency and environment management for any language—Python, R, Ruby, Lua, Scala, Java, JavaScript, C/ C++, Fortran, and more."
<singpolyma>nckx: yes, of course, a jpg has no dependencies and so is also reproducible. Modulo the viewer I guess
<bjc>docker is the photocopier method of reproducible, and not what we mean when we discuss it
<jpoiret>reproducible is now regretting their choice of IRC nickname
<bjc>very ;))
<singpolyma>Well, using multiple definitions of the same words doesn't help obviously
<nckx>Let's s/docker image/huge static binary/ (the difference is purely semantic, but static binaries are easier to understand), and your binary will still likely do different things on PCs 30 years apart.
<bjc>language is hard
<singpolyma>When a person who likes docker says "reproducible" they mean "every machine produces the same result"
<singpolyma>There is also the "reproducible builds" concept which is completely unrelated and uninteresting to docker people
<nckx>My experience with Docker does not support that claim.
<nckx>If that's what they're selling, their product does not deliver.
<lilyp>Actually, when talking about reproducibility w.r.t. Docker, we'd have to start from Dockerfiles.
<singpolyma>> 30 years apart
<singpolyma>Yeah, they probably don't care about that at all either. Minds at the most
*nckx 's fan is now making grindcore noises.
<singpolyma>... months
<jpoiret>how do I know the Docker image you're giving me has unmodified scientifc libs, and not a patched one that will always give the same results you claim to obtain?
<nckx>singpolyma: :)
<singpolyma>jpoiret: a docker user doesn't care about that. They just run the image
<jpoiret>Docker is nothing more than Flatpak
<singpolyma>They are roughly equivalent yes.
<singpolyma>Or any VM system etc
<singpolyma>Or a big static binary (ish)
<muradm>singpolyma: not VM i suppose
<jpoiret>well VMs only add the Kernel in the end
<jpoiret>so it's not that different
<lilyp>tbf flatpak was more or less advertised as docker for desktops
<singpolyma>muradm: why not? I can make a VM from a docker image and vice versa. It's all the same
<nckx>lilyp: Wait — Flatpak is related to Docker Desktop!?
<nckx>I had no idea.
<muradm>from my end, being reproducible is making the same output over and over again, so if I do a = make_image(); b = make_image(); then compare_bitwise(a, b) should be equal
<singpolyma>muradm: that's reproducible builds
<muradm>VM image never will satisfy that criteria
<lilyp>I didn't know Docker Desktop existed until now
<nckx>Hm, no, doesn't seem to be.
<nckx>lilyp: Never mind.
<singpolyma>Totally different
<muradm>because every time you make a VM, even with the same input, you will get different images
<trevdev>Hey guix, is there some reliable way to add a folder to the %load-path in the context of `guix-home`? I'm trying to use a module to configure an mcron job, the context is getting lost in the gexp and part of that seems to be the `(current-filename)`
<nckx>lilyp: I got… excited is not the right word, but I'd already cut a new piece of red string all the same.
<muradm>and there will not be a way to compare one with another
<muradm>that makes VM not reproducible
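<editor's note>muradm's bit-for-bit criterion can be sketched in Guile; `make-image` here is a hypothetical thunk standing in for any build step, not a real Guix procedure:

```scheme
;; Sketch of the bit-for-bit reproducibility criterion.  make-image is
;; a hypothetical thunk returning the built image as a bytevector.
(use-modules (rnrs bytevectors))

(define (reproducible? make-image)
  ;; Build twice and compare the two outputs byte for byte.
  (let ((a (make-image))
        (b (make-image)))
    (bytevector=? a b)))
```

A VM image fails this test because timestamps, filesystem UUIDs, and similar noise differ between runs even with identical inputs.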
<singpolyma>muradm: you're just talking about a different thing though
<lilyp>Docker Desktop is confusingly just a desktop app for the docker web page (or is it?)
<muradm>although both images could do job in same way
<singpolyma>No one in the reproducible dev environment space cares about reproducible builds. Quite the opposite. The whole point is to stop doing builds at all
<dhruvin>AFAIK docker desktop is a GUI client for the docker daemon that's running in a linux vm (if you're using windows or mac).
<jpoiret>reproducibility vs. indiscernibility of identicals
<nckx>lilyp: No, it's a (GUI) tool to create & ‘share’ docker containers.
<lilyp>trevdev: -L?
<jpoiret>Yeah basically docker desktop on windows just sets up WSL and uses docker on the linux kernel
<lilyp>nckx: oh, so it's half of what I said and the other half is the gui frontend described by dhruvin
<nckx>lilyp: My (subconscious) thought process went something like Docker Desktop → GNOME Builder → wait GNOME builder pushes flatpaks now rite → related???
<muradm>singpolyma: how come? what is bisecting for, then, for instance?
<jpoiret>It's not rocket science
<trevdev>lilyp: `guix home container /path/config.scm -L /path`?
<nckx>lilyp: Oh. Weird.
<bjc>i would argue that what docker provides is “consistency” not “reproducibility”
<singpolyma>muradm: bisecting implies source code and history. Off topic / out of scope
<nckx>jpoiret: How dare you, it has lots of rockets, 🚀 yaay ship ship ship
<jpoiret>bjc: isolation rather
<singpolyma>bjc: sure. That's a fine set of words. But not the ones in use so better to understand what people already say than try to teach every person new words
<bjc>that, too
<lilyp>nckx: I'm pretty sure GNOME software ships in flatpaks, thus builder too
*nckx nods.
<lilyp>I'm also quite sure GNOME Builder allows you to easily build flatpaks, but I don't think it does publishing
<dhruvin>I think people interested in guix are often in it to see, and modify, the "produce" part of reproducible. Whereas docker folks seem to be just interested in identical copies of zeros and ones able to be executed in a preconfigured environment.
<jpoiret>Docker is often not even consistent because you can configure containers in a bunch of different ways that can make them misbehave
<jpoiret>So isolation it is
<singpolyma>You can, but if you use the company-mandated run script you're probably not gonna set such options
<muradm>singpolyma: you said devs don't care about reproducible builds; if so, bisecting wouldn't have been invented
<singpolyma>muradm: I said "reproducible dev environment" people don't care about that
<singpolyma>And indeed I doubt they do much bisecting on the environment
*muradm ¯\_(ツ)_/¯
<dhruvin>Is there a way we can approach rust/go/node developers about the issues we discussed above, or is it just lost cause?
<jpoiret>I think it's a lost cause
<jpoiret>Mainly because it's already too late
<jpoiret>There are many people that over the years have voiced their concerns
<singpolyma>If your tool needs a whole ecosystem to change its practices to work out, your tool may be missing something ;)
<jpoiret>But many of these tools are supported by big private industry players, who don't really care about those kinds of arguments
<bjc>the economics of programming have changed. without serious change, npm is the future
<singpolyma>I love guix, and I have no love for npm, but we can't ask upstream to move for us, that's just silly
<lilyp>I think we should do our best to influence other players
<muradm>topic pops up again and again, just yesterday had discussion wrt maven
<lilyp>if we can move gnome or meson, that'd already be a huge player
<jpoiret>The issue is that downstream never has much influence on upstream's practices
<lilyp>and as far as I'm aware, meson would have a good reason to eat cargo rather than simply coexist
<jpoiret>You can't just email
<singpolyma>Right. We get to consume upstream however we like but we can't change them
<jpoiret>Individually we might, but we can't convince every single developer to adapt to downstream
<lilyp>We don't need to convince every single one, though
<lilyp>once we have a large enough group, the movement becomes self-sustaining or even self-accelerating
<jackhill>right, we shouldn't feel entitled to change, but we can work together. Also if the tooling authors help, that would go a long way.
<singpolyma>Guix is already miles ahead of most downstreams with the importers
<singpolyma>That's the future for sure
<jackhill>I'm curious to understand more how the relationship between Haskell devs and Nix, Stack, and the new Cabal tooling went. Also how well all the messaging around more manageable dependency graphs has gone for Go (and maybe Clojure?).
<dhruvin>One of the complaints I see is that system level package managers are too slow to update their dependencies. This may be a reasonable excuse to not rely on let's say guix or debian or others. Velocity is what these new language level package managers are about. And I see that.
<jackhill>My experience working outside of Guix for work is that it's not necessarily that the problems aren't apparent, but our developers are more pragmatic there about focusing on their narrow scope of work and will work around problems even as they are experiencing them.
<dhruvin>I don't feel go devs are careful enough about minimal dependencies. They have decentralization, yes, but I still see too many packages with years old dependencies.
<lilyp>but that's a problem with *distributions*, not with package managers
<dhruvin>ah! rightly said. ^
<lilyp>Bloomberg packages their C++ ecosystem with dpkg/apt
<jackhill>I do like what LTS Haskell is doing of moving the curation of what sets of packages work well together outside the context of any specific tooling.
<lilyp>any big company wishing to abuse guix for their own dependency nightmare can roll a channel
<dhruvin>I like the lts haskell idea too.
<dhruvin>lilyp: But these new language package managers work on all distributions, regardless of what package manager/distro combination you use.
<dhruvin>That's what gives a common environment to all devs with different machines/preferences.
<dhruvin>I agree that they can do the same with guix (not sure to what limit though) too.
<lilyp>To the exact same limit.
<jackhill>I see non-free platforms playing a lot into it as well. As a language ecosystem designer I, naturally, want to reach folks on those systems as well, which means I must provide my own tooling
<dhruvin>lilyp: We do have vetting in place though. Everyone will have to create their own guix channel, or submit packages to an unvetted channel; to be able to publish packages as easily as with npm et al. Right?
<lilyp>jackhill: wouldn't pkg-config have solved that problem for most everyone though?
<jackhill>And Rust seems to have done a good job building a vibrant community around their fairly new language. I don't know how much cargo and practices around it made that possible, but I'd bet that Rust folks think it to be part of their success.
<lilyp>dhruvin: as a package maintainer, you just have to provide a guix.scm in which you can do whatever.
<lilyp>on a company level, you have to merge those definitions into something meaningful
<jackhill>lilyp: seems like there might need to be some tooling on top of pkg-config. I dunno, I don't really understand the problems my colleagues have with their chosen systems or why they persist in that choice.
<lilyp>well, that's what we call a build system normally
<lilyp>autotools, cmake and meson all three offer ways of incorporating pkg-config easily
<lilyp>(well, as "easily" as writing cmakelists that is)
<lilyp>why can't cargo.toml or pip do that?
<jackhill>oh, do they not‽
<lilyp>speaking of it, why does every language and their mother need their own package file format?
<dhruvin>lilyp: I am unclear about the consumer part here. I get that as a software developer I have my own guix.scm with my package definition somewhere in repo, but how will my users get the exact dependency they need from my repo (or via some other means)?
<jackhill>even Fortran's getting into the party
<lilyp>dhruvin: again, you can code those directly into your guix.scm, or alternatively provide a manifest.scm which is simply (packages->manifest [a bunch of copypasta'd package definitions])
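<editor's note>The kind of manifest.scm lilyp describes can be very small; a minimal sketch, where the two package variables and their modules are examples from Guix's package tree rather than anything project-specific:

```scheme
;; manifest.scm — minimal sketch.  guile-3.0 and git are example
;; package variables exported by the (gnu packages ...) modules;
;; a real project would list (or inline) its own dependencies here.
(use-modules (gnu packages guile)
             (gnu packages version-control))

(packages->manifest (list guile-3.0 git))
```

Consumed with `guix shell -m manifest.scm`; the exact versions resolved depend on the Guix commit evaluating the file.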
<lilyp>I'm so thankful C++ has three package managers and they all hate each other.
<dhruvin>Updating dependencies then becomes non-trivial right?
<lilyp>dhruvin: wdym?
<dhruvin>For a consumer. Unless I'm misunderstanding you.
<lilyp>again, as a consumer, you should only take the package itself, not the dependencies
<dhruvin>I depend on package a v1.0.0, later a updates to v1.0.1, I want to update my dependency to v1.0.1 as well. How would I do that?
<lilyp>you already manage the dependencies yourself
<lilyp>let's say you have packages A, B, and C
<lilyp>where A depends on B and C and "bundles" them by whichever means
<lilyp>all owned by BigCompany.
<lilyp>now the maintainers of B and C do their work independently and either package gets pushed
<lilyp>BigCompany can then update their channel to bump them
<lilyp>and the maintainers of A can either go with the update or continue to build against old versions
<lilyp>if package A fails to build against newer B or C, BigCompany can keep older C for compatibility
<lilyp>but they can also kindly mail the A maintainers to please ship an updated version that links against C 1.2.3.tyvm
<unmatched-paren>lilyp: re elogind: the seatd readme puts it best:
<unmatched-paren>Also i'm fairly sure elogind is based on a very old version (2yrs last i heard?) of systemd
<lilyp>guix is based on a 10 year old version of Nix
<unmatched-paren>fair point
<lilyp>the last release of elogind is a year old
<dhruvin>Knowing what has been updated and reconciling it is done seamlessly by `npm update`. The flow you mentioned requires all downstream projects to keep track of said upstream project's channel(s) of communication where updates are conveyed. npm and others seem to centralize it, making things convenient with `npm update`.
<dhruvin>Can something like this be done with guix?
<lilyp>guix refresh?
<dhruvin>Ideally projects should have minimal dependencies, and keeping track of dependencies is part of their work. But newer software having countless dependencies makes it much harder to do, if not done automatically using tools like `npm update`
<dhruvin>Totally forgot about `guix refresh`
<dhruvin>You're right. With proper updaters things can indeed be automated with guix.
<lilyp>but even if it didn't exist, it's not like `git pull'-based CI doesn't work (concerning other systems based on e.g. apt or ebuilds)
<dhruvin>One more thing that these language level package manager handle is resolving compatible set of dependencies using semver or such. Does/can guix do this as well?
<mitchell>dhruvin: guix has a much more general approach to dependency management
<unmatched-paren>dhruvin: I guess technically you could use specifications like foo@2 for 2.*
<dhruvin>unmatched-paren: Then the resolution to which guix package of my dependency to be used comes to me. Won't it?
<unmatched-paren>dhruvin: What do you mean?
<singpolyma>dhruvin: yes. That's true
<singpolyma>Guix doesn't have a "solver" it uses humans for that
<mitchell>It would be cool if it did
<dhruvin>For example A -> B~v1.0, B1.0 -> C~v1.5, A -> C~1.2; We need A, B~v1.0 and C~1.5 (due to B)
<unmatched-paren>"Using advanced 'natural intelligence' technologies!""
<lilyp>That's not how Guix works.
<mitchell>dhruvin: When a package is defined in guix these relationships are formed by the other package definitions in that commit. If your package builds with a given commit, it will always build (unless you do some weird transformations on it)
<dhruvin>Yes, I was describing how npm and others do this
<lilyp>In Guix, you specify "exactly" the package to build against.
<unmatched-paren>To be honest, if you need a complex solver for figuring out your dependencies, you probably have too many of them.
<lilyp>Also, Guix normally doesn't propagate dependencies.
<lilyp>If there's a conflict, then boom, there goes your web page.
<vagrantc>admittedly, one of the theoretical appeals for me of guix's model was that it might actually be able to support multiple concurrent incompatible package versions as sanely as possible ... that seems to be the trend for many languages these days ... but it's still a *lot* of work
<dhruvin>lilyp: Right. I get that. I was wondering can we come up with exact dependencies of all dependencies that are compatible to each other or not.
<mitchell>right, usually the sources get patched at build time to refer directly to store paths and not rely on the caller PATH variable
<lilyp>You invoke guix build?
<mitchell>or guix graph
<lilyp>Continuous integration and integration tests exist for a reason.
<vagrantc>rekado_ or cbaines : where did you get the boot firmware for the honeycomb lx2 boards you set up?
<mitchell>dhruvin: I don't know if there is a general way of determining dependency "compatibility" outside of building/testing with each variant
<lilyp>vagrantc: ironically, it does that better for C/C++ than modern languages
<lilyp>mainly because propagated-inputs tend to get overused with many of the latter
<mitchell>lilyp: It is a subtle concept. It's hard to imagine finding programs in the store that are not in the PATH
<dhruvin>IIUC, the problem of resolving exact dependencies from a compatible dependency specification (a problem which the language pms have created) was solved by a solver, which language pms offer.
<lilyp>i think you're mixing package management and build systems
<mitchell>dhruvin: That is not a reproducible way to go. If your package randomly uses different dependencies, what can you say about its correctness?
<lilyp>which fair, programming language package managers tend to do often
<mitchell>Even using the same semver is not enough if it was compiled differently.
<lilyp>for the record, specifying libfoo >= 1.0 is perfectly valid
<lilyp>libfoo == 1.0.3.random_commit is silly
<dhruvin>mitchell: Specifying what minimum version to support in a package file with a lock file that has exact versions of all of my dependencies is what these language pms do. I'm trying to draw a parallel here.
<lilyp>you might be drawing a circle
<dhruvin>circle it is
<mitchell>"However, such version specifications involve a high degree of wishful thinking, since we can never in general rely on the fact that any version in an open range works..." --- The Purely functional deployment model
<dhruvin>But am I able to convey my point?
<mitchell>dhruvin: I understand this, but the theory guix is built upon does not think about components in this way
<lilyp>relevant mailing list archive:
<mitchell>Chapter 1 of Dolstra's thesis goes into great detail on why thats the wrong approach
<dhruvin>Alright. So functional package management focuses on specifying exact versions for reproducibility and other benefits. I get that. But is there a way for me to generate these exact versions and keep them updated as I develop my software over let's say some months to years.
<apteryx>the burden of maintaining your own fixed versions is on you
<lilyp>guix refresh?
<mitchell>Yes, you do this by updating the package definition using git. If you need to go back you can guix pull --commit
<mitchell>dhruvin: What we mean by "exact" is different from the usual usage
<acrow>IIUC, the general case is that the guix commit you work from fixes the versions of your various package inputs. So, if we could go back to a guix commit from 20yrs ago for a package sheepherder that depends on perl it would be built with the most current perl from that time (say perl4). If you pull a current guix commit with the same sheepherder package it would build with the current perl (6?). Unless, the package pins the versions
<acrow>of its inputs, no?
<lilyp>acrow: sure, but you yourself specify which guix commit to use (inside the CI) and how much to import from guix vs. ship on your own
<dhruvin>mitchell: By exact I mean what's used to compile my software. By range I mean, I'm comfortable with a dependency set (and updates) as long as they all obey given range.
<mitchell>dhruvin: That kind of compromise is not required in guix though. You don't have to accept any kind of range, even if some other package uses a different version. Specify one that works and stick with it until you need to update it
<apteryx>raghavgururajan: hey, I just gave a go at finishing hw-probe: here's my 2008 machine (!):
<dhruvin>In my example about A B and C, I'm fine with B~v1.0 and C~v1.2, I don't mind what exact version of B and C gets selected at build time, as long as they are within this bounds
<unmatched-paren>dhruvin: that's not a good way to go about things though, since it's not ever going to be reproducible
<dhruvin>The whole system is reproducible with a lock file
<dhruvin>A lock file is quite similar to what guix has, exact dependencies
<lilyp>Guix manages its exact dependencies more elegantly than a lock file though.
<mitchell>chapter 1, 2, 3, and 9 are worth understanding
<lilyp>All you need to specify to make it solid is the channels.scm
<lilyp>All you need to omit to make it liquid is the channels.scm.
<dhruvin>The reason why I want a wiggle room is because my upstream can update their package with security updates that I can get whenever I do npm update.
<lilyp>Have fun writing a lock file that's that short.
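<editor's note>The channels.scm lilyp mentions really is that short; a sketch, where the commit hash is a placeholder (in practice it comes from `guix describe`):

```scheme
;; channels.scm — pins the whole package universe to one Guix commit.
;; The commit string below is a placeholder, not a real revision.
(list (channel
        (name 'guix)
        (url "https://git.savannah.gnu.org/git/guix.git")
        (commit "0123456789abcdef0123456789abcdef01234567")))
```

Supplying this file (e.g. `guix pull -C channels.scm`) makes the environment "solid"; omitting it lets everything float to the latest commit.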
<mitchell>dhruvin: good luck trying to tame npm... down that path lies madness. I understand now why you are thinking about this the way you are
<dhruvin>I was suggested guix refresh, I haven't tried it for this exact problem. But I know about it. Maybe it solves this.
<unmatched-paren>dhruvin: it's fairly simple to update everything manually, so long as your dependency tree is sane. which it often isn't.
<unmatched-paren>At least with npm.
<unmatched-paren>s/often/almost always
<mitchell>They need to throw the whole thing out and start from scratch
<dhruvin>Lock file isn't consumed by humans. you're right about it getting extremely large
<unmatched-paren>mitchell: As in, throw the whole language out. V8, mozjs, Node, Deno, npm, the whole lot.
<jackhill>I understand part of the problem with npm isn't (only) the size of the graph, but that it has cycles, and many packages can't be built from source. Tooling can solve some of those problems but not others
<jackhill>but there is much good free software implemented in it, which would be sad to lose. Language proliferation is probably a problem for free software (incompatible things, "wasted" effort), but the alternative is not good either.
<mitchell>if that's what it takes lol. It seems like this dependency hell has wormed its way pretty deep into the culture of the language. Throwing out npm would mean throwing out a lot unless someone can unravel the yarn ball they call a dependency graph
<acrow>mitchell: +1
<mitchell>I say that as someone who doesn't do any web development lol
<unmatched-paren>mitchell: Ah, yes, throw out Yarn too. In fact, why not HTML and CSS too?
<mitchell>Burn it all down!
<mitchell>But i wouldn't start a new project based on npm given all i've learned about software reproducibility and management
<unmatched-paren>No, burning won't help. We'll need to stick them on Mars and bomb them with nukes.
<jackhill>for new software, something like Elm that doesn't have JS dependencies could be a reasonable choice.
<jackhill>(which just needs to get a bootstrapped GHC)
<lilyp>Does Elm really make things easier though?
<unmatched-paren>jackhill: which is easier said than done, as I'm sure you know :P
<unmatched-paren>lilyp: no, they have all the same things
<unmatched-paren>language-specific package manager, etc
<jackhill>unmatched-paren: indeed
<jackhill>lilyp: well, we have an elm-build-system in Guix and many packages already
<lilyp>we have an npm-build-system in Guix...
<unmatched-paren>And of course there's the issue of Haskell, which means it's in many ways worse than JS
<mitchell>the build-system isn't the problem lol, it's the packaging
<mitchell>unmatched-paren: whats wrong with haskell?
<unmatched-paren>mitchell: GHC
<mitchell>idk what that is
<unmatched-paren>it's _incredibly_ hard to bootstrap
<mitchell>oh why?
<jackhill>unmatched-paren: (hopefully this isn't too off topic) I'm curious to understand more how Haskell is worse than JS. I know of breaking compatibility between version often.
<jackhill>oh, yes, of course
<unmatched-paren>jackhill: I meant GHC's bootstrap, which you noted :)
<unmatched-paren>mitchell: hahaha and
<unmatched-paren>Oh, wait, the latter is the wrong one
<unmatched-paren>I can't find part 2 in searx or any links to it
<mitchell>Silenced once again by Big Haskell
<unmatched-paren>Damn you, Mr. Curry!
<unmatched-paren>mitchell: aha!
<mitchell>Why can't we make a haskell implementation in lisp that can build GHC? Maybe that's a dumb question
<mitchell>perhaps its just a lot of work?
<unmatched-paren>Haskell big. GHC-dialect haskell bigger.
<unmatched-paren>Haskell also seems to be a lot of work to parse
<unmatched-paren>because it's got ridiculous amounts of significant whitespace
<berkowski>(Hopefully) quick question: Why do packages installed with `guix home reconfigure` not then show up in the output of `guix package -I`
<unmatched-paren>and not even reasonably well-behaved significant whitespace like Python
<unmatched-paren>for example, this is how you define multiple variables in a single let:
<unmatched-paren>let foo = bar
<unmatched-paren> bar = baz
<unmatched-paren> baz = quux
<unmatched-paren>note the whitespace; this is required
<unmatched-paren>where foo = bar
<unmatched-paren> bar = baz
<mitchell>is it like make and tabs?
<unmatched-paren>mitchell: no, way worse
<mitchell>or is it spaces
<unmatched-paren>it's spaces
<mitchell>only an academic could create such a thing
<unmatched-paren>all the foo = bars need to be lined up
<unmatched-paren>of course, there's also the wonderful:
<unmatched-paren>case foo of (foo, bar) => bar
<lilyp>berkowski: they're in a different profile
<unmatched-paren> of (bar, baz) => baz
<lilyp>you can use the -p flag to guix package
<unmatched-paren>I can't remember the exact syntax but that's approximately it
<unmatched-paren>the of ... need to be lined up
<unmatched-paren>oh, yeah, do has the same thing:
<mitchell>berkowski: guix package will list the profile at ~/.guix-profile but guix home stores everything in ~/.guix-home. Also guix package --profile= can be used to point to different profiles
<unmatched-paren>do foo <- bar
<unmatched-paren> bar <- baz
<unmatched-paren>If I was making a haskell-like language, the first thing I would change is the syntax
<unmatched-paren>(and then remove default lazy evaluation...)
<mitchell>and add parenthesis!
<mitchell>and macros
<unmatched-paren>--actually i wouldn't but don't tell anyone i might be executed by the scheme lords--
<mitchell>I'm calling the police
<berkowski>lilyp, mitchell: Yeah, ok that makes sense. Slowly migrating to `guix home` on a foreign distro and I'm liking it so far. Stuff like this trips me up though
<unmatched-paren>Meanwhile, if I was making a Scheme-like language, the first thing I would do is add static types...
<mitchell>It's a fundamentally different approach
<mitchell>unmatched-paren: I'm a common lisp guy which allows type specifications. Does scheme not allow this?
<lilyp>the fun thing about scheme is that lazy evaluation seems easier than static types
<unmatched-paren>mitchell: No, as far as I know
<lilyp>mitchell: there are dialects which do, but no rnrs
<jackhill>unmatched-paren: Options exist: 😁
<lilyp>can we bootstrap haskell with rascal?
<unmatched-paren>lilyp: what's rascal?
<unmatched-paren>oh, right
<berkowski>and yep, `guix package -p ~/.guix-home/profile -I` gives the list I expect. Thanks!
<unmatched-paren>just mentioned there, sorry /o\
<jackhill>I would assume not, but maybe it would be less work to bridge the gap than with ancient GHC and hugs?
<jackhill>and apparently rascal is now called Hackett. I don't know if Lexi is still working on it.
<mitchell>I am trying to create an installation os with a pre-built os that my pinebook can just copy to the emmc. How should I go about doing this?
<apteryx>hm, I'm getting 'error: channel-source->package: unbound variable' on make check-system TESTS=basic
<apteryx>time to 'make clean-go' ?
<jpoiret>just clean the corresponding go file, no?
<jpoiret>it should be in gnu ci
<apteryx>thing is it hasn't moved since 2020
<mitchell>Can I treat an operating system like a package? How do I put something like a vm script into an operating system declaration?
<apteryx>jpoiret: OK, rm gnu/ci.go helped, thank you
<mitchell>Or more specifically how do I include multiple operating system closures with a call to guix system
<mitchell>Or put another way, how can I ensure that the derivations the installation os comes up with are all available in the sd card store.
<dhruvin>It was mentioned that one needs a guix.scm file in their repo. I'm trying to understand how functional package management works, from scratch; like using guix/nix. Is this an example of it?
<lilyp>dhruvin: that's a basic definition, it will defer to whatever value these variables hold in the guix channel at time of evaluation
<dhruvin>So one specifies a list of dependencies as (native-|propagated-)?inputs and the exact versions are evaluated later right?
<dhruvin>I believe they are based on whatever current profile says they are.
<dhruvin>So it's up to the profile (derived from channels.scm IIUC) to keep them compatible with each other. Is this correct?
<apteryx>dhruvin: that example defines a package, the same as a package would be defined in the Guix project itself
<apteryx>the dependencies (after the ',' comma) variables are provided by the Guile modules imported via (use-modules ...)
<apteryx>and these modules come from the current version of Guix running
<apteryx>so rather than pinning each dependency uniquely, you pin the universe via the Guix commit used
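<editor's note>A guix.scm along the lines apteryx describes might look like the sketch below; "my-tool" and its metadata are invented, pkg-config is an example dependency, and the variables resolve against whatever Guix commit evaluates the file:

```scheme
;; guix.scm — sketch of an in-repo package definition.  "my-tool" is
;; a hypothetical project; pkg-config is an example native input.
(use-modules (guix packages)
             (guix gexp)
             (guix build-system gnu)
             ((guix licenses) #:prefix license:)
             (gnu packages pkg-config))

(package
  (name "my-tool")
  (version "0.1")
  ;; Build from the current checkout rather than a release tarball.
  (source (local-file "." "my-tool-checkout" #:recursive? #t))
  (build-system gnu-build-system)
  (native-inputs (list pkg-config))
  (synopsis "Hypothetical example package")
  (description "Sketch of a guix.scm kept at a project's root.")
  (home-page "https://example.org/my-tool")
  (license license:gpl3+))
```

With this at the repository root, `guix shell -D -f guix.scm` drops you into the package's development environment.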
<mitchell>thats a good way to put it
<apteryx>what I often though is specify a list of dependencies in a manifest.scm file, that can be consumed simply by 'guix shell'
<apteryx>what I often do*
<mitchell>I use a manifest.scm to create a profile for each project that gets activated by direnv scripts
<apteryx>an alternative way if the software is already in Guix and you don't need to experiment with its inputs much is to simply setup the development environment via 'guix shell -D the-package-in-guix'
<dhruvin>I will have to give a list of channels as well, right? If there are more than one source from which these packages come.
<dhruvin>apteryx: Yes, I used to use guix environment extensively.
<apteryx>dhruvin: channels are handled at the guix level, mainly ~/.config/guix/channels.scm. You can also specify a channel file checked in your repo
<mitchell>Oh is that how i'm supposed to replace guix environment
<dhruvin>mitchell: yes, guix environment -> guix shell -D
<dhruvin>So I don't specify the versions at all? The project that depends on my package does. Is that correct?
<dhruvin>Downstream project/user that depends on my package*
<unmatched-paren>guix pull --profile=guix-profile -C channels.scm will work probably
<unmatched-paren>and then you could do guix shell --profile=guix-profile ...
<unmatched-paren>maybe there's a simpler way?
<dhruvin>I used to use manifest.scm file. Wherein you can bring packages from old and unrelated (not in channels.scm) channels as well IIUC.
<unmatched-paren>oh, yeah, you could use an inferior
<unmatched-paren>in your guix.scm
<unmatched-paren>if you really, really want to lock down the guix version
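<editor's note>The inferior approach unmatched-paren refers to looks roughly like this (following the pattern from Guix's manual; the commit hash is a placeholder and "hello" is an example package name):

```scheme
;; Sketch of pinning one package to an old Guix commit via an
;; inferior, while everything else follows the current Guix.
(use-modules (guix inferior)
             (guix channels)
             (srfi srfi-1))

(define pinned-channels
  ;; Placeholder commit; substitute a real revision from `guix describe'.
  (list (channel
          (name 'guix)
          (url "https://git.savannah.gnu.org/git/guix.git")
          (commit "0123456789abcdef0123456789abcdef01234567"))))

;; An inferior is a separate Guix process running that old revision.
(define pinned-guix
  (inferior-for-channels pinned-channels))

;; Look up a package as it existed at that commit; usable in a
;; manifest alongside current packages.
(first (lookup-inferior-packages pinned-guix "hello"))
```

This is how one pins, say, a kernel version while letting the rest of the profile track the latest channel commit.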
<mitchell>Its a handy trick to avoid kernel rebuilds every other update
<unmatched-paren>I've used it when a kernel update broke my system
<unmatched-paren>e.g. there was a bug where the system hung at boot because of something to do with iwlwifi
<dhruvin>I think channel commits define snapshots of (possibly) compatible set of packages, whereas if my requirements get complex, I transform those packages in question using guix's programming interface.
<dhruvin>Is that correct to say?
<lilyp>that works, but if your requirements get "complex" it might be easier to just write out the graph on your own
<mitchell>They should all be compatible. If you need to do complicated things in your channels, you should create new package variants through the Scheme API. You can include arbitrary amounts of code in your channels to do the complex things, so that you shouldn't need to do anything more complicated than swapping out sources at the command line
<dhruvin>Alright. We are depending on channel provider to make sure of compatibility. If it's not met, we make packages compatible, using the scheme api mitchell mentioned.
<dhruvin>Now about updates. We still rely on channel commits for majority of our dependencies.
<dhruvin>And we only update or pin packages that we want in manifest. Using inferiors that unmatched-paren mentioned.
<dhruvin>Is that correct to say?
<mitchell>You don't necessarily need to pin things in the manifest. If you want to freeze time, you can use the output of guix describe with guix time-machine in order to recreate things as they are right now from a manifest. The manifest is defined by the guix used to evaluate it.
<dhruvin>time machine will cause every package to be from an older commit, right?
<dhruvin>Pinning is helpful when I need old definition of one package but new of others.
<dhruvin>Although I suspect I'll be needing this often.
<unmatched-paren>dhruvin: You can do pinning with inferiors,
<unmatched-paren>and yes, time-machine reverts to older guix commits
<unmatched-paren>*reverts everything
<mitchell>But if you want a package as it existed at some time you can use time-machine to get it without affecting other things in your profile.
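The describe/time-machine workflow mitchell describes can be sketched as follows (requires a Guix system; the lock-file and manifest names are illustrative):

```shell
# Record the exact channel commits in use right now:
guix describe -f channels > channels-lock.scm

# Later, recreate the environment exactly as it would build today,
# without touching anything else in your profile:
guix time-machine -C channels-lock.scm -- shell -m manifest.scm
```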
<dhruvin>So the unit of distribution here is a package definition. And you have channels that host these definitions and keep them compatible and up to date. If your needs diverge from channels' definitions you use programming interface to specify what you want, or you can replace channel package with whatever you wish, locally in your manifest.
<dhruvin>I believe this summarized what I was trying to understand.
<dhruvin>Thanks everyone for your valuable inputs in clearing out my doubts. I hope others may have found something helpful from this. :)
<mitchell>libtsm is failing in the configure stage because it can't find XKBCommon when cross compiling to aarch64-linux-gnu. libxkbcommon is a native input, is this the reason? Should it actually be a regular input?
<unmatched-paren>mitchell: yes
<unmatched-paren>C libraries should always be regular inputs, unless they're only used during build (e.g. libcheck, since it's only linked when testing)
<unmatched-paren>s/only/usually only/
<mitchell>Should I make a patch? Can i finally be useful?
<unmatched-paren>Probably, yeah
<unmatched-paren>mitchell: oh, wait
<unmatched-paren>it seems to only use a header file from xkbcommon
<unmatched-paren>and not the actual library
<unmatched-paren>so native-inputs should be fine...?
<unmatched-paren>oh, maybe it's interacting badly with cmake
<unmatched-paren>try moving it to inputs and see whether it works i guess -.o.-
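A hedged sketch of the change being discussed: when cross-compiling, native-inputs are built for the build machine, so a library the target needs at configure/link time generally belongs in inputs. Field contents are abbreviated; the real libtsm definition has more fields.

```scheme
;; Sketch only, not the actual Guix definition.
(define-public libtsm
  (package
    (name "libtsm")
    ;; version, source, build-system, etc. elided ...
    ;; libxkbcommon moved from native-inputs to inputs so the
    ;; aarch64-linux-gnu configure step can find it for the target:
    (inputs (list libxkbcommon))
    (native-inputs (list pkg-config))))
```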
<mitchell>will do
<apteryx>rekado_: hi! I'm looking at reassigning that .41 IP from node 130 to node 129; but I don't see where that IP is set.
<apteryx>node 130 seems just a regular node defined in berlin-nodes.scm
<rekado_>mitchell: we’ve already got a Haskell implementation in lisp. See yale-haskell. It’s just not useful for GHC.
<rekado_>we also have Hugs (written in C), which provides Haskell 98 (+ some extensions), but you can’t build even an ancient version of GHC with it.
<rekado_>apteryx: I assigned the IP manually on node 130, like the troglodyte sysadmin I truly am.
<apteryx>haha, ok
<rekado_>mitchell: the most realistic way forward is to build GHC 4 from generated C code.
<rekado_>mitchell: it wouldn’t be strictly built from source, but that would still be orders of magnitude better than using a big binary of GHC 7 to build later versions.
<efraim>successful hacking today with my daughter, we got ldc building with GDC so now we should be able to get D packages for more architectures!
<unmatched-paren>efraim: awesome! I'd tried that before but hit a wall with cmake
<unmatched-paren>CMake likes to put walls in front of you.
<apteryx>rekado_: the eno2 interface of 130 is down, do you know why?
<apteryx>I wanted to test that external connectivity to it worked with a temp SSH user before proceeding to the move
<rekado_>apteryx: it says NO-CARRIER; that suggests that no cable is connected.
<rekado_>could also be the switch; maybe IT hasn’t allowed that network interface to connect
<apteryx>I think I could attempt SSH to it a week ago, if memory serves me
<rekado_>SSH where?
<apteryx>to that .41 IP
<apteryx>or is that impossible?
<rekado_>no, that’s possible. But it was assigned on a different server.
<rekado_>and we’ve moved the cable from 130 to 129
<rekado_>maybe the switch config whitelists only the mac address of that one network interface on 130.
<rekado_>I just sent a message to the network team to ask about this.
<podiki[m]>is (package-name this-package) preferable to #$name ? (for copy-build-system install-plan paths)
<singpolyma>podiki[m]: depends who you ask
<podiki[m]>if I'm asking singpolyma? :-)
<podiki[m]>i'm wondering if it is one of those cases with package inheritance, though normally I think of that for things like inputs
<singpolyma>Oh, then (package-name current-package) for sure
<podiki[m]>well I'm not inheriting nor do I expect the package to be inherited, but trying to learn the good habits
<singpolyma>I just think #$name looks dumb and is harder to read, honestly
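For illustration, a hypothetical copy-build-system fragment contrasting the two spellings; the package name and file are invented. With (package-name this-package), the install plan follows along if an inheriting variant renames the package, whereas a literal string (or #$name outside the package scope) would not.

```scheme
;; Hypothetical package fragment; most fields elided.
(package
  (name "oauth2-helper")                 ;invented name
  ;; ...
  (build-system copy-build-system)
  (arguments
   (list #:install-plan
         ;; string-append runs at expansion time, yielding
         ;; '(("oauth2-helper.py" "bin/")).
         #~(list (list #$(string-append (package-name this-package) ".py")
                       "bin/")))))
```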
<podiki[m]>(packaging a little python program for oauth2 tokens, e.g. ms exchange email)
<unmatched-paren>Why is this crate refusing to build when `tinystr-macros` is not provided, even though it's an optional input and disabled by default?
<unmatched-paren>It has a bootstrapping problem if I do enable tinystr-macros
<podiki[m]>singpolyma: fair enough, a bit more explicit perhaps, though I feel like I'm always trying to save horizontal space :)
<unmatched-paren>because tinystr-macros requires tinystr
<singpolyma>Meh, scheme is bonkers verbose you kinda just have to live with it
<singpolyma>Like, delete-file-recursively. Srsly?
<unmatched-paren>fdel-rec would work just as well :P
<unmatched-paren>podiki[m]: yeah, i do too
<jackhill>singpolyma: if you don't like #$name, I suppose you could use the long form: (ungexp name) :)
<podiki[m]>meanwhile in common lisp land I found I quickly embraced the old car/cdr/caadar/etc. construction...yeah hard to read and parse at first, but efficient keystroke and space
<singpolyma>jackhill: that's better for sure
<unmatched-paren>podiki[m]: scheme has that too
<singpolyma>fdel-rec or you know rm-f
<apteryx>rekado_: ok, thanks. I'll try setting the IP on 129 and see if it's reachable
<rekado_>apteryx: unlikely, because the cable has been moved.
<rekado_>sorry, misunderstood
<rekado_>130 has no cable
<rekado_>confused myself!
***toluene4 is now known as toluene
<maximed>I noticed some discussion about distributions and package managers & rust
<maximed>Something that I think has been neglected in these kind of discussions:
<maximed>(Except for the bundling and pre-compiled things that occasionally happen), I actually like things like crates.io and pypi and contentdb
<maximed>but not Cargo and pip
<maximed>I think we should make a distinction between distributions and ‘package registries’.
<maximed>Package registries like pypi and crates.io (with their metadata like whatever pypi has and Cargo.toml for Rust) are useful to distributions as a clear source on
<maximed>- what, exactly, does 'foobar' refer to?
<maximed>- where can I find the source code? (tarball, version control repo, whatever)
<unmatched-paren>maximed: agreee
<maximed>- and what are the dependencies (+ a guess on minimum and maximum versions, though often those are inaccurate and can be widened by the distribution)
<unmatched-paren>although it does centralize things to some extent
<unmatched-paren>perhaps there should be a common protocol for describing packages from package registries?
<maximed>unmatched-paren: Yep (on the centralisation)
<maximed>though FWIW, in case of ContentDB (for minetest mods), we currently download the actual mods from upstream's git repository
<maximed>so if ContentDB goes down, not a huge problem
<unmatched-paren>I don't see the point of storing source code on crates.io. If you're worried about archiving and longevity, there's swh...
<maximed>OTOH, aren't the individual distributions Debian, Guix, ... examples of centralisation?
<unmatched-paren>maximed: I guess you're right.
<maximed>unmatched-paren: Then SWH becomes a single point of failure though ...
<maximed>Though FWIW, I'd like to not only include,
<maximed>but also some alternative locations
<unmatched-paren>maximed: fair. We need SWHH to archive SWH just in case!
<maximed>So we have upstream's website, crates.io, SWH, ...
<maximed>(also: saves some tarballs)
<maximed>and FWIW, there's some ideas on downloading this more decentralised & distributed via IPFS & GNUnet and such: e.g.,
***toluene1 is now known as toluene
<maximed>(I'd like to add "gnunet://fs/..." URIs to package origins someday, but the gnunet-scheme code isn't there yet ...)
<maximed>bjc,lilyp: Antioxidation levels are at 97% (if you don't count me having changed how packages are submitted; before that, it was more like 65% or something)
<maximed>With caveats: no tests yet, optimisation settings are just -O0 for now, ...
***mark_ is now known as mjw
<apteryx>what should the reply from openssh look like when attempting to login as root?
<apteryx>we have ‘permit-root-login’ (default: ‘#f’), but I get to the passphrase or public key challenge anyway
<apteryx>shouldn't it abort before that?
<apteryx>OK, it does work though. Probably SSH accepts the connection and then it is rejected by PAM.
***pi2 is now known as johnjaye
<apteryx>anything we can do for ?
<apteryx>I tried adding a static-networking-service for hydra-guix-129, and ended up in such a situation
<apteryx>oh, %base-services already defines a static-networking-service-type for the loopback
<apteryx>so I need to modify that instance
<civodul>hey, wazup?
*civodul goes through a terrible week with little hack time
<apteryx>civodul: howdy!
<unmatched-paren>civodul: \o
<ternary>What's the best way to test a shepherd service that I am making in a custom channel without having to commit/push/guix pull?
<apteryx>is someone extending static-networking-service-type? i'm trying but failing so far
<apteryx>so far, what I have/see:
<apteryx>if I remove the static-networking-service-type modification, the config builds fine
<maximed>ternary: You can use a channel's code without registering it as a channel. More concretely:
<maximed>You could remove the channel from your list of channels, and instead tell guix with each command where to look for extra stuff: "guix build -L /location/of/git/checkout/of/channel channel-package"
<maximed>"sudo guix system reconfigure -L /location/... configuration.scm"
<maximed>(Removing the channel is not necessary per se, e.g. I haven't needed to do it for antioxidant, but seems rather fragile to have it both in your list of channels and in -L w.r.t. which module ‘wins’ etc.)
<ternary>Oh I didn't realize a reconfigure worked with a local path. That removes some of the pain for sure
<ternary>I assume I can't test the service without doing a reconfigure though
<maximed>ternary: You can, to some degree, with VMs!
<maximed>"guix system -L$THE_DIRECTORY vm the-configuration-of-the-vm.scm"
<maximed>Unless I got the wrong subcommand, it will output the location of a shell script that you can run to automatically start a VM with the given system configuration.
<maximed>Warning: if it does anything graphical, chances are that the default memory limit is too low, so you might need to add a -m3G or such somewhere.
<ternary>Oh that's a cool idea. I'll have to give that a shot
<maximed>Also there's some kind of number of cpus option you can add, which makes things much faster.
<maximed>Don't recall the exact way to add -m and the cpu option though ...
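If memory serves, the script produced by `guix system vm` forwards any extra arguments straight to QEMU, so memory and CPU count can be appended to the script invocation; the store path below is illustrative:

```shell
# Build the VM start script for the given configuration:
guix system -L "$THE_DIRECTORY" vm the-configuration-of-the-vm.scm
# prints something like /gnu/store/...-run-vm.sh

# Extra arguments go through to QEMU: -m sets memory, -smp the CPU count.
/gnu/store/...-run-vm.sh -m 3072 -smp 4
```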
<ternary>I mostly just want to test if it starts and gets configured properly, so I shouldn't need any of that
<maximed>You can, of course, just do "guix system -L... reconfigure" on your system, outside a VM (simple!)
<maximed>though sounds rather daring to do things directly on an important ‘live’ system to me
<maximed>nevermind, you probably referred to -m and cpus
<apteryx>rekado_: I understand why you did it manually, I've been going at this for at least an hour with no solution in sight
<civodul>apteryx: the code at LGTM, except that 'provision' must be a pair i think
<civodul>it's not an extension, but you could also extend
<civodul>as in: (simple-service 'ext static-networking-service-type (list (static-networking ...)))
<civodul>(untested but with 80% confidence :-))
<apteryx>does extending remove the requirement to set provision to some value?
<apteryx>I think it either should accept no provisioning or document that it's needed but be careful to not provide 'networking multiple times because that's the default, or something, as seems to have bitten here:
<apteryx>civodul: phew, it works. thanks for saving my sanity
<apteryx>the problem was indeed (provision '())
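Putting the pieces of this thread together, a sketch of the extension civodul suggested with the provision fix applied; the device, address, and shepherd service name are illustrative. (provision '()) fails, and 'networking is already provided by the loopback instance in %base-services, so a distinct name is needed.

```scheme
;; Sketch: extend rather than modify the base instance.  Goes in the
;; services field of the operating-system declaration.
(simple-service 'extra-network static-networking-service-type
                (list (static-networking
                       (addresses
                        (list (network-address
                               (device "eno2")              ;illustrative
                               (value ""))))  ;illustrative
                       (requirement '())
                       ;; Must be a non-empty list, and must not collide
                       ;; with the default 'networking provision:
                       (provision '(networking-eno2)))))
```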
<maximed>I'm thinking of setting the optimisation flags of rustc appropriately.
<maximed>there's a 'thin' and 'fat' version of LTO
<maximed>More specifically, there's this blog post:
<maximed>I'm thinking of using the default thin-local LTO, and setting opt-level=3 (the cargo default)?
<rekado_>apteryx: have you successfully done it manually yet?
<apteryx>first manually, and now via the config
<apteryx>rekado_: ^
<rekado_>ah, good
<apteryx>it works, but something's missing to reach iDRAC yet
<maximed>Alternatively, I could go for space savings with opt-level=s/z
<rekado_>i see that ping .41 actually works
<rekado_>guess I should retract my request for help from the networking team
<maximed>Will go for Cargo's default opt-level=3 for now (which enables LTO by default IIUC)
<apteryx>yep, you can try SSH'ing to it, it will only allow SSH keys now
<apteryx>rekado_: I've pushed the config now, will send an email
<maximed>What level of debugging information would be desired?
<maximed>The default is no debug info at all.
<maximed>I would go for at least line tables?
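The settings under discussion map to rustc codegen flags roughly as follows; how antioxidant actually wires them in is not shown here, this is just the flag-level correspondence:

```shell
# opt-level=3: Cargo's release default ('s'/'z' would favour size instead)
# lto=thin:    ThinLTO; 'fat' is the more expensive whole-graph variant
# debuginfo=1: line tables only (0 = none, 2 = full debug info)
rustc -C opt-level=3 -C lto=thin -C debuginfo=1 main.rs
```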