<sss1>note that it's an i686 machine
<vagrantc>sss1: but a separate /boot partition isn't supported, as far as i know
<sss1>i still see guix system as a potential replacement for my current host systems, but it looks like it's still too early
<sss1>i really love the concept of nixos/guix )
<sss1>in the past i have wasted a lot of time to achieve what can now be achieved with just a few commands on nixos/guix
<raghavgururajan>vagrantc: A separate boot partition is supported when using grub (not grub-efi) with a GPT scheme.
<sss1>isn't grub-efi always separate ?
<nckx>(= No, it's never supported, no matter your partition scheme.)
<sss1>as far as i know the efi partition is fat only ....
<nckx>raghavgururajan: Thanks!
<raghavgururajan>A GPT scheme with Legacy BIOS or UEFI requires a BIOS Boot Partition or an EFI Boot Partition, respectively.
<nckx>Which has nothing to do with /boot.
<vagrantc>but what handles copying kernel+initrd and company to the /boot partition?
<nckx>There are partitions, with 'boot' in the name, but there endeth the surface-level similarity that isn't. ☺
<sss1>so my problem is, luks support in grub is limited (as of my last test). and i want to use luks2 + argon2id, which is not supported in grub yet ?
<nckx>The 'BIOS boot partition' is a few 100K of raw bytes (no file system) that stores the fat of GRUB that won't fit in the traditional 'pre-MBR gap'. It can't be mounted, at /boot or anywhere else.
<nckx>sss1: Then you're out of luck with the mechanisms currently provided by Guix.
<nckx>You can copy bzImage + initrd to an unencrypted /boot partition & tweak grub.cfg 'by hand' (e.g. a bash script) but it's not managed by Guix (yet).
<sss1>nckx: i can wait, currently i am using a hand-made init, at least at the boot stage
<vagrantc>or add support for luks v2 to grub, and make many people happier :)
<nckx>It's partially supported IIRC, just not 'well'...
<nckx>So the path to happiness is becoming shorter, just at a very GRUB pace.
<attila_lendvai>how are tests run when cross-compiling a package? i'm reading that dependencies for tests should go into native-inputs, but the host won't be able to run the tests when cross-compiling...
<sss1>guix can't connect to itself for offloading
<nckx>Used to be a known bug, maybe still not fixed.
*nckx pings apteryx who was last mumbling about it (ages ago, mind you).
<sss1>also, guix pull still can't offload
<civodul>the Xiden threads on guix-devel were a bit... wild
*vagrantc remembers running "guix pull" on a single-core ~1GHz armhf machine with 512MB of ram
<vagrantc>the lack of offloading was very eventful from the system's perspective, but very uneventful from my perspective
<nckx>I don't remember offloading being recursive...
<nckx>As in, I once wanted it to be & it wasn't.
<sss1>hm...., the problem with the ssh pubkey is gone now, i have done nothing about it .....
<attila_lendvai>is it possible to define a local function somewhere to be used inside a modify-phases?
<attila_lendvai>although, what i'm getting is not a scheme error, so scratch that. i get guix build: error: some outputs of `[...].drv' are not valid, so checking is not possible
<attila_lendvai>replacing --check with --rounds=2 gets me to the scheme error. so, there was a reason after all for that ugly copy-pasting inside go-github-com-apparentlymart-go-openvpn-mgmt...
<civodul>sss1: for offloading purposes, you need an SSH key without a passphrase
<nckx>You also need to make sure that root on the user-facing machine has accepted the offloading server's host key. This is stateful. E.g. by running 'sudo -i ssh user@machine' once by hand.
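For reference, offload machines are declared in /etc/guix/machines.scm. A minimal sketch, with placeholder host name, user, and key paths (depending on the Guix version the field is `system`, a string, or `systems`, a list):

    ;; /etc/guix/machines.scm -- sketch only; names and key paths are placeholders.
    (list (build-machine
            (name "builder.example.org")                     ; offload server
            (systems (list "x86_64-linux"))                  ; what it can build
            (user "offload")                                 ; SSH user on the builder
            (host-key "ssh-ed25519 AAAA... root@builder")    ; its public host key
            (private-key "/root/.ssh/id_ed25519_offload")))  ; passphrase-less key

As discussed above, root on the client must also have accepted the builder's host key once, e.g. with a one-off 'sudo -i ssh offload@builder.example.org'.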
<nckx>It's quite possible to change something without realising it ('doing nothing') that changes how & whether offloading works.
<sss1>i am familiar with openssh and public keys, so all this is done of course
<nckx>I assume you are or you wouldn't have got this far to begin with.
<sss1>probably i ran the test in the wrong terminal with the wrong user in the first place
<nckx>Being familiar doesn't mean not making mistakes.
<attila_lendvai>so, in scheme code, inside a package form, i cannot use any abstractions besides variables? or is that a peculiarity of the modify-phases macro?
<dissoc>im working on writing a package with patches. when i use search-patches i get error: patch not found. it's in packages/patches/foo.patch
<flatwhatson>dissoc: are you writing it inside a guix clone, or in your own channel?
<dissoc>also i was installing the package: guix package --install-from-file=file.scm
<flatwhatson>you need your own search-patches function. the existing one will only search gnu/patches
<dissoc>ah. i see. simple enough. thanks again
<iskarian>attila_lendvai, regarding pinning versions for go-ethereum: it's a tough call, because it means we have more package versions to maintain. Maybe it would be best to stick go-ethereum and all the version-specific packages in a separate .scm file?
<sneek>iskarian, you have 1 message!
<sneek>iskarian, maximed says: About the git-fetch patches: I've written those let& and let*& macros and let them replace the 'let' and 'let*' macros and it seems to work (with some changes). I should be able to submit them sometime this week
<attila_lendvai>iskarian, yep, that makes sense, to add a go-ethereum.scm with all the pinned stuff in it. but is it The Right Way to do it in the long term? to add countless go packages using guix import go?
<attila_lendvai>iskarian, or hacking more on the other way where go itself downloads the dependencies (see my recent mail to guix-devel)
<iskarian>I don't know. This is one of those places where the Go methodology and the Guix methodology really conflict. Go wants to have a million versions of everything, forever available, and to allow any package to mix-and-match. Guix wants the minimal number of versions of something available.
<attila_lendvai>i'm willing to do either of them, but i lack the bird's eye view perspective to decide
<attila_lendvai>iskarian, that sounds like the second way, to allow go to fetch stuff, and guix to check the hashes
<iskarian>How do we check hashes if it's not already in Guix? ;) We're not just going to trust a hash from a proxy
<vagrantc>iskarian: verify according to whatever mechanisms upstream provides ...
<vagrantc>iskarian: check that tarballs reasonably match VCS repositories at the very least ...
<vagrantc>not that guix does well there, as it ships lots of things not in VCS
<iskarian>I believe what attila_lendvai is suggesting is to have packages be able to download versions of go dependencies not explicitly specified in Guix
<vagrantc>how do you verify hashes of an unknown object?
<vagrantc>perhaps some schools of magic have some tricks :)
<iskarian>Go provides a facility for providing a go.sum file which contains a hash of the repository; but because it's in-channel it only protects against corruption in transit, it does not solve the issue of "how do I know the source repository hasn't been tampered with?"
<attila_lendvai>iskarian, yep. allow go's package manager to fetch all the things, and then calculate/check a hash of it
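On attila_lendvai's earlier question about local functions inside modify-phases: the phase bodies run on the build side, so an ordinary `define` or `let` inside the phase lambda works. A minimal sketch (the file names are made up; substitute* and which come from (guix build utils)):

    ;; Illustration only: a helper defined locally and reused within a phase.
    (arguments
     `(#:phases
       (modify-phases %standard-phases
         (add-after 'unpack 'patch-scripts
           (lambda _
             (define (fix-script file)              ; local helper
               (substitute* file
                 (("/bin/sh") (which "sh"))))
             (for-each fix-script
                       '("scripts/run.sh" "scripts/test.sh"))
             #t)))))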
<iskarian>This would be a new way of doing things for Guix; currently, all packages are separately verified, compiled, tested, and so on.
<iskarian>In addition you end up with duplication of effort when a dependency has to be modified for Guix, with the same change being copied to all packages using it.
<attila_lendvai>what's "the derivation hash"? the hash of the binary output? i may be misunderstanding/misrepresenting what's described on that issue.
<attila_lendvai>iskarian, i don't understand your last remark about duplicate effort. this way i just calculate the hash, add it to the go-ethereum package, and leave everything to go. guix packaging of go stuff becomes completely orthogonal to this.
<attila_lendvai>note that NixOS does the same, i think, although it puts all the dependencies into a tgz and puts it into the store/cache, so that vendoring is memoized
*attila_lendvai hasn't learned the proper nomenclature yet
<iskarian>Let's say that golang.org/x/net requires a patch to work correctly on Guix. Then if we start packaging Go packages with all their dependencies as part of the source, then every package which uses golang.org/x/net will have to copy-and-paste that patch. Currently, only the go-golang-org-x-net package would have to be patched.
<iskarian>Additionally, this means that there will be a copy of the source for golang.org/x/net in every package which uses it, rather than one copy.
<attila_lendvai>iskarian, i think such patching will be a rare thing with go, but this is only an impression
<attila_lendvai>iskarian, of the *source*? guix also stores the source of everything in the store?
<iskarian>(And not only a copy of the source, but the dependency will have to be fetched for every package.)
<vagrantc>well, you could create "packages" that only ship sources and patch them once, and use them for various inputs
<iskarian>That's essentially what Go does currently, except most of these also build and test themselves
<iskarian>most Go packages in Guix actually only put their source in the output; no compiled artifacts
<attila_lendvai>iskarian, out of curiosity, do you know how go vendoring is done on NixOS? because i don't really know the details, and i'm wondering whether that could/should be "ported" to guix...
<iskarian>I doubt that Nix's methodology would square with Guix's
<iskarian>(I'm not familiar with the Go effort in Nix)
<iskarian>Yes, and to be clear, I'm not trying to shut you down, but rather argue the harder points first to see if there is a way forward
<iskarian>It would be lovely to not have to handle the mess of Go(/Rust/...) dependencies in Guix, but the current trajectory seems to be to package dependencies individually
<iskarian>(I am a relative newcomer to Guix myself; only been around a few months)
<attila_lendvai>how do i see which go packages are not built, merely sources? all i see is (build-system go-build-system) in golang.scm, which i assume means compiling them. which brings the question, what if a project wants to be compiled with a different version of go itself?
<iskarian>Some packages (like go-golang-org-x-net) use #:tests? #f and delete the build phase, and those are definitely merely sources. However, I would say the vast majority of "go-..." packages only have source in their output
<iskarian>To compile a package with a different version of go, say "go-1.16", use (arguments `(#:go ,go-1.16))
<attila_lendvai>iskarian, indeed. i just checked go-github-com-apparentlymart-go-openvpn-mgmt and its output only has sources. i don't understand why, though.
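Putting those two knobs together, a rough sketch of a source-only Go library pinned to a specific Go toolchain might look like the following (name, URL, version, and hash are placeholders; the usual modules, including (guix build-system go), (guix git-download), (gnu packages golang) and ((guix licenses) #:prefix license:), are assumed to be imported):

    ;; Sketch only; all specifics below are placeholders.
    (define-public go-example-org-foo
      (package
        (name "go-example-org-foo")
        (version "1.2.3")
        (source (origin
                  (method git-fetch)
                  (uri (git-reference
                        (url "https://example.org/foo")
                        (commit (string-append "v" version))))
                  (file-name (git-file-name name version))
                  (sha256
                   (base32 "0000000000000000000000000000000000000000000000000000"))))
        (build-system go-build-system)
        (arguments
         `(#:import-path "example.org/foo"
           #:go ,go-1.16      ; build with a specific Go toolchain
           #:tests? #f))      ; source-only packages sometimes skip tests
        (home-page "https://example.org/foo")
        (synopsis "Placeholder synopsis")
        (description "Placeholder description.")
        (license license:expat)))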
<iskarian>Currently only go-1.14 and go-1.16 are packaged in Guix
<iskarian>I have a patch for go-1.17 which I haven't yet sent for various reasons, but soon(tm)
<vagrantc>doesn't go statically compile everything? so for any go library, you would just need the sources available and not a compiled library ... unless i'm missing something
<iskarian>in Guix, no Go package uses compiled artifacts from any other Go package in order to build
<attila_lendvai>but how come i've seen some tests failing? the go-build-system by default tries to build it, run the tests, but only packages the source when that succeeded?
<iskarian>Okay, so. Currently, every dependency is a separate Guix package. Each dependency is treated just like a normal Guix package, and since Guix builds and tests all the inputs to a package before building that package, all those dependencies are built and tested.
<iskarian>However, because the Go build system has not seen a lot of love, no non-executable artifacts are saved; only the source is copied into the output.
<iskarian>(which works, because Go needs the source of all dependencies, but build artifacts just serve as a cache)
<attila_lendvai>iskarian, i think saving binaries may not be a good idea. e.g. the same dependency is used by two packages that need to be compiled with different versions of go. (not sure whether linking those is supported by go)
<iskarian>attila_lendvai, you're roughly correct, it would be wasted space; but it wouldn't hurt, since Go just treats them as a cache and recompiles them if it would produce different output
<attila_lendvai>tools like go-ethereum should be reproducible builds, and compiling a random dependency somewhere with a different version of go may lead to a different binary result
<iskarian>It doesn't save much time anyway, so probably not worth it
<attila_lendvai>(keep in mind though, that i'm very new to this. add pieces of salt as necessary... :)
<iskarian>when you say "reproducible builds", do you mean the Guix package should be reproducible by different people using the same definition, or that the Guix package should produce the same output as someone compiling from source on a foreign distro?
<iskarian>Ah, in that case version pinning is definitely necessary.
<attila_lendvai>hrm... ok, so i think i'm convinced to give guix import go -r --pin-versions a try, importing all the dependencies into a separate go-ethereum file/package
<attila_lendvai>doesn't seem to be an unreasonable amount of work, especially if i put them in its own package
<iskarian>I do think it's worth having the larger discussion about packaging source-only dependencies separately for Rust and Go
<attila_lendvai>iskarian, heh, it's sleep time here, don't give me nightmares! :)
<attila_lendvai>damn, it's past 3am here. iskarian, thank you for the help and brainstorming! i'll give it a go tomorrow morning, and report back with my progress
<iskarian>I'll have to look into the importer issue
*attila_lendvai has patched/hardwired the importer to deal with that one url... :)
<iskarian>Okay, I see the issue. I thought I was clever when I decided to check that the import was for the correct url. Perhaps it should fall back to the first one if none match?
*attila_lendvai waves goodbye and leaves
<iskarian>I vaguely recall an effort to overhaul the Rust build system/ecosystem. Does anyone know who might be behind that?
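The "separate .scm file" suggested above would just be an ordinary module in a checkout or channel; a hypothetical header (the module name is made up) could start like this, with the pinned go-* packages produced by `guix import go -r --pin-versions` pasted below it and the go-ethereum package last:

    ;; Hypothetical module for go-ethereum and its pinned dependencies.
    (define-module (my-channel packages go-ethereum)
      #:use-module (guix packages)
      #:use-module (guix git-download)
      #:use-module (guix build-system go)
      #:use-module ((guix licenses) #:prefix license:)
      #:use-module (gnu packages golang))

    ;; ...pinned go-* dependency packages go here, then go-ethereum itself.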
<iskarian>sneek, later tell maximed: The implementation goes over my head, but the overall approach seems sound. I still think the (let (...) (package ...)) idiom feels clunky and should be replaced with something else, though. Also, I think https://issues.guix.gnu.org/50274 might have the git-fetch updater effort in mind? :)
<iskarian>sneek, later tell attila_lendvai: One issue you may encounter with reproducibility with go-ethereum is that the Go build system does not use modules (yet!) and I believe Go embeds the module version of dependencies in built artifacts, so the result may differ if go-ethereum is supposed to be built in module-aware mode.
<iskarian>sneek, later ask efraim: Were you the one who had a plan to convert cargo inputs into regular inputs? If so, I would be interested in an overview of your approach.
<clone1>Did anyone ever come up with a way to recompile all of the emacs packages you use using emacs-next? I can find discussion about it over the years but i didn't see any solution
<sneek>clone1, bricewge says: By any luck, do you still have the code from which you submitted #46907? It's inapplicable with git "error: corrupt patch at line 15". Would you mind re-sending the patch?
<apteryx>clone1: nothing clean cut is readily available, but there was an attempt based on package-with-python that got close (I tried it, had some issues)
<apteryx>with motivation it could probably be made to work
<sneek>Welcome back leoprikler, you have 1 message!
***o is now known as niko
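One direction such an attempt could take is input rewriting, sketched below; note this only rewrites the dependency graph and does not change the Emacs used by emacs-build-system itself (its #:emacs argument), which is part of why nothing clean-cut exists. emacs-magit is just an example package to transform.

    ;; Sketch: make packages that depend on "emacs" use emacs-next instead.
    (use-modules (guix packages)
                 (gnu packages emacs)
                 (gnu packages emacs-xyz))

    (define with-emacs-next
      (package-input-rewriting/spec
       `(("emacs" . ,(const emacs-next)))))

    (with-emacs-next emacs-magit)   ; example package to rewrite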
<efraim>iskarian: yeah, that's me. My plan was basically cargo-inputs -> inputs and cargo-development-inputs -> native-inputs, try to find any circular dependencies among the ~2500 packages, and see about disabling tests and removing some really old and potentially unneeded packages that got pulled in through cargo-development-inputs
<sneek>efraim, iskarian says: Were you the one who had a plan to convert cargo inputs into regular inputs? If so, I would be interested in an overview of your approach.
<PurpleSym>zimoun: Wrt sanity-check: Maybe we can just replace the / with a dot and it'll import/build fine? Can't go into depth right now unfortunately.
<leoprikler>raghavgururajan: maybe because it's the wayland backend?
<raghavgururajan>But wait. Wouldn't wayland clients say "Could not open wayland-display: :1"?
<leoprikler>Well, the common denominator in those failing tests is that they set the GDK_BACKEND to wayland
<leoprikler>so either you disable those tests or find a way to use wayland as the backend
<leoprikler>so you get one round of tests for x11 and the same for wayland
<raghavgururajan>(system "Xvfb :1 +extension GLX &") (setenv "DISPLAY" ":1") (system "weston-launch --backend=x11-backend.so")
<mothacehe>apteryx: many thanks for the python path & repack fixes, I'm currently testing them :)
<attila_lendvai>iskarian, thanks for the importer fix! i ran it on go-ethereum: ./pre-inst-env guix import go -r --pin-versions github.com/ethereum/go-ethereum@v1.10.8 >/tmp/x.scm but apparently it's possible to refer to subdirectories in go.mod...
<sneek>Welcome back attila_lendvai, you have 1 message!
<sneek>attila_lendvai, iskarian says: One issue you may encounter with reproducibility with go-ethereum is that the Go build system does not use modules (yet!) and I believe Go embeds the module version of dependencies in built artifacts, so the result may differ if go-ethereum is supposed to be built in module-aware mode.
<attila_lendvai>iskarian, these are entries in the go.mod of go-ethereum: github.com/aws/aws-sdk-go-v2 v1.2.0 ; github.com/aws/aws-sdk-go-v2/config v1.1.1 ; github.com/aws/aws-sdk-go-v2/credentials v1.1.1
<attila_lendvai>iskarian, these are subdirectories of the repo, and i guess the dependency means a checkout of only that directory, but at the specified version
*attila_lendvai shakes head...
<attila_lendvai>iskarian, re reproducibility: thanks, noted. it's probably somewhere down the road, though. let's first get to an executable that works... :)
<attila_lendvai>iskarian, an idea: add an error handler around go import, and allow the user to skip the dependency that errored out (for adding it by hand)
<apteryx>mothacehe: I'll try to test things here; I have these patches + rust + fontconfig locally to build; it'll take a little while
<apteryx>the rust one is to start the bootstrap from 1.39; the fontconfig one adds a search path for XDG_DATA_DIRS (which it supports for font discovery as of 2.13.94)
<apteryx>also since it now supports per-profile font discovery, we could/should probably drop "--with-add-fonts=", which causes the system & user profiles to be treated specially
<attila_lendvai>what is a let* entry with 3 elements?! this doesn't seem to be standard scheme, or i forgot how to websearch... (in lookup-node in recursive-import)
<dstolfa>attila_lendvai: let* allows for use of previously defined names in the following definitions
<dstolfa>so it has an order of evaluation, whereas regular let doesn't
<attila_lendvai>dstolfa, it has 3 elements in one of the bindings. i.e. (let* ((name value WTF?)) ...)
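For submodules like github.com/aws/aws-sdk-go-v2/config, the usual go-build-system idiom (as iskarian notes later) is to fetch the whole repository and select the subdirectory with #:import-path while keeping #:unpack-path at the repository root; schematically:

    ;; Fragment: how a module living in a subdirectory of its repo is expressed.
    (arguments
     '(#:import-path "github.com/aws/aws-sdk-go-v2/config"   ; the submodule
       #:unpack-path "github.com/aws/aws-sdk-go-v2"))        ; the repository root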
<dstolfa>oh i see. is it perhaps a macro invocation?
<attila_lendvai>i doubt entries in let* are macroexpanded. that would lead to insanity...
<dstolfa>so it's (let* ((name1 name2 name3 (foo))))
<dstolfa>mbakke: i usually use SRFI-8 for that kind of pattern :D
<zimoun>PurpleSym: about sanity-check, LGTM. And IMHO, you can push your patch because it is a fix.
<attila_lendvai>iskarian, i'll need to go offline now. please let me know whether you've read these, and/or whether i should summarize the issues/ideas with the go importer to guix-devel
<apteryx>civodul: hi! Is it expected to see indexing objects 29% [############### while doing 'guix pull --list-generations' ?
<jackhill>apteryx: did you experience that a week or so ago as well? If not this is the second report of that I've seen.
<jackhill>apteryx: ah, ok. I have no idea. I agree that it seems odd.
<apteryx>I do have a channel configured, so perhaps it's related
<jackhill>apteryx: I have a channel configured as well. Admittedly, I don't often guix pull --list-generations but i just tried and it didn't happen. Strange.
<cage> ./pre-inst-env guix install telescope -> "checking for libevent_core >= 2... no"
<cage>the funny thing is that if i download the same tarball and run the ./configure from the shell it works!
<roptat>cage, you can use -K (--keep-failed) with guix build and examine the content of config.log in /tmp/guix-build*
<roptat>that might give you a hint about what's wrong
<cage>roptat: seems that yours was a good suggestion, i have inspected the directory under guix-build and i noticed a file named "environment-variables"; if i try the configure script after "source environment-variables" the configure script fails with the same error as guix build
<apteryx>hmm, the "#:tests? must not be explicitly set to #t" lint check doesn't sit well with our Emacs build check phase (which is disabled by default).
<cage>roptat: without "source environment-variables" the configure passes
<roptat>it could be a missing dependency, so guix cannot set the right env vars in the build environment
<roptat>which variable is missing? then you can find which package provides a definition for it and add it to the package inputs
<roptat>note that the build environment is completely isolated from the host system, so it can only see what you declare in the recipe, it doesn't care what's installed on the system or in your user profile
<roptat>maybe you're indeed missing libevent?
<roptat>oh no, it's part of the definition already
<roptat>so what does config.log say about the check for libevent? how does it fail?
<cage>the environment vars in the "environment-variables" file seem to contain the libevent flags for the compiler and the linker
<iskarian>sneek, later ask efraim: so what's your solution to building all source packages? I believe avoiding that was part of the reason for putting inputs in cargo-inputs in the first place
<iskarian>sneek, later tell attila_lendvai: Currently, you would make three packages for those aws repos, and for the latter two use '#:unpack-path "github.com/aws/aws-sdk-go-v2"'. AFAICT "guix import go github.com/influxdata/influxdb-client-go/v2" works correctly; what's the issue?
*lfam starts working on linux-libre 5.14 packaging
<apteryx>good news! the rust bootstrap will be reduced to ~4 hours on core-updates after the patches for bootstrapping from 1.39 land. That's 25% of the time it would take on the master branch (~16 hours).
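The three-element let* bindings puzzled over above are most likely SRFI-71's multiple-values let*, available in Guile as (srfi srfi-71): a binding may list several names followed by one expression returning multiple values. A tiny sketch:

    ;; SRFI-71 extends let/let* so one binding can receive several values.
    (use-modules (srfi srfi-71))

    (let* ((quotient remainder (floor/ 7 2)))  ; two names, one expression
      (list quotient remainder))               ; => (3 1)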
<sneek>lfam, muradm says: could you run the test case for seatd/greetd services?
<cage>roptat: if you think it's useful i can paste the config.log
<apteryx>lfam: I wanted to make it as fast as possible, after it became a more integral part of GTK/GNOME.
<lfam>Yeah, it's definitely a problem right now for the branching workflow
<lfam>It's basically impossible to test core-updates locally
<podiki[m]>what's the easiest way to remove multiple services if there is already a modify-services form? put a remove lambda outside?
<dstolfa>apteryx: rust is now an integral part of gtk/gnome?
<lfam>apteryx: Do you think the bootstrap update could happen on a separate rust-updates branch that might be completed more quickly than the next core-updates?
*dstolfa gets ready to wipe gnome if minor security fixes end up becoming gigabytes in size
<lfam>Or do you intend to cherry-pick it to core-updates-frozen?
<lfam>Rust is going to become an integral part of the entire system sooner or later. If there are performance or space usage issues, we'll have to address that
<apteryx>dstolfa: it caused a debate in Debian a couple of years back
<podiki[m]>oh, modify-services has a delete (good ol' manual)
<cage>thanks again for your help
<iskarian>apteryx (+ others): Do you feel there could be a better way of representing Rust(/Go) dependencies rather than full package definitions? Some macro, perhaps?
<lfam>There have been discussions and proposals on the mailing lists, iskarian
<lfam>To me, the current situation with Rust is untenable, because it loses many of the advantages of Guix tooling
<lfam>Go is slightly better but still needs an overhaul
<iskarian>lfam, do you have references or something I can search for to find them?
<lfam>At this point, I think we should only have "packages" of end-user Go applications and either use bundled dependencies or have an under-the-hood dependency set-up mechanism based on go.mod
<lfam>Our packaging model is fundamentally different from how Go software is developed
<roptat>cage, oh! I think it's missing pkg-config in native-inputs
<lfam>iskarian: If you search for my name and "go" or "golang" in the mailing list archives, you should find a lot of discussion
<lfam>(I'd rather not type my name in the chat)
<iskarian>thanks, I'll have to do some mailing list archaeology (anthropology?) :)
<lfam>For a while I thought we could have a Go-specific method of instantiating Go dependencies based on revisions. So each Go library would have a canonical package, but then you could concisely instantiate a different revision when using the library in a package that uses it, without a lot of boilerplate
<lfam>And without publicly defining each revision
<lfam>I still think that would be an improvement on what we do now
<lfam>But I've grown weary of package inheritance; it's really painful to understand and edit the packages in an inheritance chain
<lfam>And like I said, we are really going against the grain of how Go is actually used by developers
<iskarian>I thought of that too. We still end up having to manage *a lot* of dependency packages
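roptat's diagnosis of the telescope failure above boils down to giving ./configure a pkg-config at build time so it can find libevent's flags; in the input style of the day that is roughly:

    ;; Fragment: pkg-config belongs in native-inputs (a build-time tool),
    ;; the probed library in inputs.
    (native-inputs
     `(("pkg-config" ,pkg-config)))
    (inputs
     `(("libevent" ,libevent)))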
<lfam>Since Go modules were introduced, our go-build-system is basically obsolete, so I think we need to fix that before inventing new Guix features, although motivation is not fungible
<lfam>I got tired of the churn in Go-land and I don't actually write Go, so I kind of gave up on taking care of it for Guix
<iskarian>I've been playing around with different overhauls of the build system that use modules, but like you said, there are some fundamental disconnects between Go and Guix-land that make it difficult without bolting on extra machinery
<lfam>That message is somewhat obsolete too. My understanding was still very primitive at that point. I will see if I can find a more recent summary / proposal
<lfam>Do you think the disconnects are largely in terms of dependency management? And the lack of "versions" in Go?
<iskarian>Rather, I think it's due to too many versions in Go
<lfam>Right, same difference :)
<lfam>Guix makes it possible and effective to have multiple versions of some package, but it's still inconvenient
<lfam>The reason I worked on Go for a while is that I wanted us to have a Syncthing package. I did a ton of work unbundling the dependencies, massaging their custom build.go, and landing the go-build-system, but now we are just using the bundled dependencies. They are all free software of course, so it's basically fine although not idiomatic for Guix
<lfam>Heh, it's gratifying that someone else came to the same conclusion. It means I wasn't totally in the weeds
<cage>roptat: turned out the problem was actually a missing dependency on pkg-config, thank you very much for your help and suggestions!
<lfam>I didn't understand what "internal" modules were in Go for a while
<iskarian>Yeah, I'm thinking something along the lines of a "go" origin perhaps?
<lfam>I know that bundling / vendoring is considered a Bad Idea by distros, and in general I agree with that position. But for Go, I now think that there is little or no value in unbundling, considering the effort it requires
<lfam>What would it do, iskarian? Something different from `guix import go foo`?
<iskarian>I mean, in a package, rather than git-fetch or url-fetch, you would have go-fetch
<iskarian>That could intelligently download dependencies, and still leave us with a static source
<lfam>Also, to zoom out, the entire set of values that inform what distros think about bundling should be understood in the context of the history of distros and distro tooling. This context has obviously changed since these values were developed and transmitted throughout the community decades ago
<lfam>iskarian: So, it would read go.mod, fetch everything, and put it into a single tarball or directory tree?
<iskarian>Essentially, yeah. Maybe it would fetch them all into separate store items and symlink them, so we get *some* de-duplication of dependency source
<lfam>It would be nice to fetch them, in order to "trust but verify" upstream's bundling, especially if we are working from tarballs, like in Syncthing
<lfam>At this point I'm really only familiar with Syncthing, which is somewhat atypical. Like I said, they use a custom build script
<zacchae[m]>I'm curious: does anyone here use hurd as your kernel over linux-libre?
<moshy>zacchae[m]: It only really works in qemu for me. And even then I've had trouble getting additional packages to work
<lfam>iskarian: I think it's the only workable approach. Even Debian is allowing bundled dependency trees
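To make the "go" origin idea concrete, here is a purely hypothetical mock-up: neither go-fetch nor go-reference exists in Guix, and the shape simply mirrors git-fetch/git-reference; the hash would stand for the whole fetched dependency tree.

    ;; Hypothetical only -- go-fetch/go-reference are not real Guix procedures.
    (source (origin
              (method go-fetch)                 ; would read go.mod/go.sum
              (uri (go-reference
                    (module "github.com/ethereum/go-ethereum")
                    (version "v1.10.8")))
              (sha256
               (base32 "0000000000000000000000000000000000000000000000000000"))))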
<lfam>And, all the relevant information is contained in the source tree of the thing one is packaging. The network location of dependencies is fundamentally baked into the code ("import paths")
<iskarian>I know macports specifies all dependencies, which are then pre-downloaded by their build system
<iskarian>This allows them to verify the hash of each dependency
<lfam>Right, that's part of my older proposal from 2017. Guix requires hashes to be known in advance before fetching from the network
<lfam>Such things are "fixed-output derivations" in Guix parlance
<zacchae[m]>moshy: I suppose that's about as good as I could expect. I do have a Librem mini I'd like to try it on though
***chipb_ is now known as chipb
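"Hashes known in advance" is exactly what every origin already declares; in Guix terms such a download is a fixed-output derivation. A minimal illustration (URL and hash are placeholders):

    ;; Fixed-output: the hash is declared up front, so the builder may touch
    ;; the network, but the result it produces is pinned.
    (origin
      (method url-fetch)
      (uri "https://example.org/foo-1.0.tar.gz")
      (sha256
       (base32 "0000000000000000000000000000000000000000000000000000")))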
<lfam>iskarian: So, a Guix solution would require either 1) a way for packagers to record hashes of each dependency or 2) using upstream's bundling
<iskarian>Probably 1) would be more palatable to Guix-ers, but would also need a working importer/updater to make it feasible
<lfam>Don't we have a working importer?
<iskarian>A working importer for the new format, that is
<lfam>Or do you mean a Guix packaging style?
<lfam>What do you think of the pseudo-code proposal from my message in 2017?
<lfam>Or, that general approach?
<lfam>"I think we should have a special go-package procedure, used in the inputs field of the calling application, which would build the relevant library modules of the correct Git commit."
<lfam>I'm curious how it compares to what you had in mind
<iskarian>It's definitely workable, modulo needing to record a hash and being able to use "golang.org/x/net/..." now
<moshy>zacchae[m]: SATA/NVMe weren't supported at all when I recently tried. And other hardware support may be missing too, but the kernel will likely boot on most machines.
<lfam>It keeps the dependency graph local to the package
<lfam>It's been a while since I thought about this stuff
<lfam>I wonder about transitive dependencies and how they would be resolved. I don't recall / understand how this is handled idiomatically in Go
<iskarian>I'm not sold on it yet. Since nearly all Go dependencies will just have source in their output, is there value in having separate packages with source and output for them, even if they're only local to the package?
<lfam>I think there is value in terms of the Guix user interface. Like, `guix show` et al
<iskarian>With this approach of manually specifying dependencies, I think we'd want to specify all transitive dependencies manually.
<nckx>Have this wonderful morning, #guix.
<lfam>Regarding transitive dependencies, does go.mod describe a full transitive dependency graph? Or only the first layer / deps that are used directly by the application?
<iskarian>You do have a point. Perhaps what we actually need is better tooling as far as inspecting sources? :)
<lfam>I do think we should write tools to handle most of this stuff automatically
<iskarian>as of go 1.17, go.mod lists all transitive dependencies required to build the main package
<apteryx>iskarian: at the scope of what I've worked with, I like that packages are defined as packages... ;-)
<lfam>And in terms of automating this with tooling, the human would only have to verify licensing and home-page, and write a decent synopsis and description
<lfam>But like I said, it's been a while since I thought about it. And I haven't kept up with new developments in Go in a couple years
<lfam>So a good solution for us could be very different from what I'm suggesting
<lfam>It does seem like go.mod includes all the relevant info required to automate the creation of the dependency graph
<lfam>Which makes sense because, internally, Go has reimplemented Guix, more or less
<lfam>I think that davexunit had a quip about that
<iskarian>There are still a lot of packages which will stay with old go versions, but the importer already handles looking around for dependencies, so that's fine
<iskarian>(I realize now that the Go importer would have been a lot easier if we just called "go list"...)
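A purely hypothetical mock-up of the "go-package procedure" quoted above, just to make the shape of the proposal visible; no such procedure exists in Guix, and the revision and hash are invented:

    ;; Hypothetical: pin one revision of a dependency right where it is used,
    ;; without defining a public top-level package for it.
    (inputs
     (list (go-package "golang.org/x/net"
                       #:commit "0123456789ab"   ; made-up revision
                       #:sha256 "0000000000000000000000000000000000000000000000000000")))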
<lfam>davexunit said something like "every deployment system eventually includes an incomplete and buggy implementation of guix"
<lfam>And then Go implemented a memoized cache for dependency management :)
<lfam>Heh, not saying that Go's is buggy or slow
<nckx>But our incomplete and buggy implementation of Guix is better.
<iskarian>The idea just started knocking around in my head last night, after attila_lendvai brought up packaging go-ethereum, a monster of a package; I'll definitely have to do some thinking on it
<lfam>Go's only handles Go, nckx!
<lfam>Does that mean that ours is definitively better? ;)
<zacchae[m]>moshy: I just read on gnu.org somewhere that it does support SATA (could have been added after you tried), so hopefully that works. I'm mostly worried about my wifi card
*nckx can't handle Go so 🤷
<moshy>zacchae[m]: Then it might not have been detected in an installer in my case. I'm assuming you have an ath9k on your Librem Mini 2, which in theory could be supported under hurd. Although I can't guarantee it
<iskarian>apteryx, can you expand on "I like that packages are defined as packages"?
<iskarian>lfam, what do you think about testing inputs specified with your proposed go-package?
<lfam>Like, inputs used only for testing?
<iskarian>Like, should 'go-package' inputs be built and tested like other packages? or just used for their source?
<lfam>IIRC, Go libraries can't really be built, except when building an application that uses them
<lfam>Like, there isn't something like `go compile`, right?
<lfam>I mean, you can build a given import path
<iskarian>They can certainly be built, but saving/re-using the artifacts is the issue ;)
<lfam>But a Git repository (the atomic unit of Go development) may provide multiple import paths
<lfam>So, how do you enumerate and build each one?
<lfam>Is that understanding correct?
<iskarian>You can do "go build ./..." or "go build github.com/some/package/..."
<lfam>I discussed this in one of those emails that I linked to, I think
<lfam>I just don't think it's something that is done
<lfam>At least that was my impression a couple years ago
<iskarian>Sure, Debian does it. Given an import path "example.com/package", Debian's dh-golang runs "go list example.com/package/...", then removes any packages you specify to exclude, then runs "go install" with that list of packages
<apteryx>iskarian: I meant that I don't see a reason, from my limited experience with Go packaging, why it'd be superior to have them represented as some other data type than a 'package' record from (guix packages), especially given all the complications it'll cause (as lfam hinted at, the tooling we've come to rely on).
<lfam>That is why, initially, Guix's Go packaging created a package per import-path: to build them. But later I realized that this was seriously unidiomatic and was causing difficulties for packagers, so we decided instead to package entire repos. I think it's important to try to work idiomatically or else you can't get any help upstream
<lfam>And when packaging a Go Git repo, you can't just build every command / library with a single command
<lfam>I see that Debian has addressed this somehow
<lfam>I guess I don't see the value. Either the library builds successfully while building the dependent application or it doesn't
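A sketch of how the "build/test everything under an import path" idiom mentioned above could be expressed with go-build-system, assuming the repository was unpacked under its #:import-path; this replaces the default per-path behaviour and is illustration only.

    ;; Sketch: mirror `go build ./...` / `go test ./...` for a whole repository.
    (arguments
     '(#:import-path "github.com/some/repo"
       #:phases
       (modify-phases %standard-phases
         (replace 'build
           (lambda* (#:key import-path #:allow-other-keys)
             (invoke "go" "build" (string-append import-path "/..."))))
         (replace 'check
           (lambda* (#:key tests? import-path #:allow-other-keys)
             (when tests?
               (invoke "go" "test" (string-append import-path "/...")))
             #t)))))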
<lfam>Let me know what I am missing :)
<iskarian>The argument in favor of testing dependencies is that they could have more comprehensive tests which reveal subtle bugs, even though the code compiled
<lfam>Yes, there is some value there
<lfam>I don't think that upstream application developers do this however, so we'd be doing something extra and should weigh the cost and benefit accordingly
<iskarian>However, if we really wanted to do this testing, we could instead test all dependencies in the end-user package
<lfam>Zooming out, one of Guix's values is that we try to provide what upstream intends to distribute, as much as possible. We don't do significant development of packages (as Debian does), and we don't change defaults unless we have to
<lfam>This doesn't fully translate to the process of building packages, especially libraries, but it still should be given some weight
<lfam>So if the upstream development and deployment workflow does not run tests, we shouldn't feel obligated to run them
<lfam>Good muradm! Back from a crazy work week. I saw your message about running the tests
<lfam>iskarian: But, that's not a reason to not run them. Just something to keep in mind
<lfam>A lot of upstreams, especially in new languages like Go, do not see much value in the distro model. So we should try to work in a way that does not piss them off too much, whatever that means in practice :)
<iskarian>lfam, when in doubt, make it an option! ;)
<str1ngs>lfam: to be fair the go module does work on windows too. :)
<iskarian>apteryx, I see your point. There's definitely an advantage insofar as it's easier to replace/modify a single dependency this way.
<lfam>One of Go's advantages is that it works easily on soooo many different systems
<dstolfa>Go likes to use syscalls directly, even on systems that clearly state that it's not a stable interface
<dstolfa>so updating your macOS might result in broken Go
<iskarian>Oh, did you see that Go is planning on making Go 1.18+ bootstrap with Go 1.16?
<iskarian>They don't want to maintain Go 1.4 anymore
<zacchae[m]>How do I find the source code for (gnu packages packagename)? The guix documentation directs you to look at the source for modules like that, but I don't know how to find said source
<iskarian>So we'll have to do Go 1.4 -> Go 1.16 -> Go 1.18
<iskarian>They just say "well, it'll be easy to get a Go 1.16+ binary"
<lfam>Probably about as easy as first building 1.4
<lfam>Was there an announcement, iskarian? Or some other reference I can read?
<iskarian>I believe GCC 11 has the Go 1.16 toolchain, so it'll be possible to bootstrap with GCC 11 instead
<lfam>"Many of the systems Go runs on today aren't supported by Go 1.4 (including darwin/arm64 for M1 Macs)."
<iskarian>Speaking of GCC, currently each custom-gcc essentially recompiles GCC for whatever language it specifies
<iskarian>I wonder if it would be more efficient to have one main gcc package, and simply break up the build result into separate packages?
<iskarian>*one main gcc package which builds all languages we need
<lfam>Hm, interesting. I wonder if some people who know the GCC package better than has some thoughts on that?
<lfam>"... who know the GCC package better than me has some thoughts on that?"
<str1ngs>I think that can be done, but they would be outputs then?
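On zacchae[m]'s question above: `guix edit <package>` opens the defining file directly, and the same information is available from a Guile REPL; a small sketch using "hello" as the example (the printed location shape is approximate):

    ;; From `guix repl': where is a package defined?
    (use-modules (gnu packages) (guix packages))

    (package-location (specification->package "hello"))
    ;; => a <location> record naming gnu/packages/base.scm and a line number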
<iskarian>(current languages we have, in addition to C/C++: fortran, d, go, objc, objc++)
<iskarian>Currently, gcc has the following languages: ada, c, c++, d, fortran, go, jit, lto, objc, obj-c++
<iskarian>The only one we don't have a package for is ada (lto is included in the main GCC)
<iskarian>The only one with a modification is 'jit', for which we use '--enable-host-shared'
*lfam tries major Debian upgrade 🤞🤞
<lfam>I'm doing my servers first... I dread doing it on my laptop
<lfam>Years of customizations and random config files
<dstolfa>heh, can't possibly be worse than upgrading a major version of rhel :P
<lfam>What's that, every 10 years? :)
<dstolfa>especially one where you customized the installer and use about 50 unsupported things
<dstolfa>10 years, but the pain accumulates i'd say :P
<lfam>sshd stopped responding during the upgrade but I needed to answer a prompt
<lfam>Luckily this server is in the building
<lfam>Thanks for composing that!
<iskarian>Alright, I've got to go for now. Thanks for the input on Go, lfam, apteryx!
<apteryx>is the lack of /bin/sh really a "POSIX violation" ?
<dstolfa>Applications should note that the standard PATH to the shell cannot be assumed to be either /bin/sh or /usr/bin/sh, and should be determined by interrogation of the PATH returned by getconf PATH
<attila_lendvai>iskarian, thanks for looking into it! i only have your redirect patch locally. i'll read up on those two issues you linked to. meanwhile, these are the failing ones for me: guix import go --pin-versions github.com/prometheus/tsdb@v0.7.1 and guix import go --pin-versions github.com/ethereum/go-ethereum@v1.10.8
<sneek>attila_lendvai, you have 2 messages!
<sneek>attila_lendvai, iskarian says: Currently, you would make three packages for those aws repos, and for the latter two use '#:unpack-path "github.com/aws/aws-sdk-go-v2"'. AFAICT "guix import go github.com/influxdata/influxdb-client-go/v2" works correctly; what's the issue?
<attila_lendvai>iskarian, the former is eventually initiated by the latter, which takes a couple of minutes
<attila_lendvai>iskarian, there's no issue with the /v2 stuff. that was a haphazard remark from me, ignore it.
<iskarian>attila_lendvai, `guix import go --pin-versions github.com/prometheus/tsdb@v0.7.1' works for me with that redirect patch...
<civodul>apteryx: hi! just saw a message of yours earlier today: i don't think "guix pull --list-generations" should display "indexing objects", that looks fishy
<civodul>actually, it *can* happen: it needs an up-to-date checkout to determine which news entries apply
<civodul>normally you already have an up-to-date checkout so you don't see "indexing objects" (i don't see it on my laptop)
<civodul>but if you "rm -rf ~/.cache/guix", it'll re-clone the thing, i think
<the_tubular>Every time I use a "guix only" feature I'm so impressed lol
***GNUcifer is now known as cehteh
<the_tubular>Anything that is container / vm related really impresses me
<zamfofex>Hello, Guix! Is there a way to build GCC for x86 from x86-64? I'm trying to compile a program that only works on x86 (32-bit), and it requires libgcc. I tried 'guix build --target=i686-linux-gnu gcc-toolchain', but it failed at the "configure" phase with an error telling me that "the C compiler can't create executables" while cross-building coreutils. If I'm doing things wrong, any suggestions are appreciated
<civodul>zamfofex: hi! to build 32-bit binaries on x86_64, you don't need to cross-compile (--target)
<civodul>instead, you can pass "-s i686-linux", which does a native build, only 32-bit
<civodul>(in theory cross-compilation should also work, but it's expensive and this configuration is untested)
<civodul>C quiz: what precedence rules apply to "return result == CPNATIVE_OK && entryType == CPFILE_FILE ? 1 : 0;"?
<lispmacs[work]>is there some way to restrict guix lint -c cve output to packages actually installed on your system?
<zamfofex>civodul: I'm trying it, and it seems to be working (and even downloading substitutes, which is a good sign). That's useful to know, thank you!
<zamfofex>Also, about your quiz: I'd expect for it to be the same as "return (a == X && b == Y) ? 1 : 0" rather than "return a == X && (b == Y ? : 0)" (though I don't know).
<zamfofex>Oops, I forgot a "1" there. I hope I was clear enough, though!
<civodul>zamfofex: yeah, i think you're right!
<civodul>lispmacs[work]: as a rough approximation, you can do "guix lint -c cve $(guix package -p ~/.guix-profile -p /run/current-system/profile -I | cut -f1)"
<roptat>civodul, && has precedence I think
<roptat>like return (result == (CPNATIVE_OK && entryType) == CPFILE_FILE)? 1: 0;, but not sure
<roptat>also fun fact, a boolean is an integer, so that's not a type error
<dstolfa>roptat: == > && > ?: in C, so you'll end up with ((result == CPNATIVE_OK) && (entryType == CPFILE_FILE)) ? 1 : 0, and i sadly know this because a lot of code does really annoying stuff by omitting parentheses it should never omit by trying to be "clever"
<dstolfa>apparently being clever implies that you have to make the code impossible to read quickly to some programmers...
<roptat>mh... I thought I saw that issue with == and &&, but maybe that was a different language
<zamfofex>I think it makes sense that logical boolean operators would bind more loosely than comparison operators.
<zamfofex>Since that allows e.g. "a < b && c == 0"
<zamfofex>Imagine if that became "a < (b && c) == 0".
<zamfofex>That would be fairly unintuitive, I think.
<zamfofex>I'd expect "?:" to bind very loosely. I'd imagine the only thing that would bind looser than it within an expression would be the comma operator, but maybe there is something else.
<dstolfa>i think it's just the comma, zamfofex
*dstolfa notes that people should just use parentheses in any case where they start chaining many operators
<civodul>roptat: so i'm looking at the disassembly of that Java_java_io_VMFile_isFile thing
<civodul>the one in master is longer than the one in core-updates
<zamfofex>dstolfa: Fair enough. I wonder how e.g. "a ? b, c : d" parses, or if it fails.
<roptat>could you send me both versions?
<roptat>it could just be that gcc is better at optimizing on core-updates than on master, it's not the same version, is it?
<dstolfa>zamfofex: that would parse, though the behavior is hilarious
<dstolfa>zamfofex: IIRC, what would happen here is you'd get c or d depending on a being true
<zamfofex>More "on topic", it seems like the package 'gcc-toolchain' doesn't actually include 'gcc:lib'. Is there any way to refer to 'gcc:lib' from the CLI? Or otherwise be able to figure out a way to download the 32-bit version to the store?
<zamfofex>dstolfa: Note that if "b" were "b()" or "b++", it would be evaluated for its side-effects.
<zamfofex>I use the comma operator sometimes to avoid using curly braces if their bodies would be small enough. E.g. "if (a) b++, c++;"
<zamfofex>I'm careful to avoid letting it become too large, though, as it can quickly grow to damage readability, I think.
<civodul>so yes, it could be a broken optimization
<zamfofex>Is there a reason for "gcc:lib" to be so difficult to refer to?
<dstolfa>zamfofex: i've seen code like w = z = ++y, q = ++x; before quite often. it does what you'd expect but my god why not just use a semicolon
<civodul>right after the ENOENT from cpio_checkType
<zamfofex>dstolfa: I think the problem (in my case) is that the semicolon would end the "if" statement. To use it, I'd need to wrap the statements in curly braces, which is advantageous if there are more than two or three short statements, but I think it adds unnecessary clutter if the statements are short enough. Also: If you want to continue, I think it'd make sense to do so in a DM, since I think this is a bit too off-topic here.
<dstolfa>fair, we can just end it here since it was just a fun little convo, coming up with C monstrosities is indeed off-topic :D
<zamfofex>It was fun, yeah! I always enjoy talking about syntax and whatnot.
<lispmacs[work]>I see we have a lot of medium/severe security fixes still in the staging branch. How long does stuff usually stay in staging?
<civodul>important fixes go to master, possibly as grafts
<zamfofex>In case anyone comes across this conversation in the future (since this is a question I have had multiple times before), in order to find the store path of 'gcc:lib', it suffices to run "gcc --print-file-name=libgcc.a", and it should give you a path within the store entry for "gcc:lib". In my case, I had to run that for the 32-bit GCC from the store.
***sneek_ is now known as sneek
<roptat>also, the pair of parentheses is probably not needed
<roptat>looking at the disassembly, it looks like gcc optimized away some of the comparisons at the end
<roptat>only two instructions between the last call and the moment it restores the stack
<civodul>hmm with this patch the ant-bootstrap build fails with "Could not load the version information."
<civodul>which suggests a NullPointerException while trying to access version.txt (?)