IRC channel logs



<gabr000>i'm new to IRC and guix as well
<gabr000>english is not my first language so ... that's it
<gabr000>i'm trying to package Lutris and running into some trouble in the check phase
<gabr000>passing #:tests? #f doesn't work very well either
<gabr000>i wonder if someone has already tried to package lutris and has advice or can help
<podiki[m]>I've seen a package def somewhere, though even as free software not sure if it fits with guix given what it is for?
<podiki[m]>but I don't think it ran tests either (for whatever reason, I did not try it)
<robin>the lutris search shows...~90 libre games out of 13.5k total
<robin>gabr000, if you paste your package definition (via a pastebin), someone may be able to help
<Guest73>has anyone managed to use xinit without a display manager
<Guest73>sx,xinit,startx dont work for me
<Guest73>they give errors
<Guest73>using something like slim sucks
<excalamus>good evening, guix
<gabr000>hey, trying to paste my code in the debian pastebin. how do i get the URL?
<gabr000>i sent it but don't know how to get the url...
<podiki[m]>if Guest73 checks later (or here under new name), I don't know exactly, but there were messages on the mailing list (either help or devel, can't remember) about it
<jwoe324><drakonis> "jwoe324: why not ask the nix..." <- yeah i asked and people weren't sure. you might be right about the init system
<drakonis>i think systemd provides nscd functionality
<drakonis>yup, its that
<drakonis>nixos defaults to using resolved and networkd so that's why
<jwoe324>actually according to my tests i think nix needs nscd too it's just not documented
<excalamus>gabr000, you should be able to just paste your snippet and press send. The URL will update with some numbers. It's the new URL you will want to share.
<gabr000>now i can't send the file because of the spam filter, and i can't find the url where it's stored either. xD
<psyklax>Is there some good documentation somewhere just for being a user of Guix? I've got some basic questions, like: where am I supposed to put dotfiles? What's the correct way to source a .profile?
<excalamus>psyklax, in my experience, no, beyond what's in the manual. I've found it best to try things out and ask when it doesn't go right. I also try to document and publish what I do so that others can (hopefully) benefit.
<psyklax>How do you set it up to automatically source .guix-profile/etc/profile ?
<psyklax>what files in $HOME are being sourced?
<excalamus>that's not something I'm familiar with, but have you seen this?
<excalamus> there's a section about sourcing a profile
<excalamus>but I only glanced at it
<psyklax>Looks like it should be sourced by default, because /etc/profile looks for $HOME/.guix-profile/etc/profile and sources it if it exists. Just need to reboot I guess... Will have to test
<psyklax>Seems to be so. The issues I was having before were not because of the profile not being loaded, but because I used the library wrong
<excalamus>cool, glad to hear that it seems to be working for you now.
<psyklax>Guix is going to be a mind-bending journey. Nothing is familiar here.
<excalamus>welcome aboard :)
<lfam>Is anybody having trouble building a Git source tree of Guix since the recent translation updates?
<lfam>Specifically, "make[2]: *** [Makefile:4995: doc/] Error 1"
<lfam>I fixed it with `rm -r po && git checkout po`
*lfam shrug emoji
<lfam>Oh no, it still doesn't work
<lfam>I'll try it from a fresh checkout
<apteryx>lfam: perhaps ./bootstrap would have fixed it
*apteryx is getting more confident with core-updates-frozen-batched-changes, although it'd be nice if the CI assisted :-). not sure why it keeps failing to evaluate the branch. 'make as-derivation' passes locally
***LispyLights is now known as Aurora_v_kosmose
<podiki[m]>CI would be helpful, then someone can switch to it without rebuilding everything locally ;-)
<podiki[m]>after that branch is built and merged, any major blockers left on core-updates-frozen? or is it just fixing more individual packages to have better coverage before merging to master?
<apteryx>I know mothacehe was working on fixing the failing system tests
<apteryx>after we've fixed the tests and are confident the branch is not far from master in terms of coverage, I guess the plan will be to merge to master, and then branch off to an RC branch soonish.
<apteryx>then polish things and prep the release material
<lfam>apteryx: I actually can't build from a fresh checkout
<lfam>I wonder if it is working for anyone
<lfam>And previously I was trying with `make clean && ./bootstrap && ./configure ...`
<lfam>I think the full error message is this: <>
<podiki[m]>thanks for the info apteryx, we're getting closer!
<podiki[m]>(and for the hard work on that rebuild branch)
<apteryx>podiki[m]: :-)
<apteryx>podiki[m]: I just repushed the branch with what I'm testing
<lfam>I get the same error building a fresh checkout on the berlin server
<lfam>Is anyone able to check if they can build a fresh Git checkout?
<apteryx>seems you found a real issue, lfam!
<lfam>Seems like it, but then again, the CI was able to build what it needed:
<apteryx>builds fine in a dirty tree
*apteryx needs to get some sleep
<apteryx>I hope someone else can try it in a fresh checkout to compare!
<lfam>I filed a bug: <>
<qzdlns[m]>morning guix!
<abrenon>hi guix
<civodul>Hello Guix!
<sneek>Welcome back civodul, you have 1 message!
<sneek>civodul, apteryx says: I've refreshed the branch with the latest fixes; the gdk-pixbufs loaders commits are on top
<civodul>efraim: hey! i wonder if cross-compilation support for Go back in de4f5df95db6c2e7071bf5e44c0d7ae928da1025 broke Go packages on non-x86_64 platforms
<civodul>it fails with "go install: cannot install cross-compiled binaries when GOBIN is set", even though we're not cross-compiling
<efraim>I'll try to take a look at it
<efraim>also GOARCH is set to amd64, looks like it knows it's not natively i686
<efraim>is this on master? I was able to build for aarch64 on aarch64, will try other architectures
<civodul>efraim: that's on master, yes
<civodul>how's GOARCH set?
<efraim> and 187, and something comparable in go-build-system
*efraim is on a phone ATM
<efraim>I'd have to check git log but I think we set GOBIN before too
<efraim>It looks like GOARCH is being set wrong and it's trying to compile from i686 to x86_64
<efraim>I wonder if we should use an 'or': use goarch for GOARCH, or fall back to GOHOSTARCH
<efraim>(setenv "GOARCH" (or goarch (getenv "GOHOSTARCH")))
<wigust->hi guix
<nckx>Hi wigust.
<Guest26>hint: Did you forget a `use-modules' form?
<Guest26>hint: Did you forget a `use-modules' form?
<Guest26> error: rust-cargo: unbound variable
<Guest26>im getting this error even tho i included the crates-io module
<Guest26>please help
<nckx>Guest26: There is no rust-cargo variable.
<nckx>There is rust-cargo-0.53.
<Guest26>uh do i have to include that number at the end?
<Guest26>guix search cargo literally shows me there is a package called rust-cargo
<Guest26>provided by crates-io module
<nckx>Guest26: So ‘rust-cargo’ is the package name, that is, a string in the (name "…") field, whilst rust-cargo-0.53 is the name of the Scheme variable:
<nckx>The CLI shows name strings, not the variable names you use when programming.
<nckx>They usually match but here, for some reason unknown to me, the variable name includes the version.
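For readers of the log, nckx's point can be sketched as a simplified package definition (hypothetical, not the actual crates-io source):

```scheme
;; Simplified sketch, not the real crates-io definition: the Scheme
;; variable name (used in code and module exports) carries the
;; version, while the (name …) field holds the string that the CLI
;; tools such as `guix search` display.
(define-public rust-cargo-0.53     ; <- variable name, for Scheme code
  (package
    (name "rust-cargo")            ; <- package name, shown by the CLI
    (version "0.53.0")
    ;; …
    ))
```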
<Guest26>i see
<Guest26>can u help me with 1 more thing plzplz
<Guest26>i've been struggling for the past 3 days
<nckx>Maybe? ☺
<Guest26>to get xorg to work without a display manager
<Guest26>i simply wanna use startx lol
<nckx>I use Wayland with a DM, I'm really not the right person to ask, sorry.
<nckx>I know some people do (but it's a minority).
<nckx>If you can't find them here, try mailing help-guix at as well.
<Guest26>i found a few mails about the issue
<Guest26>none of them seemed to help
<Guest26>one offered a solution that didnt work
<Guest26>and others just said its not easy to do it lol
<nckx>That's certainly true. Despite your ‘just’ above, it's not actually the easy way or even well-supported on Guix, I'm afraid (as you found out). If those with experience can't help you I certainly can't.
<rekado>FWIW I’m getting the same error as lfam when building the master branch.
<rekado>I’m using “guix environment guix”, but i guess the problem is that I’ve done “guix pull --branch=core-updates-frozen”. Will try with an older Guix.
<excalamus>good morning, Guix
<nckx>Hi excalamus.
<zacque>Hi, I'm learning to build the scdoc application ( in an isolated guix environment but keep getting this error: "ld: cannot find -lc" during compilation
<zacque>Reproducible steps:
<zacque>Main thing is that is present in the $GUIX_PROFILE/lib directory
<zacque>And it doesn't work even if I pass the library path as a linker flag to the gcc compiler
<efraim>civodul: I've pushed a fix for the go-build-system so x86_64 should be able to build natively for i686 again
<efraim>I tested with aarch64 and armhf
<zacque>zacque: I can build the package successfully outside the guix environment
<nckx>rekado, mroh: Could/did you try with an explicitly UTF-8 locale?
<rekado>nckx: I have LANG=en_US.UTF-8
<rekado>and I did “guix pull --branch=master -p /tmp/g” and then entered the environment with /tmp/g/bin/guix environment guix
<nckx>I get the same error as lfam but when I set LC_ALL=en_IE.utf8 it builds fine.
<rekado>zacque: are you using the gcc-toolchain package?
<nckx>Not sure if UTF-8 is still goodthink.
<jpoiret>looks like it wants to statically link libc instead of dynamically, no?
<jpoiret>and sure enough there's no .a
<euandreh>zacque: the problem is the -static LDFLAGS
<euandreh>jpoiret beat me to it :)
<rekado>nckx: I guess I’ll have to run ./bootstrap again…? Because after export LC_ALL=en_US.utf8 make still complains.
<nckx>Not sure: I ran everything from scratch, from git clone into a ‘/tmp/guix-with{out,}-utf’ directory, just to ward off the evil state gods.
<euandreh>zacque: try removing line 4 of the Makefile (the -static flag addition) between steps 4 and 5
<zacque>rekado: Ah, yes, I'm using the gcc-toolchain package
<zacque>jpoiret, euandreh: Ah, I see, I'll give it a try
<rekado>hmm, no success :-/
<apapsch>Hi! Docs tell me on static-networking-service: This procedure can be called several times, one for each network interface of interest.
<rekado>I’ll try a fresh git clone, but that’s annoying
<nckx>It is.
<apapsch>Though the IP parameter is a single address. This makes the procedure not usable for dual IPv4 and IPv6 configuration on a single interface, no?
<excalamus>I built a Plover plugin as a package. Part of that process was needing to update the definition for Plover itself to include (native-search-paths (package-native-search-paths python)). Without that, the plugin would not be recognized (plugins need to be in site-packages). The definition for that and all the work surrounding it is here: Now, I'm trying to update
<excalamus>the Plover definition to a different version and to fix problems like the missing icons. I've started over with a new definition: The new definition builds and I've installed it. The strange thing is that it recognizes plugins! This is not what I expected, since the definition doesn't have native-search-paths.
<excalamus>does package-native-search-paths change something globally? Or I guess it wasn't actually needed?
<civodul>apapsch: hi! yes, static-networking-service is not very capable ATM
<vivien>apapsch, I have this workaround:
<civodul>apapsch: i've been working on making it more expressive, using roptat's guile-netlink
<civodul>i hope to submit a patch in the coming days
<civodul>several, even
<apapsch>great, thank you both!
<civodul>what vivien shows is a great option in the meantime!
<jpoiret>zacque, euandreh: what's weird to me though is that nothing tells the cc to link libc statically
<vivien>You can add it as a service: (service static-ipv6-service-type (static-ipv6-configuration (address "...:...") (interface "<interface name>")))
<jpoiret>excalamus: native-search-paths doesn't actually work that way
<jpoiret>search-paths in general are specified on a package, and every package that depends on it has their own path added
<jpoiret>you shouldn't need to add the python search-path to your package
<euandreh>jpoiret: Actually it does, line 4 of the Makefile adds -static to $(LDFLAGS)
<jpoiret>-static doesn't link libc statically iirc
<apapsch>vivien: hoovering your service now into my channel, thanks :-)
<euandreh>jpoiret: On GCC, I think it does
<apapsch>static networking seems useful against dhcp, as the latter periodically resets /etc/resolv.conf, overwriting custom config after starting dnsmasq
<vivien>Sometimes people complain about guix not pulling from "dumb" git HTTP servers, I’ve discovered that the solution is to simply keep a mirror of your dumb channels on disk, and have a cron job synchronize them with the regular git program.
<excalamus>jpoiret, I see you're correct. When I reinstall the current Guix Plover, which doesn't have the python search-path in it, the plugin is still recognized.
<rekado>on core-updates-frozen I get “guix build: error: gcry_md_hash_buffer: Function not implemented”
<rekado>just trying to run ’./pre-inst-env guix build salmon’
<jpoiret>excalamus: does that clear up how search-paths work?
<jpoiret>tbh they're not the easiest thing to understand in guix
<euandreh>vivien: the problem with that is that you can't have a channels.scm file that fully declares the desired channels
<excalamus>I'm confused now because how does Plover know that the plugin exists? This is the plugin definition:
<vivien>euandreh, I don’t understand. In your channels.scm, you use the path to the local git checkout.
<vivien>(not checkout, clone, sorry)
<excalamus>I mean it must have been something else that caused it to not be recognized. Since I reinstalled Plover from Guix, it seems to work, so yeah, dunno
<zacque>jpoiret, euandreh: Thanks, it works after commenting that line out. But I wonder why the package definition (see "guix edit scdoc") doesn't need a patch to remove that line? Asking since I'm learning to write its package definition
<euandreh>vivien: instead of "guix time-machine -C channels.scm -- environment -m manifest.scm -- cmd", having a "git -C ~/path/to/channel pull && guix environment -m manifest.scm"? Hmm, I didn't think of adding to the channels.scm the path of the local disk
<vivien>It’s more git fetch origin +refs/heads/*:refs/heads/* --prune
<nckx>euandreh: Your cron job/wrapper script could parse/sniff channels.scm to get the desired commit, but I feel like ‘solution’ is a tongue-in-cheek word anyway.
<vivien>But that’s done in a cron job, so you don’t have to run it yourself :)
<euandreh>zacque: that is weird, I thought the package was new, it indeed doesn't do that
<euandreh>vivien, nckx: yeah, I have a bit of a knee-jerk reaction to this topic, because of Git itself, and how it doesn't provide any C API, so libgit2 re-implements it but chose not to do so for the dumb HTTP protocol.
<vivien>(I still think the dumb http protocol is the smartest one)
<nckx>I thought upstream was trying (unsuccessfully, but whey) to deprecate the dumb protocol since forever.
<euandreh>I like it the best too, just a static file server
<nckx>It's certainly the easiest to set up when you just don't care about setting up a git CGI server.
<euandreh>nckx: they're deprecating the "Git protocol"
<nckx>Ah, git://?
<euandreh>not the "Dumb HTTP" protocol
<vivien>nckx, even with a CGI server, cgit will still serve dumb HTTP.
<nckx>Not used cgit.
<euandreh>vivien: I don't think that's true, CGit uses the smart one, doesn't it?
<nckx>By default or only?
<euandreh>by default, I guess
<nckx>I'd expect to hear more complaints if it were only.
<euandreh>I mean, without opting in or out
<vivien>To be honest, maybe I’m destroying any cgit effort to serve smart HTTP by running behind a reverse proxy or other configuration errors
<euandreh>(which is what by default means :D)
<jpoiret>excalamus: when you build a profile, you get a union of all installed packages in the profile, with the search paths set to $GUIX_PROFILE/thesearchpathspec, so as long as you installed the plugin, it's gonna be visible to python
<euandreh>vivien: where are you running it?
<vivien>Well, is cgit behind a reverse proxy
<massn00b[m]>Hey can someone with the default kernel cat out their kernel config and tell me whether the ATH9K flags are set?
<nckx>But you can check it yourself too:
<euandreh>vivien: It is indeed using the dumb protocol, and now that I checked, so is my cgit instance
<nckx>(Also checked /run/current-system/kernel/.config on a VPS running the default kernel; it's not unset somehow.)
<euandreh>vivien: I remember that this wasn't the case before the last time I touched its configuration, now I'm left wondering what happened
<euandreh>or just plain evolution, given that the dumb protocol is better! :D
<vivien>I also have a bunch of ATH9K including CONFIG_ATH9K=m
<vivien>(in /run/current-system/kernel/.config)
<vivien>euandreh, since savannah’s cgit serves smart HTTP, I think it’s definitely a configuration problem.
<nckx>Why's that, euandreh?
<nckx>I am a collector of rare & fine opinions.
<nckx>I'd love to have one about on-the-wire git protocols, which I currently do not.
<euandreh>vivien: yep, it is indeed
<euandreh>nckx: let me add the following to your collection, then!
*nckx dusts off the pedestal.
<euandreh>IMHO, the "Dumb HTTP" is better than both the "Smart HTTP" and "Git protocol". That is because there is nothing on the protocol itself that is about Git: the client just requests object by object, and the server responds. The server doesn't even need to know that it is a Git repository, those are all just static files.
<singpolyma>But why would we use any of those protocols at all when we have the ssh transport?
<euandreh>I acknowledge the need for some optimizations, like pulling big repositories or pushing changes, but I'd rather have those addressed someway else instead of having a new agreement on how to exchange things
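euandreh's description can be illustrated with a small sketch: under the dumb protocol the client derives object paths itself, so the server needs nothing beyond static files (the helper below is illustrative, not part of any Git library):

```python
# Sketch of the client-side path derivation behind Git's "dumb" HTTP
# protocol: an object's location is a pure function of its ID, so a
# plain static file server can host the repository.

def loose_object_path(sha1_hex: str) -> str:
    """Repository-relative path of a loose object for a given ID."""
    if len(sha1_hex) != 40:
        raise ValueError("expected a 40-character SHA-1 hex string")
    # Git shards loose objects into 256 directories keyed by the
    # first two hex digits of the object ID.
    return f"objects/{sha1_hex[:2]}/{sha1_hex[2:]}"

# A dumb-HTTP client then simply issues plain GETs, e.g.:
#   GET /repo.git/info/refs        (branch tips, a text file)
#   GET /repo.git/objects/01/2345...
print(loose_object_path("0123456789abcdef0123456789abcdef01234567"))
```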
<excalamus>jpoiret, okay. So, my best guess as for why things didn't work before but do now is that my profile was rebuilt in-between.
<euandreh>singpolyma: HTTP protocol is more common for public repos
<excalamus>does that sound reasonable?
<singpolyma>euandreh: I know it is. Which is very sad
<euandreh>singpolyma: Oh, I didn't realize you were going that way. You mean anonymous SSH clones?
<euandreh>hmm, I have never set up an SSH server that accepted anonymous connections; it sounds, I don't know, different.
<euandreh>Now I'm inclined to do it.
<euandreh>singpolyma: Why do you prefer anonymous SSH for that?
<euandreh>nckx could use your view for their collection too :)
<singpolyma>I prefer ssh for most things. It's a just-smart-enough secure protocol with common clients for every platform
<nckx>& thanks euandreh
<singpolyma>Much better than layering TLS hacks on HTTP or building webapps that just do file transfer in the end
<euandreh>nckx: :)
<euandreh>singpolyma: but why is adding a TLS layer a hack?
<vivien>Git is often used in a way that each participant has a full copy, slightly modified. Publishing a git repository is in fact just publishing your version. So, there is a natural distinction between you and other people. Only you control (write) the repository, and the others just fetch. So, it's natural to have 2 different access protocols: one for you (SSH), and one for everyone else (dumb HTTP).
<vivien>Since others don't need to push to your published copy, authentication by HTTP is not necessary.
<euandreh>I mean, you could compose a TLS program with an HTTP server, and the server wouldn't even know about TLS.
<vivien>I understand that different people have different uses of git, and I know that’s the case for guix, for instance, where a lot of different people have write access.
<euandreh>singpolyma: (I didn't try this actually; I saw it as a suggestion on the homepage of quark)
<euandreh>vivien: but the write and read don't need to be different protocols. You can do read and write via HTTP with HTTP auth, or read and write via SSH with anonymous SSH access.
<vivien>Right, but if you publish different repositories, all of them will have the same SSH key configured to accept writes. So better use the keys of the system.
<singpolyma>euandreh: TLS always feels gross to me, with x509 and CAs
<vivien>singpolyma, I agree with that.
<singpolyma>vivien: I don't really like using git push ever. I guess it makes sense for making public copies sometimes given current network conditions, but I usually prefer pull/fetch for actual work
<excalamus>ugh, have to work now...
<shtumf[m]>I have 2 SSDs; one has Manjaro GNU/Linux installed on it, with grub. I want to install GuixSD on the second disk, and this probably means grub will also be installed on that second disk. After the installation of GuixSD is done, what happens? Does the GuixSD grub overwrite the first disk's Manjaro grub, or will there be a separate grub on each disk? What do you suggest? Maybe the real question is: when GuixSD installs grub on ssd2, does
<shtumf[m]>it also overwrite the Manjaro GNU/Linux grub on ssd1?
<shtumf[m]>I suspect that all I have to do is manually add entries for both SSD disks to both grubs
<shtumf[m]>could it also be done so that there is just one grub, and I add entries for both SSD disks in that grub?
<roptat>hi guix!
<vivien>singpolyma, I like the distinction between what is public and what is private in your repository. That way, you can commit more carelessly because you’ll be able to rebase your work before pushing.
<roptat>shtumf[m], you can choose where grub is installed by guix, that's the targets field in the bootloader specification, in your os specification
<roptat>also, it's called "Guix System", not GuixSD anymore ;)
<shtumf[m]>thank you
<roptat>you'll find more documentation here:
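The targets field roptat mentions sits in the bootloader part of the os declaration; a sketch (the device name "/dev/sdb" is a hypothetical stand-in for the second SSD):

```scheme
;; Fragment of an operating-system declaration. "/dev/sdb" is a
;; hypothetical example; use the actual device of the second SSD.
(operating-system
  ;; …
  (bootloader
   (bootloader-configuration
     (bootloader grub-bootloader)
     ;; Install GRUB only on the second disk, leaving the existing
     ;; GRUB on the first disk untouched.
     (targets (list "/dev/sdb")))))
```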
<nckx>Hmm: “current policy: frequency should be within 1.20 GHz and 1.20 GHz. The governor "schedutil" may decide which speed to use within this range.”
<nckx>Or, y'know, not: “current CPU frequency is 3.03 GHz.”
<nckx>Am I missing something?
<kozo[m]>Good Morning Roptat
<vivien>Dear guix, can I open a local server during the test phase of a package build?
<rekado>vivien: I think so. We have a couple of packages that start an X server, for example.
<vivien>Thank you
<qzdlns[m]>hi guix
<nckx>Hi qzdlns[m].
<mothacehe>hey guix!
<mothacehe>civodul: I have the vague impression that the latest publish patch (separate narinfo thread) is improving the situation on berlin, maybe you could confirm it on your bordeaux server?
<qzdlns[m]>hey mothacehe
<mothacehe>hey qzdlns[m]!
<nckx>mothacehe: Which situation?
<mothacehe>nckx: the several "cannot build missing derivation" errors on the CI, the situation that is discussed here:
<rekado>oof, there are still a lot of packages that fail to build on core-updates-frozen. Will try to fix some of them soon.
<rekado>blender, gerbv, pcb (because of gerbv), awscli, diffoscope, gxtuner, pdfpc, peek, lepton-eda (because of gerbv), qpdfview, and lilypond
<rekado>(avr-gcc also might be broken)
<rekado>(the hurd package is also broken)
<nckx>mothacehe: Awesome.
<nckx>I didn't know that was related to threading.
<mothacehe>rekado: re hurd package: doh fixed the childhurd test on core-updates-frozen a few days ago
<mothacehe>nckx: the underlying issue is that the publish server takes more than 10 seconds to serve some narinfo/nar requests and the connection is but by nginx
<mothacehe>computing narinfo in the main publish thread was problematic, but looks like there are other problems out there
<mothacehe>(reads taking more than 2 seconds on berlin as noticed by civodul)
<mothacehe>*cut by nginx
<nckx>/var/log/messages on berlin is -EWCTAKESWAYTOOLONG lines long and still doesn't contain the last boot messages.
***chris is now known as chrislck
*nckx wanted to see what kind of storage hardware was up in it.
<chrislck>n00b question: how do packages work with auto-updating sw like firefox?
<jonsger>is there no logrotate nckx?
<nckx>wc -l is *still* running. I kilt it before it starts timing out Cuirass or something.
<nckx>jonsger: Probably. My point was more a general lament.
<jpoiret>chrislck: auto-updating is disabled for those. They wouldn't be able to do it either way as the store is read only
<nckx>We might be logging a tad more than is needed (but then you never know beforehand, do you).
<chrislck>ah tx
<nckx>So what kind of storage hardware is up in this? 2s reads seem super suspish…
<jonsger>something from Dell I think...
<nckx>The words HBA are rattling around in my brain from long ago, but that's about it.
<nckx>Indeed a Dell thing.
<civodul>cbaines: hey! i was looking at reproducibility of a recent revision:
<civodul>IIRC, you mentioned that data.guix would miss substitute info from ci.guix because it wouldn't retry fetching narinfos, is that right?
<civodul>and because ci.guix doesn't send notifications
<attila_lendvai>civodul, i replied to the git-auth issue. i'll be more-or-less around in the next hour or two if you want a more interactive feedback loop. but i'll be around in the upcoming days, too.
<rekado>“festival” also fails to build. It’s the usual linker error, so should be easy to fix.
<rekado>any ideas what this might mean? “guix build: warning: failed to load '(gnu packages browser-extensions)': Function not implemented”
<rekado>and: “guix build: error: gcry_md_hash_buffer: Function not implemented”
<rekado>that’s what I get trying to build a package on a checkout of core-updates-frozen.
*attila_lendvai has just noticed that one of the commit messages is nonsense (using dynamic-wind)
<old>any recommendation for an alternative to xdg-open? Hopefully one that can be configured via Scheme?
<sailorCa`>Hi, correct me if I'm wrong. I'm going to define a go package. When I do a `guix build` it spawns a container. The container by default has no network access. That's why I've got a problem in the check phase (which requires some network resources).
<nckx>s/by default //
<sailorCa`>so, is there any solution except to skip check phase?
<cehteh>what kind of network does that test need?
<nckx>Provide all test requirements as native-inputs? I don't know if that's feasible; I also don't know enough about Go to know if it will then stop trying to invoke ‘go build’ and/or calling out to the network.
<nckx>Disregard that ‘go build’ bit, I misread.
<sailorCa`>it wants to download a file "" from the internet
<cehteh>tests should not rely on external resources or make network requests, esp not from the internet
<nckx>You could try to provide that as e.g. a ("jquery-for-tests" ,(origin …))
<cehteh>try hosting that file within your test environment
<nckx>and put it wherever the test suite expects it to exist.
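nckx's origin suggestion, sketched out (the URL, hash, and destination path below are hypothetical placeholders, not values from any actual package):

```scheme
;; Sketch: ship the file the tests want to download as a fixed-output
;; origin, then place it where the test suite looks for it.
;; URL, hash and target path are placeholders.
(native-inputs
 `(("jquery-for-tests"
    ,(origin
       (method url-fetch)
       (uri "https://example.org/jquery.min.js")   ; placeholder URL
       (sha256
        (base32 "0000000000000000000000000000000000000000000000000000"))))))

;; …and in #:phases:
(add-before 'check 'provide-test-jquery
  (lambda* (#:key native-inputs #:allow-other-keys)
    (copy-file (assoc-ref native-inputs "jquery-for-tests")
               "testdata/jquery.min.js")))  ; hypothetical location
```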
<sailorCa`>that's not my library, so I would prefer to avoid patching the source code
<sailorCa`>just skipping the test is enough for now, thanks
<cehteh>patch the test?
<nckx>If the test suite unconditionally downloads it even if it exists in that location, you can use substitute* on the build script to disable the download.
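A sketch of the substitute* approach nckx describes (the file name and URL pattern here are hypothetical examples):

```scheme
;; Sketch: rewrite the hard-coded URL so the test reads a local copy
;; instead of hitting the network. File name and URL are made up.
(add-after 'unpack 'use-local-test-data
  (lambda _
    (substitute* "internal/testdata_test.go"
      (("https://code\\.jquery\\.com/jquery\\.min\\.js")
       "file:///tmp/jquery.min.js"))))
```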
<sailorCa`>yep, a patch is an option
<nckx>Why not?
<nckx>I don't follow your reasoning.
<nckx>None of the packages that Guix patches during the build are ‘ours’.
<cehteh>could it work with file:// like urls?
<nckx>Heh, fun idea‌ ☺
<sailorCa`>a url is hardcoded, so I defintely need to patch it
<sailorCa`>ok, thanks
<cehteh>and maybe it expects http
<cehteh>then you can patch the test out or (somewhat ugly) let the build depend on a local webserver on http://localhost
<cehteh>but tests going over the internet are a nogo anyway for zillions of reasons
<sailorCa`>I agree
<nckx>Are you planning on submitting your package for inclusion in Guix, sailorCa`?
<sailorCa`>For now I want to learn how to define and build packages. Then I'll create my own channel, and if the results are good I'll send it upstream.
<singpolyma>Should adding ,gcc to native-inputs not be enough to get a cc command?
<nckx>No. Patch the build system to use (cc-for-target) as the ($)CC.
<nckx>(And remove the explicit gcc input unless you need a non-default GCC version.)
<nckx>sailorCa`: Great!
<nckx>Feel free to punt (by disabling tests) if it helps you get up & running; we can always discuss re-enabling the test if you do submit it for inclusion.
<nckx>If it's ‘too hard’, #:tests? #f is acceptable, it's just the last resort.
<sailorCa`>ok, thanks
<cehteh>for a long time i wished for a '--no-check' option for guix package which disables tests via a single central flag
<cehteh>installing/building on slow hardware is a pain when a package defaults to running an extensive test suite
<podiki[m]>agreed, even on faster hardware sometimes tests take longer than building the package, and it can be handy to disable them all for testing changes
<singpolyma>nckx: I'm using ruby-build-system, which is built on gnu-build-system at least partly. Should I expect one of those to set $CC ?
<nckx>& the only cc-for-target usage in ruby.scm sets a cc environment variable (lowercase, sic), which doesn't exactly look terribly standard.
<nckx>But maybe it's worth a try if CC does nothing.
<singpolyma>I guess gnu-build-system just assumes ./configure will handle it?
<robin>huh, my fontconfig cache managed to get *really* out of date (e.g. fonts i installed weeks/months ago weren't showing up in gnome-font-viewer, gucharmap, etc.)
<singpolyma>nckx: Can I set an environment variable during a phase in some "easy" way?
<nckx>Trivially! (setenv "CC" ,(cc-for-target))
<nckx>Make sure you're quasiquoting arguments for that to work (i.e., `, not ').
<singpolyma>Right, sorry, I meant during an existing phase
<nckx>Hm, no, you'll have to add-before 'build or so.
<singpolyma>ah, and it won't reset the env between phases?
<nckx>You can setenv and chdir between phases and it will persist.
<singpolyma>ok, cool
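Putting nckx's pieces together, the phase might look like this inside the package's arguments; a sketch, not the final patch:

```scheme
;; Sketch: set CC before the build phase; the environment persists
;; across later phases. Note the quasiquote (`) so that
;; ,(cc-for-target) is evaluated when the recipe is built.
(arguments
 `(#:phases
   (modify-phases %standard-phases
     (add-before 'build 'set-cc
       (lambda _
         (setenv "CC" ,(cc-for-target)))))))
```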
<nckx>The only way to know for sure what your Ruby build system is doing is to grep the source for -riw cc or so.
<singpolyma>seems to have worked. thanks :)
<podiki[m]>robin: I keep seeing this come up early on, in the "forgotten issues" section
<podiki[m]>it would be nice to have some after install hooks or reminders of things to do manually after installing some packages
<robin>seems like a possible candidate for an install hook
<raiguy>libvirt can't find network "default". i had to copy it from my system profile to /etc/. annoying.
<robin>cehteh, that sort of exists, as the --without-tests package transformation option. but then it will be a different store item than if built without the option, so probably not useful for most things in practice (installing applications with slow test suites, maybe?)
<lilyp>hmm, you could try running a substitute server that builds --without-tests variants
<lilyp>though where to actually cut off tests is probably an issue
<jab>Hey #guix!
<jab>I am currently working on improving the opensmtpd-service.
<jab>The records file is here:
<jab>I'm super excited to find some use cases for (thunked) in some of the fieldnames.
<jpoiret>what does thunked do again? are they the fields that are evaluated with a 'this-record reference in the context?
<jab>take a look at (type opensmtpd-table-type
<jab>in the file. the lambda just below it, uses a variable called "this-table", which is the record being defined.
<jab>In this case I am using the (thunked) to examine other fieldnames, to decide what this fieldname should be...
<jab>The other use case is here: (local-delivery opensmtpd-action-local-delivery
<jab>I've made one field have a default value...If the other fieldnames have a value, then this fieldname is given the value of #f.
<jab>It's a cool way of making fieldnames mutually exclusive.
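A simplified sketch of the pattern jab describes, using a hypothetical <table> record rather than the real opensmtpd ones:

```scheme
(use-modules (guix records))

;; A thunked field is re-evaluated on each access and can inspect the
;; record itself via the this-table identifier.
(define-record-type* <table> table make-table
  table?
  this-table
  (values table-values
          (default '()))
  (type   table-type
          (thunked)
          ;; Derived from another field unless set explicitly.
          (default (if (null? (table-values this-table))
                       'empty
                       'list))))
```

Here the default type would presumably come out as 'empty, while giving values a non-empty list flips it to 'list, unless the user sets type directly, which is the mutual-exclusion trick jab mentions.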
<jpoiret>hmmmm, from what i gather "thunked" is not just on init, it is effectively every time you access the record member
<jpoiret>one thing i don't understand then is how it differs from writing a simple procedure outside of the record definition
<jpoiret>oh, because you can actually specify a "non-default" one when creating the record, my bad.
<jab>jpoiret: well in my first use case, it seems simpler to use (thunked) to determine what type of data is in one fieldname than to use an outside function.
<jab>jpoiret: and you are probably right that it's effectively evaluated every time you access the record member.
<jpoiret>jab: there isn't a use-case when the end-user would want to set that field directly, right?
<jab>in the first case, by design the user SHOULD not set the fieldname type of record <opensmtpd-table>.
<jab>but I'm open to other ideas...what would be the use case?
<jab>Why should the user specify the type? Isn't it easier to let the program determine the type at run-time?
<jpoiret>i don't think there's any, and then it would make more sense to make type a procedure rather than a field, since it would be actually impossible to set the type to something else
<jpoiret>i don't think there's any reason to use a thunked field in this specific case
<jpoiret>(also the simpler the better imho)
<jab>jpoiret: ok. I suppose that your idea has merit...and it would probably help the next hacker after me understand what is going on....
<jpoiret>in any case you're going to call `opensmtpd-table-type`, so it doesn't hurt readability
<jab>jpoiret: what do you think about my second use case? ensuring fieldname's are mutually exclusively used?
<jab>that's on (local-delivery opensmtpd-action-local-delivery
<jab>Essentially, I want (service opensmtpd-service) to work for local delivery... so I am defining the default value of local-delivery.
<jab>But if someone says (service opensmtpd-service-type (opensmtpd-configuration (relay (opensmtpd-relay-configuration (host "smtp://"))))), then the local-delivery default value will be #f and NOT be used.
<jpoiret>i guess it works, but you could've also had a single field that would discriminate based on the type of its value, eg if it is an `opensmtpd-relay-configuration` or an `opensmtpd-local-delivery-configuration`
<jpoiret>(if you do want to make it exclusive)
<jpoiret>or just document that both modes are exclusive and let the user be responsible with its uses lol
<ajarara>If I have a patch dependent on another patch, should I lump the two patches in the same debbugs thread? Should I wait until merge/feedback of the dependency?
<ajarara>They are very different packages (libfido2 -> libcbor), so there's no real connection between them. Just thinking about what eases maintainers' lives.
<jab>jpoiret: oh...that's a good idea too. thanks for that suggestion!
<char>Is there a way to handle circular dependencies? It's for a testing library, so the testing library depends on the library it is testing. I don't really care about the testing library; I'm just trying to package the library being tested.
<nckx>char: Untested in more ways than one, but this is how I'd do it:
<nckx>With 1 less typo:
<nckx>But only 1, there's still another, of course.
<nckx>Can't make it too easy!
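Since the paste itself isn't in the log, here is a sketch of one common shape for breaking such a cycle (hypothetical names throughout; not necessarily what nckx's paste contained):

```scheme
;; Sketch: build a variant of the library with its tests disabled,
;; so it no longer needs the testing library. libfoo is hypothetical.
(define libfoo-minimal
  (package
    (inherit libfoo)
    (name "libfoo-minimal")
    (arguments `(#:tests? #f))
    (properties '((hidden? . #t)))))

;; The testing library then takes libfoo-minimal as an input, and the
;; real libfoo can list the testing library in native-inputs for its
;; own test suite, with no cycle.
```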
<mekeor[m]>can i roll back a system generation without needing an internet connection?
<mekeor[m]>currently it fails due to missing internet connection
<mekeor[m]>although i did not garbage-collect or anything. i just want to roll back to the last generation from some minutes ago