IRC channel logs
2025-11-04.log
<pomel0> I'm not sure how to get the PID of grim, since that program isn't running all the time, just for a moment when it takes the screenshot
<pomel0> ah, but the program runs too fast for me to get to /proc/PID/environ
<padtole> not sure how you're launching it, but a more aggressive hack is to replace the binary with a wrapper that lets you inspect things, e.g. it could log the command line and environment and spawn xterm
<padtole> not the "right" solution, but it's there if nobody else has one
<padtole> i guess you'd want to set a `guix gc --verify=contents,repair` running in the background after something like that
<pomel0> well, in the end, I just changed "exec grim" to "exec grim $HOME/Imágenes/Capturas/\"$(date +%c)\""
<pomel0> less elegant solution, but it's not like I need $GRIM_DEFAULT_DIR for any other program
<padtole> if you modified a package file, your changes will disappear on upgrade or repair unless you add them to the package definition
<pomel0> no, it's the sway home service configuration I edited this on
<padtole> oh cool, i still gotta learn about homes
<pomel0> yeah, I think it's become my favourite thing about guix
<pomel0> so far I mean (less than a week in lol)
<padtole> i didn't even know about them, i've been using packages, manifests, shells, systems and manual commands
<home-service-que> Hi folks... what is "extra-content (default: "") (type: raw-configuration-string)"?
<vhns> it seems it was something with my browser that was blocking it
<jlicht> I can't seem to use guix time-machine anymore from recently pulled guix
<trev> fanquake: do you think brink/opensats would be willing to look into the guix fundraising campaign, since it's important for reproducible builds of core?
<trev> perhaps you can look into it as well, futurile
<fanquake> trev: Yea quite possibly. I'll follow up
<fanquake> Is there a spending outline available somewhere?
<futurile> fanquake: yes, I also created a preliminary budget based on what we'd spent in the past and an estimate of what we might be able to do on fundraising. It was conservative, as there have never been any formal project sponsors. I can provide more info if that's helpful.
<futurile> trev: yeah, the fundraising has been taking a lot of effort so far (setting up platforms, dealing with them and keeping people updated). The next thing I want to do is (a) get people to promote that we're raising money (b) start exploring whether we can find organisations/institutions that would sponsor/support Guix through the Foundation
<futurile> I don't really know who those organisations are, so very open to ideas/help/pointers heh
<fanquake> futurile: I'd be happy to hear any more details
<futurile> in my imaginary, ideal world we could get enough 'energy' to look at things like speeding up Guix, actually promoting it, doing more to sponsor development etc. For now just trying to make sure we can be sustainable.
<trev> futurile: yep, just wanted to bring up some of the bitcoin charity orgs that help fund FOSS devs. I'm hopeful that they can help fund guix since it's integral to the main bitcoin implementation (bitcoin core)
<kestrelwx> simendsjo: Re: secrets, it was rekahsoft's project that only had metadata in configuration.
<NaN23> Hello! Anyone at 39C3 this year and interested in hosting self-organized sessions?
<simendsjo> I see. I'll just add the secrets to the required accounts manually for the time being. I thought about creating it myself, but my notes easily crept up in scope, and now it's not so simple anymore ;)
<sham1> Being able to do secrets in Guix would be amazing. Like, if I could just store my password encrypted in a repo and then have it get synced everywhere, that would be amazing
<sham1> One step closer to proper guix-ish gitops
<trev> sham1: why not just gpg encrypt a file and then load it in? ideally a scheme file
<sham1> Because I've never thought of that, tbh
<padtole> i'm on i686 without graphics, my system inits are getting hung up failing to build qtdeclarative. is there likely to be some way to simply not include this dependency, since i'm without graphics?
<padtole> i was trying to bootstrap minimally earlier and found most of the dependencies were wound up into gexpressions and bootstrap packages, which scared me; qtdeclarative looks easier though
<padtole> profile hooks ... no way to disable profile hooks without patching guix >(
<padtole> what's the "right" way to see what's pulling a dependency in?
<cbaines> padtole, you could try the --no-grafts --dry-run options, that might provide some more information
<padtole> kestrelwx thanks yes i do from the default config.scm omigod
<padtole> qtdeclarative is still there, but now i can check changes much faster
<janneke> still fighting with mixed rust/python package python-polars-runtime-32
<janneke> istm that i got everything built, now it barfs on installation; i either get (the default)
<janneke> error: found a virtual manifest at `/tmp/guix-build...polars/Cargo.toml` instead of a package manifest
<janneke> `cargo install` is only for installing programs, and can't be used with libraries.
<janneke> btw, there's no program, there's only a library; /me is baffled
<janneke> maybe i should just figure out which files to install where, and use copy-file
<sham1> finthecalculator: yeah, I know about sops-guix, but I can't, for example, make it so that my /etc/passwd entry for my user is populated from a secret there. Well, I probably *can*, but it would take a lot of effort and it probably wouldn't be as declarative as one would hope
<janneke> hmm, other packages suggest there should be a .so file
<sneek> Welcome back allana, you have 2 messages!
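[ed.: a minimal sketch of what trev suggests above, reading a gpg-encrypted Scheme file from Guile. The file name `secrets.scm.gpg` and the alist it decrypts to are made up for illustration; this assumes `gpg` is on PATH and can decrypt non-interactively.]

```scheme
;; Sketch: keep secrets in a gpg-encrypted file containing one Scheme
;; datum, and read it at configuration time.  File name and data shape
;; are hypothetical.
(use-modules (ice-9 popen))

(define (read-encrypted-secrets file)
  ;; Run `gpg --decrypt` and read the decrypted text as a Scheme datum.
  (let* ((port (open-input-pipe
                (string-append "gpg --quiet --decrypt " file)))
         (data (read port)))
    (close-pipe port)
    data))

;; Example use, assuming secrets.scm.gpg decrypts to an alist such as
;; ((wifi-password . "...") (smtp-token . "...")):
;; (assq-ref (read-encrypted-secrets "secrets.scm.gpg") 'wifi-password)
```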
<sneek> allana, csantosb says: "Changes which affect more than 300 dependent packages should first be pushed to a topic branch", see "22.11.2 Managing Patches and Branches"
<sneek> allana, gabber says: if you manage to get the attention of our release team this may well land on the release scheduled for the beginning of next year. have a look at the relevant discussions in our mailing lists for further information.
<avalenn> if I add a package gcc as an input to my package, I have access to the main output of gcc (gcc:out). How do I access another output (gcc:lib)?
<padtole> avalenn: i'm new to this but it looks like it's `(,gcc "lib"); i've also seen `("gcc" ,gcc "lib")
<avalenn> (list gcc "lib") indeed works, thank you, I will reread the manual to see how I missed it
<padtole> avalenn: clarifying that i have not read the whole manual myself, that would be very hard for me. but yes, it is generally assumed everybody has
<avalenn> If I want to use two different outputs of the same package, I have no choice but to use the deprecated version to explicitly differentiate both
<avalenn> identity: with (list `(,gcc "lib") `(,gcc "out")) both have the same key in the inputs alist
<yelninei> janneke: hurd and gnumach are updated to their latest snapshot again, I also replaced the intr-msg-clobber glibc patch with the fix from glibc 2.42. Plus some little things all over the place (e.g. a previously automatically skipped coreutils test that fails after correcting the location of the mtab file)
<sneek> yelninei, you have 1 message!
<hako> "search-input-file" or "search-input-directory"?
<identity> avalenn: but does that actually matter for e.g. (search-input-file inputs "/lib/whatever-gcc:lib-provides")?
<padtole> gnuboot offers a src tarball which has a lot of needed snapshots in it, but is missing the configure script. they also offer an adjacent git .bundle that has the autoconf stuff. i'm thinking of downloading both and merging the dirs, but i'm not sure how to write the origin. maybe a snippet that references the bundle as a second origin somehow? (or i could git-fetch and write a snippet that collects
<yelninei> sneek later tell apteryx: Did you solve the issue with missing bunzip2? Should we just skip the elfutils tests in that case?
<identity> padtole: you can include an origin as an input
<padtole> identity: something like (let ((origin-2 (origin ...))) (package (inputs (list origin-2) (source (origin (snippet #~( ... #$origin-2 ...)))))) ?
<identity> i think you missed a closing bracket there
<padtole> recalling that gexps track their own inputs, thinking i can go #$(origin ...)
<identity> i do not know what you are trying to achieve, so i do not know whether what you are trying to do is a good approach, but i assume that should work
<hanker> I'm trying to run a Guix container with `sudo $(guix system container ./system.scm)`.
<hanker>  8 (primitive-load "/gnu/store/kk0yj4xgncrilm7p2cp7h7knfqb?")
<hanker> In gnu/build/linux-container.scm:
<hanker>  365:8 7 (call-with-temporary-directory #<procedure 7f85c7d0caf0?>)
<hanker>  483:16 6 (_ "/tmp/guix-directory.qwEJlD")
<hanker>  1270:26 5 (safe-clone 2114060305 #<procedure 7f85c8461b80 at gnu?> ?)
<ieure> hanker, See topic, use a pastebin.
<padtole> identity: i'm trying to perform a git fetch after unpacking a tarball, roughly. i think i've inferred that i should add the git bundle as an input and perform the fetch in a build phase. not sure how to access inputs from a snippet
<FuncProgLinux> padtole: I think the best approach to that would be to have two separate packages.
<padtole> FuncProgLinux: two packages for one source tree? there's a .tar.xz that has everything but is missing the autoconf files and doesn't make; then there's the .bundle that has the .git tree, so if i fetch both and reset --hard from the bundle, in the source tree, it should start building
<ieure> padtole, Why don't you build from the upstream Git repo?
<ieure> Sounds like the release tarball situation is totally wack.
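[ed.: avalenn's outputs question above, as a hedged fragment. The `lib/libgcc_s.so.1` path is just an example of a file one might expect in gcc:lib.]

```scheme
;; Selecting specific outputs with "new style" inputs: a bare package
;; means its default output, a (package "output") pair names another.
(inputs
 (list gcc              ; gcc:out, the default output
       `(,gcc "lib")))  ; the gcc:lib output

;; Both entries share the "gcc" label in the inputs alist, but a build
;; phase can still locate a file unambiguously, for instance:
;;   (search-input-file inputs "lib/libgcc_s.so.1")
```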
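[ed.: a sketch of identity's "include an origin as an input" idea, with balanced parentheses. URLs are placeholders and the hashes are dummies; since gexps track their own inputs, `#$git-bundle` expands to that origin's store path inside the snippet.]

```scheme
;; Hypothetical second origin for the git .bundle; URL and hash are
;; placeholders, not real gnuboot artifacts.
(define git-bundle
  (origin
    (method url-fetch)
    (uri "https://example.org/gnuboot.bundle")
    (sha256
     (base32 "0000000000000000000000000000000000000000000000000000"))))

;; The main source origin references the bundle from its snippet.
(define source-with-bundle
  (origin
    (method url-fetch)
    (uri "https://example.org/gnuboot-src.tar.xz")
    (sha256
     (base32 "0000000000000000000000000000000000000000000000000000"))
    (snippet
     ;; #$git-bundle lowers to the bundle's store file name.
     #~(begin
         (copy-file #$git-bundle "gnuboot.bundle")
         #t))))
```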
<padtole> ieure: just wasn't sure if that's advised; i read it's better to use files than git (personally i would disagree)
<FuncProgLinux> padtole: I don't know if this is similar to the newer MATE sources that require cloning submodules. But you can enable (recursive? #t) if you do need submodules to generate your ./configure script
<ieure> padtole, Strongly disagree with that. I think any steps between the Git repo and the build are attack surface. That's how the xz backdoor was injected.
<padtole> ieure: oh, it's because the projects packaging coreboot download a ton of other projects before even starting the build, when built from git
<ieure> padtole, Sure. You'd have to handle that stuff within the build, either with the source origins in your package inputs, or by packaging the dependencies and pointing the coreboot build at them.
<padtole> libreboot and derivatives are basically a suite of custom build tooling that downloads and builds other projects in specific manners, all based on trees of shell scripts and configure data. it would be ideal to redo it with guix packages, but it seems a big enterprise. maybe i can find a middle ground somewhere. we'll see
<FuncProgLinux> oof, that's way out of my league :s but I too find it strange to build from pre-built tarballs rather than git sources
<padtole> some way of upgrading packages to hold tarball information alongside git information would seem nice to me, to kind of partially migrate packages between the two representations, or let the user choose some day
<FuncProgLinux> It is rather annoying. At least on the MATE side, the packages I managed to update/rebuild from git sources required at least 3 more native-inputs
<ieure> Every time I get sad about automake/autoconf, I look at how new languages like Go and Rust work, and get even sadder.
<ieure> Can we really not come up with better solutions than this? sigh
<FuncProgLinux> I knew (or at least read in some forums) that Rust's cargo was inspired by npm, so yeah... now I know why a simple CLI tool suddenly reports 500+ crates being downloaded/built.
<rrobin> the number-of-crates explosion problem is actually in reverse
<ieure> I do not like Go much at all.
<ieure> It has the most barbaric error handling of just about anything (it does, sort of, clear the extremely low bar of "better than C", though with a 30-year gap between C and Go, this deserves precisely zero props.)
<rrobin> crates.io crates can only host a single "library" inside, so people are forced to have multiple crates if they have multiple compilation units (for other reasons)
<ieure> And even though this is the #1 complaint of most Go programmers, the dev team has not only decided not to do anything about it, but to stop even trying. awesome. great
<ieure> errors as values can be fine... but no tooling to handle them better than manually unwinding the stack after every function call is awful. And (SomeValueType, err) is unrepresentable in user code, so you can't program your way out of the mess. No stack traces, completely bananas that this was even seriously considered, much less unfixed after 15 years.
<ieure> Competing, mutually incompatible third-party libraries that try to fill in the gaps. No inheritance in the type system, so your code is full of ugly handling for wrapped errors and smuggling richer values in under the error interface umbrella.
<ieure> Just a terrible, awful system where every "hold my beer" design decision compounds into Pain.
<FuncProgLinux> I found Go very similar to Pascal/Delphi. I would have preferred that object system.
<cdegroot> I think that Golang was developed so Google could hire cheaper programmers; never understood why people outside Google wanted to use it. "You can be absolutely lazy about distribution because it creates bloated static executables" has never been a compelling reason to me...
<cdegroot> (I thought they had something in terms of concurrency; luckily someone relieved me of that notion during a conference talk ;-))
<ieure> Go's concurrency has never impressed me. Like a lot of Go things, it's easy to start using, but very difficult to use correctly.
<identity> cdegroot: static executables are very useful, unless you want to create, on average, 2 tarballs for every operating system you target; i consider laziness to be a virtue in this situation
<cdegroot> Yeah, but here I am as an outsider. I look at Golang when it is released, because, well, I'm assuming that Google does not build a programming language for fun. I declare it too limited for my taste - and I think that was a correct judgment - but something lingers around the whole thing about "awesome concurrency", etc.
<cdegroot> Luckily, someone at CodeBEAM then patiently explains how she's worked extensively in both languages, how the concurrency features of the one map _exactly_ onto the other, and that she prefers the BEAM (Erlang) model. That was a big relief :)
<cdegroot> The whole static-executable thing to me is something "below the line". A nice-to-have once you've already decided that it's the language for you. Then again, I spent a good chunk of the '90s automating builds targeting all the operating systems, so I probably have a more than average grasp of that.
<cdegroot> Smells like SREs like it most, and they have a history of chasing bad languages ;-) (Perl, Python, Golang, I guess they're now all aboard the Rust bandwagon? Each one of them unfit for what they need; they should've learned Guile back in the day)
<FuncProgLinux> cdegroot: I find it useful as well, but it dies when the executable is not 100% purely static
<identity> Go makes cross-compiling as easy as just setting an environment variable or two and running the compiler again, from what i heard
<cdegroot> Do you _need_ it? I think not. I mean, don't you want integration tests and platform-native CI/CD anyway?
<FuncProgLinux> cdegroot: Oh lol, you got me with that one xD Go and Perl. I did learn Rust but ended up discouraged when most things required a crate and when people got on the same JS trend of rewriting everything.
<identity> the Go runtime *is* awesomely concurrent, though other things can be said about the language itself. most concurrent runtimes can not detect a deadlock, nor do they have a garbage collector that can compete with Go's. Go took no small part in (trying to) make the «GC is terribly slow» myth go away
<ieure> identity, Cross-compiling is all fun and games until you end up with a bunch of comments holding ad-hoc build configuration syntax and extra source files.
<cdegroot> I really am on board with the idea that the Erlang style of concurrency is much easier to understand and work with, with hardly any drawbacks.
<FuncProgLinux> (I say it as a very biased SRE lol) I would have loved to know more in-depth things about programming. But I still need to learn more C because g
<cdegroot> And I am serious when I say that SREs should start using Guile as well :-)
<cdegroot> (I would not learn C. And that comes from someone who knows the language, likes it a lot, and has worked in it for the better part of 40 years. Rust does away with so many bad things and you don't _need_ all the crates)
<identity> i could say a bunch of less-than-nice things about Erlang, but i would give up my role as devil's advocate (and those are more nitpicks in the grand scheme of things)
<cdegroot> Rust essentially codifies what "we C coders" had for _years_ as best coding practices. I think it's worth it. I also think that unless you're writing a kernel or device drivers, or maybe the core couple of bits for a language runtime, there's no need for it.
<cdegroot> identity: Erlang-the-language? "meh". BEAM+OTP - the runtime system? Best concurrency/distribution platform ever. I use Elixir; it's a kind of Lisp thing I can live with :)
<identity> cdegroot: i meant Erlang-the-whole-thing, really
<cdegroot> "it works and it is boring", which is my favorite style of doing things.
<cdegroot> (I've seen one other distribution/concurrency platform up close that I think was good, Java+Jini, but that got drowned in a sea of nonsense. Deep inside there's good stuff, but you need to dig hard)
<identity> boring is good when you are getting things done, but not as much when exploring, and i mostly explore
<cdegroot> Yeah, I explore in Lisp and write daytime production code in Elixir, and that makes an otherwise very exciting job quite boring, in a good way :)
<cdegroot> (one of my projects is seeing whether I can port all of OTP on top of Common Lisp)
<FuncProgLinux> I'm stuck with TypeScript and Go, basically. I don't have many issues with the latter... but the first one.. oh no.
<cdegroot> There's plenty of "let's just take the simple stuff" - actor libraries, etc. But to me one important thing about OTP is that you can have a million active processes and they all get GC'ed individually.
<cdegroot> So it will have to end up as a VM layer of sorts on top of CL; quite a project. But a fun one, and it would give me my dream setup :)
<dthompson> the big issue with erlang-style actors is the lack of capability security. you can enumerate processes.
<yelninei> what would be a good branch for updating dbus?
<cdegroot> dthompson: yes. It's one of the things I loved about Java+Jini (the JVM is completely capability-based) and one of the things I would add. In fact, I have dug through the BEAM source code to see what adding capabilities would entail (too much lol)
<cdegroot> However, for all practical purposes, it's not a big issue for most systems, which run just trusted code everywhere.
<home-service-que> Hm, is anyone using autossh? I think i found a bug but not 100% sure... maybe a handling error from my side..
<home-service-que> - If autossh is configured for a user, it will start at boot and not create a PID file, which breaks the autostart
<home-service-que> - Only if autossh is restarted after login via `sudo herd restart autossh` is the PID file created and the functionality given
<yelninei> ieure: Wouldn't it make sense to combine it with other big-rebuild updates?
<ieure> yelninei, Not sure what other stuff like that is pending. I think since last year's extremely long core-updates cycle, folks are more hesitant to bundle unrelated builds together.
<ieure> yelninei, I'd probably just do it separately.
<sham1> Is there some way I could do the equivalent of `guix time-machine -C channels-template.scm -- describe -f channels > channels-lock.scm` programmatically from Scheme? Obviously this command could just be run by a makefile, but there's some stuff I would want to do with the resulting channel list which would be nice if I didn't have to `(load)` it or any of that stuff
<sham1> In this case `channels-template.scm` is just a channels file where the channels aren't pinned to any commit, and then the lock of course will have the commits on them
<ieure> sham1, Fetch the Git repos for your templates, update channels-template with the commit hash of their HEADs.
<ieure> time-machine is a *really* slow way to do this.
<ieure> I don't know what Guile / guile-git stuff you'd need, but there's gotta be the stuff you want in Guix, otherwise the command you pasted wouldn't work.
<sham1> Yeah, that's what I'm also thinking of. I shouldn't need to build the entire guix command and all that just to run `describe` just to update my lockfile
<sham1> In fairness, it does make the subsequent steps of updating the home environment, system, and then doing a pull faster, because it's of course already pulled
<padtole> arright, i've written some boilerplate to package the different subbuilds of a libreboot tree into packages, making working with them much more manageable
<padtole> do i have to turn its download logic into origins, or can i just let it perform its network download? it says name lookup failed
<ieure> padtole, Guix packages *cannot* access the network during build.
<sham1> Makes it hermetically sealed
<ieure> Yes, it's a critical piece of making builds reproducible.
<padtole> i am thinking evilly to myself that the download methods do this somehow and i could copy them ...
<sham1> Don't they do the thing before the build?
<padtole> can i extract the commit hashes from the metatree origin, or do i have to code them all into the package .scm?
<ieure> I don't understand the question.
<Rutherther> git-fetch removes the .git folder, so in the build you cannot extract any commits anymore
<ieure> The Guix stuff that manages the package builds will make sure any origins needed have been put in the store, that their hashes match, and expose them to the build container via the filesystem. The actual package build doesn't have network access, so there is nothing you can copy into your package build which would give you network access.
<padtole> the things it downloads are listed within the source code of the build repository -- it has a config file for each subproject, listing commits. so if i need to turn them into origins, i'd be reinventing the processing of these config files. do i have to do that in advance ...?
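[ed.: the channels-template/channels-lock pair sham1 describes above might look like the following; the commit hash is a placeholder.]

```scheme
;; channels-template.scm: the channel is left unpinned.
(list (channel
       (name 'guix)
       (url "https://git.savannah.gnu.org/git/guix.git")))

;; channels-lock.scm: the same channel pinned to a commit, as
;; `describe -f channels` would print it (placeholder hash):
(list (channel
       (name 'guix)
       (url "https://git.savannah.gnu.org/git/guix.git")
       (commit "0123456789abcdef...")))
```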
<sham1> I guess you could have a procedure that produces a bunch of origin records
<sham1> And then just list the stuff somewhere
<padtole> ok, i'm looking at librewolf and found a computed origin that uses a uri with a promise, maybe i can figure this out 0.0
<ieure> padtole, When you ask "do i have to do that in advance ...?" do you mean: can you compute the origins as part of the package build process?
<ieure> That would break reproducibility, because it's effectively letting a package download arbitrary stuff from the Internet during its build.
<ieure> It'd just be a more annoying way of letting the builds run curl and download junk willy-nilly.
<Rutherther> what you could do is make a new origin / fixed-output derivation that will fetch them, and then just put in the hash for all the stuff. But such a package will probably not be accepted into the guix repo, if contributing it there is your goal
<Rutherther> really, fixed-output derivations can run any arbitrary commands, be it git clone, git fetch, wget, etc.
<padtole> well, i would actually like to develop good practices
<ieure> padtole, I think your problems fall into the "maintainer automation" bucket, and aren't really part of the package build process itself. They're tools to let people maintain the package more easily.
<padtole> i'm trafficked by an AI virus, we should wait for the netsplit to resolve to not worsen anything
<padtole> i am so sorry if i was immensely confusing, i had a netsplit and got scared
<Rutherther> padtole: imo a good practice here would be to generate the set of origins from the information in the repo beforehand, as part of the packaging process
<ieure> padtole, I did not have a netsplit, and I'm not sure why they're scary.
<ieure> And come up with some way to make that reusable for when you need to update it.
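[ed.: sham1's and Rutherther's suggestion above, sketched: a procedure turning a list of (name commit hash) entries, transcribed in advance from the build tree's config files, into git-fetch origins. The URL scheme, project names, and hashes are all placeholders.]

```scheme
;; One origin per subproject, generated from data kept in the .scm
;; file.  Everything below is illustrative, not real gnuboot data.
(define (subproject-origin name commit hash)
  (origin
    (method git-fetch)
    (uri (git-reference
          (url (string-append "https://example.org/" name ".git"))
          (commit commit)))
    (file-name (git-file-name name commit))
    (sha256 (base32 hash))))

;; Transcribed beforehand from the repo's per-subproject config files;
;; commits and hashes here are dummy placeholders.
(define subproject-sources
  (map (lambda (entry) (apply subproject-origin entry))
       '(("coreboot"
          "0000000000000000000000000000000000000000"
          "0000000000000000000000000000000000000000000000000000")
         ("seabios"
          "0000000000000000000000000000000000000000"
          "0000000000000000000000000000000000000000000000000000"))))
```

These origins can then be listed as package inputs, so the fetching happens before the (network-less) build, and updating a subproject means editing one entry.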
<padtole> ieure: thank you for your feedback, maybe i need to learn more about netsplits
<identity> ieure: i mean, suddenly seeing a *lot* of QUITs scroll by really fast is a bit scary, is it not?
<padtole> ok, so all the hashes go in the .scm file, that does sound like a good idea
<ieure> padtole, Netsplits are when the connectivity between IRC servers is disrupted and one or more fall out of sync. So depending on which server you're on, you end up only talking to the subset of users reachable on your side of the split.
<padtole> my mental model of them implied that all users would see the same netsplits, but i don't actually know how they work, i'd have to read up on it
<sham1> Yeah, the only "QUIT" messages you get are for people on the other servers in the IRC server network
<ieure> padtole, No, what you can see during a split depends on your position within the user/server topology.
<padtole> i am thankful for guix's pursuit of reproducibility and transparency
<ieure> padtole, In the 90s, you'd take over channels on EFnet by forcing a server with no users to split, connecting to it, and joining the target channel. From the split server's perspective, you'd be creating the channel, so you'd gain ops. When the split healed, you'd retain ops, and could kick the others out, add a channel key, etc.
<padtole> back when most of the world was offline, we did that stuff for fun
<ieure> padtole, Fond memories of the /WINNUKE script someone whipped up.
<ieure> EFnet #windowsnt was a battlefield while that vuln was unpatched.
<ieure> I don't think people quite grasp how bad security was back then. It was *wild*.
<ieure> Winnuke was a one-packet buffer overflow that would bluescreen any Windows machine on the Internet.
<padtole> a couple of times i explored those things; once my winxp was hacked and i looked it up, used the same vulnerability to attack back the ip that hacked me. found somebody had left a note on their desktop telling them to secure their machine.
<ieure> Microsoft took 2 or 3 days to patch it. And their patch didn't fix the actual problem; it looked for a magic string in the most popular exploit that was circulating, so you could change one byte and it'd work again. And it took *weeks* for a proper patch to land.
<padtole> it's amazing for me to be in these chats with people nowadays. very rare. i'm sorry if i come off weird, it's been a couple of decades since i chatted with coders.
<ieure> I was only barely a coder back then.
<tux0r> real programmers will never call themselves "coders". coders are the ones who do the dirty work for programmers. ;o)
<sham1> Real programmers don't use Pascal
<padtole> mainstream languages are all crippled; currently thinking maybe there's a lisp engine faster than scheme
<tux0r> sham1: i use pascal *and* lisp! am i a real programmer now? :/
<padtole> anybody who's a real programmer can show it by just writing an Ada implementation so that coreboot's native graphics layer can be included in guix
<cdegroot> padtole: pretty much all Common Lisps I looked at spit out machine code, and with optimization settings and type annotations emit very compact machine code.
<tux0r> cdegroot: clasp spits out .net code ;o)
<sham1> Yeah, SBCL is very good, although IIRC there are other free-software CL implementations that might even be better nowadays, dunno. And of course some of the non-free ones also exist and output good enough code
<cdegroot> SBCL has a pluggable backend, so you can even make it emit asm for extensions that your CPU happens to support but SBCL in general does not.
<tux0r> sham1: clozure cl is still pretty nice, to be honest. but i "only" have sbcl around - "good enough" for most cases, except that its stacktraces *could* be better
<nixos-liveboot> Hello Guix maintainers, just finished a fresh Guix System install (encrypted root) over a previous NixOS setup. The install went fine, but it didn't register the Guix EFI boot entry; it only left /boot/efi/EFI/Guix/grubx64.efi without adding it to NVRAM. Had to manually run `efibootmgr --create --disk /dev/nvme0n1 --part 1 --label "GuixOS" --loader '\EFI\Guix\grubx64.efi'` to make it boot (about to reboot, I am in a liveboot right now, so I don't know yet if that worked).
<nixos-liveboot> also, the installer kept failing several times during the bootstrap package fetch step (TLS handshake errors when fetching guile/gcc/python/ruby etc). retried a few times and it eventually worked, but it wastes my time and your bandwidth.
<ieure> Network stuff is 100% because they installed 1.4.0 and it points to savannah.
<sham1> It'll be interesting to see how much better things will be when 1.5.0 comes out
<sham1> And we get new ISOs on the page
<cdegroot> Yes. I recently went through the same exercise (NixOS -> Guix, not encrypted but "worse", stuff on LVM) and... it's a tad rough. I tried to bake a fresh ISO but for some reason that didn't pan out either.
<sham1> Did you try a kettle instead