IRC channel logs

2025-08-28.log


<ieure>It should be unmounting them, the kernel flushes before unmounting.
<madage>I've recently had a data corruption issue which was being caused by the ssd firmware taking too long to write and hanging tasks
<madage>after trimming, the issue has stopped
<nikolar>which ssh
<nikolar>*sshd
<nikolar>*ssd
<nikolar>my fingers really want to type ssh
<madage>heheh... old habits die hard
<madage>hmm I don't remember the brand
<madage>let me see if I still have hdparm
<Deltafire>i enabled trim on my desktop machine, it's surprising how much time it adds to 'guix gc'
<madage>hmm does not say the vendor name... model p3-256
<madage>you mean continuous trim?
<madage>I've just trimmed once and intend to do so manually
<nikolar>it's probably fine to add it to cron on a daily basis or something
<Deltafire>yeah, added it to the mount options
<vagrantc>Deltafire: that is generally discouraged these days and likely to wear it out faster
<Deltafire>oh? i'll remove it then
<vagrantc>cron or similar makes more sense, as it can batch the operations all at once
<vagrantc>weekly or daily, depending on the amount of churn
<vagrantc>and apparently a performance hit
<vagrantc>:)
<madage>I've heard weekly recommended elsewhere, but thought I could take the safer approach and do it now and then, after garbage collecting
<madage>since it's such a poor man's ssd
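The batched approach vagrantc describes is usually a single scheduled `fstrim` run. A sketch of what that could look like as a crontab entry; the schedule and the use of classic cron are illustrative (on Guix System the same thing would be expressed as an mcron job):

```shell
# Weekly batched TRIM of all mounted filesystems that support discard,
# instead of the continuous 'discard' mount option.
# Illustrative crontab line (Sunday, 03:00):
#   0 3 * * 0  root  fstrim --all --verbose
fstrim --all --verbose   # needs root; prints how much was trimmed per mount
```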
<Deltafire>from what i can see, the reboot action in shepherd calls libc reboot (it doesn't unmount filesystems)
<Deltafire>the man page for that warns: If not preceded by a sync(2), data will be lost.
<Deltafire>i don't see any sync or unmount happening in shepherd, but i might be missing something
<Deltafire>the reboot action also handles halt (poweroff)
<Deltafire>checking /var/log/messages, just a message from elogind reporting that the system is rebooting
<Deltafire>it does stop a bunch of file systems, no message about stopping the root filesystem but if it was stopped it couldn't write to the log
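For reference, the ordering a crash-safe shutdown needs, sketched in shell; the destructive steps are commented out here because they would actually unmount and reboot (see sync(2) and reboot(2), which Deltafire quotes above):

```shell
# Ordering matters: flush first, then detach, then reboot.
sync                       # write all dirty pages to disk (sync(2))
echo "filesystems flushed"
# umount -a -r             # unmount what we can, remount the rest read-only
# reboot -f                # only now is it safe to invoke reboot(2)
```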
<vagrantc>hmmmmm....
<Kabouik>unmatched-paren: just checking on you. You helped me years ago, multiple times, when I was starting to package my first Guix packages, and this was greatly appreciated. I remember you were quite active, but now I realize I haven't seen you posting in years, and your public sourcehut repo also hasn't been updated in two years. Still here?
<Kabouik>untrusem: yay, finally it works: https://git.sr.ht/~mlaparie/guix-private-channel/tree/master/item/mlaparie/packages/rust-apps.scm (not even sure what was the culprit but it's late enough for me to be happy anyway).
<Kabouik>Thanks a lot for the help.
<apteryx>feedback from a non-technical user: struggling to update their software because sometimes substitutes are missing and local building is triggered and fails after a few other packages (qtbase in this case), possibly because of the lack of memory on their machine.
<apteryx>or maybe their substitute servers are misconfigured
<untrusem>Kabouik: glad I could help, it seems you just changed `cargo-inputs` to `mlaparie-cargo-inputs`
<untrusem>apteryx: guix has non-technical users, interesting
<simendsjo>untrusem: I guess they are using guix on a foreign distro and just doing `guix pull` and `guix install`...? I'm not sure how much further you can go without getting into some scheme hacking.
<frankie>Hi, on guix pull I get: "warning: failed to load '(guix platforms loongarch)': In procedure abi-check: #<record-type <platform>>: record ABI mismatch; recompilation needed". I guess this is a problem with an already-built .go file for that module, but how can I solve this kind of warning?
<frankie>Interestingly enough it happens only for the regular user, not for root. I also tried to remove the personal profile, without result.
<Deltafire>hmm.. shutdown my laptop last night, and this morning it complained that / was not cleanly unmounted and needed a manual fsck (with errors)
<identity>emacs*xwidgets packages seem to be broken because of webkit version mismatch
<untrusem>note: currently hard linking saves 50141.32 MiB
<untrusem>guix gc shows this every time, what is hard linking and is it recommended?
<identity>untrusem: hard linking is, basically, merging identical files so they are stored in the same place on the filesystem. the daemon does it automatically when building stuff, iirc
<Kabouik>untrusem: No, I changed other things, because just that change was tried in an earlier commit and didn't work. Among other things, I also renamed the package to just its rust name, instead of `rust-packagename` as I think was the standard before
<untrusem>Kabouik, nice
<untrusem>identity, ohh, soft linking is like copying then?
<untrusem>symlink i think
<Kabouik>I don't know if that was the culprit though, but if it was, it's frustrating because it's stupid and I tried similar things earlier, but probably forgot one occurrence or something.
<untrusem>I should package my rust application too, it depends on other applications too, so it's a little complicated
<identity>untrusem: no, symbolic linking is like a pointer to a different file, “look over there instead”
<untrusem>I see, I always forget these terms
<untrusem>even though I use it
<untrusem>well, do you hard-link your store?
<identity>“The daemon performs deduplication [in other words, hard-linking] after each successful build or archive import, unless it was started with ‘--disable-deduplication’.”
<identity>so most people do, i assume
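identity's distinction can be seen with plain coreutils. A minimal demo of what the daemon's deduplication does (hard links: several names for one inode) versus a symlink (a pointer to a name):

```shell
d=$(mktemp -d) && cd "$d"
echo "same bytes" > a
ln a b             # hard link: 'b' is a second name for a's inode
ln -s a c          # symlink: 'c' merely points at the name "a"
stat -c %h a       # link count of the inode is now 2
readlink c         # prints the symlink target: a
```

Deleting `a` leaves `b` intact (the inode survives while any name remains), but dangles the symlink `c`.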
<untrusem>I see, I will look into it, now I have to make a habit of using sync after guix pull, gc and reconfigure
<untrusem>though it's been a month since I last messed up my guix
<Deltafire>heh, i've just got mine back together after mystery file system corruption
<civodul>Hello Guix!
<identity>hi civodul
<untrusem>> heh, i've just got mine back together after mystery file system corruption
<untrusem>I have broken my guix 4 times, how about you?
<untrusem>btw what would be the debugging approach for a `no code for module` error?
<untrusem>I am trying `-L` to build a package but guix can't find it and shows the above error
<Deltafire>twice on this laptop, both due to filesystem errors. Luckily i've got /home on a separate partition
<Deltafire>not had any issues with the desktop computer, which gets a lot more use
<Deltafire><untrusem> btw what would be the debugging approach for a `no code for module` error? - after my recent experience, i'd check the modules haven't become 0-byte files ;)
<untrusem>ok I solved it
<untrusem>turns out i need to be in the correct path; for a (verito package system) module file i need to be in the directory one above the verito one
<charlesroelli>Is it possible to suspend a guix shell? If I run "suspend" in a normal bash subshell, I see "[1]+ Stopped bash" from the parent shell, and if I run "suspend" in a guix shell, it gets stuck and I have to "kill -CONT" the process to resume it.
<identity>‘guix’ spawns a subshell, and when you suspend the subshell you suspend just that: the subshell, not ‘guix’, and guix is waiting for the shell to quit, which it does not
<untrusem>Kabouik: what did you do to solve the error, can you elaborate, getting the same error, lol
<untrusem>though I am packaging a program not available as crates
<untrusem>let me read the cookbook again
<Kabouik>Damn, I rebased my commit history to hide the insane mess my different attempts were. :p
<Kabouik>There are still some commits related to that rust package (tock) here: https://git.sr.ht/~mlaparie/guix-private-channel/log I would try first to rename the package to its original name, not rust-something, if that's what you're using. Also I added the guix utils module to rust-crates.scm, which I think was not in the template.
<untrusem>ohh I think it's because my package has a cargo workspace dependency, it's not as simple as this one :
<charlesroelli>identity: thanks, it makes sense
<Kabouik>Yeah, I was surprised too that someone like me could help someone like you regarding packaging, to be honest!
<untrusem>ehh, I am a complete noob
<euouae>It's time that I try to move to Guix again
<untrusem>euouae: welcome back
<chuck1316>Hey guys, does anybody know if there is some sort of automation for writing and debugging Guile Scheme code? I have thought of an approach where there is a knowledge base of code scraped from the Guix main repo, which interacts with Claude Code using RAG, to code and debug for my use case. Has anybody heard of anything like that?
<untrusem>btw in my guix fork some files have a `.in` extension, do we need to remove the extension by hand or is there an automated script for that?
<untrusem>like the pre-inst-env.in , guix.in etc
<euouae>these are autotools files
<euouae>the .in is for input and they're templates processed to `pre-inst-env` and `guix` after autoconf/automake
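The substitution euouae describes is done by config.status after configure runs; it can be imitated with sed. A toy demo (the placeholder name and the path value are illustrative):

```shell
d=$(mktemp -d) && cd "$d"
# a toy template in the style of pre-inst-env.in
cat > pre-inst-env.in <<'EOF'
#!/bin/sh
abs_top_srcdir="@abs_top_srcdir@"
echo "$abs_top_srcdir"
EOF
# config.status substitutes the values configure discovered;
# sed imitates one such substitution here
sed 's|@abs_top_srcdir@|/home/user/guix|' pre-inst-env.in > pre-inst-env
sh pre-inst-env    # prints the substituted path
```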
<untrusem>so to build them I need to run autoconf?
<euouae>well yes, where did you find them?
<untrusem>in my forked guix repo
<euouae>what repo did you fork? link?
<untrusem>codeberg.org/guix/guix
<euouae>there are no top-dir *.in files, where are you looking?
<untrusem>in build-aux/ and in scripts/
<untrusem>there were pre-inst-env.in and guix.in respectively
<euouae>pre-inst-env is the shell script you run to load up the proper environment to work in the directory
<euouae>In <https://guix.gnu.org/manual/en/html_node/Requirements.html> it says "The build procedure for Guix is the same as for other GNU software, and is not covered here." and to see README and INSTALL
<euouae>INSTALL is not in the tree, so that's a mistake, they probably renamed it at some point
<euouae>untrusem: Building from git gives you these instructions, that's the ones you should follow <https://guix.gnu.org/manual/en/html_node/Building-from-Git.html>
<untrusem>ohh I was asking because for contributing they say to use `./pre-inst-env guix lint package` and I didn't have that script
<euouae>You want to contribute where?
<untrusem> https://codeberg.org/guix/guix/pulls/2205#issuecomment-6747052
<untrusem>I normally just run `guix lint <package>` but they suggested this so looked into this
<euouae>yeah you don't have to use ./pre-inst-env.
<euouae>It's a technicality, in the unlikely situation that your `guix lint` is outdated from ^HEAD it won't be a crime
<euouae>But what he's suggesting has to do with hacking guix directly from its source tree
<euouae>There might be a guide for the workflow for that, you could just ask him. I am not familiar enough with guix contributions
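For reference, the Building from Git workflow that produces pre-inst-env, as sketched in the manual page euouae linked; `<package>` is a placeholder:

```shell
guix shell -D guix --pure          # development environment with the build deps
./bootstrap                        # autotools: generates the configure script
./configure --localstatedir=/var   # instantiates the .in templates, incl. pre-inst-env
make
./pre-inst-env guix lint <package> # run the freshly built guix from the tree
```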
<untrusem>csantosb, you here?
<cbaines>civodul, hey, are you around?
<civodul>cbaines: hi! yes!
<civodul>wazup?
<cbaines>civodul, I did reply to https://codeberg.org/guix/maintenance/issues/24 but I'm happy to try and talk through the details in case you're unsure
<civodul>cbaines: thanks! so yes, in short, we need changes in qa-frontpage so that it would submit derivations from data.qa, right?
<civodul>i skimmed over manage-builds.scm but it’s not clear yet to me what to do
<cbaines>that sounds reasonable, it's important to keep in mind that for the build coordinator, there's not the same one to one mapping between derivations and builds that there is in Cuirass
<cbaines>so the qa-frontpage is making requests to the build-coordinator to submit builds for derivations related to patch series (and branches)
<civodul>right
<civodul>i see the sub-commands in guix-qa-frontpage.in, which relate to issues or branches
<civodul>i guess we would have sub-commands dealing with PRs? or can we have something unaware of PRs that just deals with “repositories” and “branches” in the sense of the Data Service?
<civodul>(as a first step at least)
<cbaines>civodul, the qa-frontpage does submit different tags based on why it's submitting the build (as well as picking different priorities), and that behaviour is based on what the builds relate to (e.g. a branch or patch series, and even how many packages the patch series affects)
<cbaines>e.g. looking at the current activity https://bordeaux.guix.gnu.org/activity you can see from the tags that these builds have come from the qa-frontpage e.g. https://bordeaux.guix.gnu.org/build/271d4654-1e47-4395-9c16-bded413e70d4
<cbaines>these tags are quite important as it allows the qa-frontpage to manage the builds
<cbaines>so when a branch is pushed to and there's a new revision, the qa-frontpage finds the builds that don't relate to the most recent revision (by querying the build-coordinator for builds by the tags), and cancels them
<cbaines>there's similar stuff for patch series, it also notices when it should no longer be building a branch or processing a patch series, and cancels all the builds
<civodul>ok
<cbaines>this isn't just a neatness thing, but it's important because the derivations for these builds might be removed from the QA data service since it should be removing those revisions
<cbaines>and you're not going to be able to perform the build if the derivation is no longer substitutable
<civodul>right
<civodul>so we’d create builds with tags like ‘pull-request: 123’
<cbaines>yeah
<civodul>problem is, the Data Service has PR derivations, but it doesn’t have provenance metadata about them (it doesn’t know which PR they correspond to, etc.)
<cbaines>Maybe the qa-frontpage can get the HEAD for each Pull Request from Codeberg, then it can just look up the corresponding revision on the data service
<cbaines>e.g. https://codeberg.org/guix/guix/pulls/2352
<cbaines>get the HEAD bd3251e8e71c67d8f0851ccedc4ea4ae7a12420b
<cbaines>then you can query the data service https://data.qa.guix.gnu.org/revision/bd3251e8e71c67d8f0851ccedc4ea4ae7a12420b
<cbaines>luckily, the data service doesn't need any more information than the commit hash
<civodul>okay
<civodul>yes, that makes sense
<cbaines>civodul, one complication is that the qa-frontpage currently depends on clean comparisons between two revisions (in most cases)
<civodul>how’s that a complication?
<cbaines>so to find out what derivations to submit builds for, it compares the base (as in the merge base) revision for the branch or patch series against the HEAD revision
<cbaines>which depends on both of those revisions being processed by the data service
<civodul>yes
<civodul>but i suspect that’s fine
<civodul>because in practice PRs are against a public branch, which the data service already knows about
<cbaines>maybe, but with the number of unprocessed revisions on the master branch https://data.qa.guix.gnu.org/repository/1/branch/master , I imagine there's going to be quite a few unprocessable Pull Requests
<civodul>hmm ok
<cbaines>but yeah, some should work, and hopefully the coverage of the master branch can improve
<civodul>yeah
<civodul>i wonder what shortcuts we could take/quick hacks we could come up with to have *something* in place soon
<civodul>for now, i need to head home
<civodul>thanks for the discussion! :-)
<cbaines>you're welcome, unfortunately I have to go now as well o/
<civodul>heh ttyl
<vagrantc>so, maybe once every month or two, something decides i should be getting every comment on every pull request and issue in codeberg ...
<vagrantc>i go in, and find that i am marked as "watching" guix/guix ... i mark it as "unwatched" and then wait another month or two
<vagrantc>not sure if there is something going on with the way teams are instantiated and updated, or what ... or if it is just some random bug in codeberg
<vagrantc>it really kills my ability to consensually follow the parts of guix i am actually interested in or working on...
<ieure>vagrantc, It's probably the job that updates Codeberg teams from etc/teams.scm.
<vagrantc>ieure: yeah, that's my hunch ...
<ieure>vagrantc, Not sure if it still works this way, but I believe the initial version removed collaborators and readded them, that's likely resetting notification preferences.
<vagrantc>how often is it run? manually? was it run recently (e.g. in the last 9 or 10 hours)?
<vagrantc>guess i can file an issue ... :)
<euouae>before codeberg, wasn't guix at savannah? what happened
<vagrantc>there was a completely public discussion and entire process followed regarding the switch ...
<vagrantc>i am overall happy with the switch to codeberg, but there are obviously some glitches, as if there were no glitches with savannah :)
<euouae>Maybe Guix was too fast moving for Savannah?
<euouae>I use codeberg for some of my personal projects
<vagrantc>i don't think guix can be accused of being too fast at ... much.
<euouae>oh so they're trying to move away from e-mail?
<euouae>oh no, it's a dual approach, e-mail + pull requests
<euouae>that's nice
<euouae>vagrantc: I do remember reporting bugs that guix people then had to report to savannah people
<vagrantc>well, it's mostly pull requests, email is being phased out (although codeberg does have some email interoperability features)
<vagrantc>still in the transition phase for quite some time ...
<ieure>euouae, Discussion started because, I believe, the email flow was a fairly regular source of friction, especially to new contributors, and it was flagged in the 2024 survey. But during the discussion, Savannah became extremely unreliable, because it was getting DDoSed by malicious so-called "AI" scraper bots, to the point where it was barely functional for Guix users.
<ieure>And the GNU admin team seemed unable to deploy effective countermeasures. There was a solid 3-4 weeks of people coming in here complaining that `guix pull' didn't work, because Savannah was down.
<euouae>the DDoS is madness
<euouae>why is it happening, because GNU is attempting to stop it and the bot users are mad? Or because every code forge is getting hit by such a data siphon?
<euouae>To be fair, the e-mail workflow is elitist
<euouae>The core issue being that there are no good free e-mail providers, and hosting your own is costly in both time and money (renting a domain/server)
<ieure>euouae, The entire open Internet is under attack by these LLM scraper bots.
<euouae>ieure: any technical articles to read on this?
<ieure>euouae, Dozens of such articles, https://blog.xkeeper.net/uncategorized/tcrf-has-been-getting-ddosed/ https://www.akamai.com/blog/security/rise-llm-ai-scrapers-bot-management https://status.sr.ht/issues/2025-03-17-git.sr.ht-llms/
<ieure> https://herman.bearblog.dev/the-great-scrape/
<euouae>isn't it as simple as setting up a data quota per IP?
<identity>also <https://drewdevault.com/2025/03/17/2025-03-17-Stop-externalizing-your-costs-on-me.html>, though drew is the sourcehut guy
<identity>euouae: they will just switch the ip when they run out, and they have lots of them
<ieure>euouae, No, because the LLM companies are using botnets. There's a brisk trade in "residential proxy" services -- companies put proxy malware in free phone apps, then sell access to the customers' network connections.
<euouae>"from a wide variety of IP addresses, almost all Chinese-based:"
<ieure>euouae, https://oxylabs.io/products/residential-proxy-pool https://www.webshare.io/residential-proxy https://soax.com/proxies/residential
<euouae>^ That's why you should ban all Chinese ASNs
<euouae>ieure: ... I had not grokked the scale of stupidity that has overtaken the world
<euouae>I knew about residential malware proxies but I had no idea LLM companies utilize them
<ieure>euouae, https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/
<euouae>depressing
<ieure>euouae, It is very literally destroying the open Internet. I used to like reading technical documentation in eww, but I can't now, because everything -- even some static sites with documentation -- has to deploy defensive measures against these bots which don't work in browsers without JS.
<euouae>well to be fair, it is only destroying HTTP
<identity>only
<euouae>but yes I agree, we're under many attacks. US/EU participate in censorship at alarming levels
<luca>I know a lot of static sites that deploy protections try not to impact non-js clients (such as w3m/git/curl). But misfires happen
<euouae>aside from the LLM madness you just described
<ieure>It is impossible to overstate how awful all this is.
<ieure>My work has a mandate that all programmers use LLMs as much as possible. This is now part of both our hiring process and performance reviews.
<ieure>Other folks in my network have it even worse. One person put up a complete, working, tested PR; they were told to close it and redo the work using an LLM.
<ieure>Another's employer makes them take screenshots or recordings of them using the LLM tools to prove they are.
<ieure>I told our CEO that LLMs were "ethically untenable" and am continuing to not use them. And I definitely couldn't make it through the interview process now. Feels pretty bad.
<euouae>ieure: Yeah. It's silly because LLMs are not a panacea, but as always what do people do? When nuclear energy was harnessed, we started making and stockpiling bombs
<euouae>For my personal woes, as someone with a math phd I can't find employment anywhere, including as a math teacher in secondary education. While partly due to the economy, I'm sure LLMs play a large part to it.
<ieure>euouae, Yeah, between LLMs and the US economy suffering because of the unstable madman in charge, it is a very poor time to be looking for work. My wife also has a Ph.D in math, got laid off in the spring, and is still looking.
<euouae>In a fleeting conversation I had with a manager of some company, he was gloating about his ability to create apps now "I can make anything in an hour" -- completely unaware of how broken those apps are
<euouae>(That is a guy who can't program.)
<ieure>Our CEO spent fifteen minutes of the allhands before last showing off a thing he vibecoded. It helps his kid snipe Pokemon cards off eBay. Yes, this is exactly the same as our ingress system which processes (a redacted but very large number) of events per day in realtime.
<vagrantc>as much as i sympathize and largely agree with most that has been said ... this conversation has drifted a bit off-topic for #guix :)
<euouae>Did you explain to him that his kid is wasting *his* money and simultaneously getting addicted to faux-gambling while robbing himself of his precious childhood time?
<euouae>I suppose that'd be too dark
<euouae>Yes sorry -- I'll shut up now ;p
<Rutherther>anyone familiar with "Samba requires large file support, but not available on this platform: sizeof(off_t) < 8" when cross-compiling to aarch64? like what is large file support and why is it not supported?
<ieure>Rutherther, "large file," not "large file system." It means support for files > 2^32 bytes.
<ieure>It seems like an error in cross-compilation, LFS existed before 64-bit CPUs.
<Rutherther>yeah... I mean it builds with --system=aarch64-linux, so apparently it is supported by aarch64
<ekaitz>it's probably getting some wrong header file, maybe arm 32-bit?
<Rutherther>I am just trying to get guix pack --target=aarch64-linux-gnu working in the end, currently fighting with talloc not cross-building
<Rutherther>(sorry I mean with -RR)
<euouae>it's a kernel flag
<Rutherther>euouae: and why would cross compilation environment not have it?
<euouae>Oh no it's not a kernel flag, my bad. I think it's related to your toolchain (libc, compiler)
<Kabouik>Anyone successfully using Emacs gptel on Guix? I have tried the MELPA version from the guix-emacs channel (unfortunately I need some packages not available in Guix yet, and this leads to conflicts with some Emacs packages available in Guix now, gptel is one of them), but also in a Guix shell: Even using `guix shell emacs-gptel@0.9.8.5 emacs-pgtk -- emacs -Q` to specify only the gptel package for Guix (not the MELPA one) and then `(require 'gptel)` and
<Kabouik> `M-x gptel` gives me this: gptel--handle-wait: Symbol’s function definition is void: gptel-curl-get-response
<euouae>Rutherther: to get 8-byte off_t, you can use -D_FILE_OFFSET_BITS=64 to gcc
<euouae>I don't know if that's what you're looking for. For me it's already 8 bytes, something is off.
<yelninei>Rutherther: How does samba get the sizeof(off_t)? When cross compiling it can't execute a test program, so the check fails/it has to guess. Can you override this maybe?
<euouae>Ah -- good point. In that case, which test is complaining?
<Rutherther>oh, now I get the issue.
<Rutherther>The thing is, I am giving it the --cross-execute flag and I didn't understand what it is, but now that you point out it has to execute something... it actually expects qemu-aarch64 :)
<Rutherther>this kinda sucks tbh
<Rutherther>but there is another switch called --cross-answers that I think could solve this, I presume it is what gives 'answers' to the questions asked by the configure script, such as large file support
<Rutherther>but I suppose this basically depends on the target system :/
<Rutherther>and of course qemu depends on talloc transitively, so --cross-execute wouldn't be a solution even if I wanted to (unless I made talloc bootstrap without cross compilation support)
<Rutherther>well, now I am thinking I will just use --system instead of --target and call it a day xD
<Rutherther> https://wiki.samba.org/index.php/Waf#Using_--cross-answers
<Rutherther>I also see there is waf-build-system without cross compilation support, well I understand why :D. This sounds like a fun challenge, but I will probably leave it for some other time
<euouae>yeah --answers means you can supply the answers yourself manually I think
<Rutherther>yes
<Rutherther>how do other build systems deal with this problem? as in, configure wants to know something where execution will be necessary
<euouae>the cross compiling suite should have the answers to those questions
<euouae>Assuming you're using autotools, <https://www.gnu.org/software/automake/manual/html_node/Cross_002dCompilation.html>, with --host you'd get answers for "sizeof (off_t)" based on *their* supplied answers
<euouae>I don't think it's dynamically checking them, but obviously with QEMU you can dynamically check
<euouae>but it seems that samba is using its own configure/build script based on waf.io
<Rutherther>euouae: I still don't really get it. What command do you execute to get the sizeof answer?
<Kabouik>Re: gptel issue: I opened an issue here: https://github.com/karthink/gptel/issues/1051 You never know, maybe some other Guix user successfully uses gptel and sees that.
<euouae>Rutherther: sizeof() is not a preprocessor macro so you can't resolve it with gcc -E. C build systems will build test programs like `int main() { return sizeof(off_t) == 8; }` and execute them and check their return status.
<Rutherther>euouae: yes and then I am asking how do other build systems than waf know answers to such questions during cross compilation
<euouae>Rutherther: when you're cross-compiling, you can of course build a test program like that, but you can't "execute" it without a CPU emulator or an actual host to run it on
<euouae>Rutherther: like I said, autotools probably provides fixed answers for its configured automake macros
<euouae>e.g. aarch64 bundle of answers for the test macros
<euouae>it also probably skips some tests
<yelninei>Rutherther: For autotools it tries to guess but if you set the cache variable directly then it takes that. e.g. https://codeberg.org/guix/guix/src/branch/master/gnu/packages/text-editors.scm#L911 . But with a nonstandard build system things get more difficult
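The two mechanisms side by side, as sketches: autotools takes a pre-seeded cache variable (as in yelninei's linked example), while samba's waf takes an answers file via the `--cross-answers` flag mentioned above (see the samba wiki page Rutherther linked for the file format). In both cases the values are assertions you make about the target, not something configure can verify:

```shell
# autotools: pre-seed the answer the test binary would have produced
./configure --host=aarch64-linux-gnu ac_cv_sizeof_off_t=8

# samba/waf: an answers file replaces --cross-execute
./configure --cross-compile --cross-answers=aarch64.answers
```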
<Rutherther>I get it now, thanks