IRC channel logs

2023-05-17.log


<gnucode>howdy people!
<gnucode>I was having fun today editing the hurd.texi document. It's pretty fun.
<gnucode>is anyone using lwip to replace pfinet here?
<janneke>apparently our pci-arbiter.static is broken
<janneke>and rumdisk.static too
<janneke>if i copy debian-hurd-20220824.img's /hurd/pci-arbiter.static to /debian/pci-arbiter.static and use that in the grub.cfg it crashes on rumpdisk
<janneke>if i also copy /hurd/rumpdisk.static and use that, it boots
<youpi1>perhaps it's your libpciaccess that needs an update?
<youpi1>we use version 0.17
<youpi1>which doesn't require any hurd-specific patch any more
<janneke>guix has it at 0.16
<youpi1>that's too old
<youpi1>we have fixed many things since then
<janneke>ah great
<janneke>i was hoping to ask for a bisect strategy, but you beat me to it with a great suggestion, thanks!!
<janneke>ACTION goes to update libpciaccess in guix (at least for hurd)
<gnucode>morning friends!
<janneke>youpi1: libpciaccess-0.17 fixed our pci-arbiter!
<youpi1>yay
<janneke>now it seems rumpdisk.static is b0rked :-(
<janneke>i managed to boot debian with the guix pci-arbiter.static, and guix with debian's rumpdisk.static
<janneke>ACTION is hoping for yet another magic guess/spell ...
<janneke>i'm getting this error output, to the bare eye identical on debian and guix alike:
<janneke> https://paste.debian.net/1280488/
<youpi1>janneke: you probably need the https://salsa.debian.org/glibc-team/glibc/-/blob/sid/debian/patches/hurd-i386/local-clock_gettime_MONOTONIC.diff workaround
<janneke>youpi1: oh my, didn't you point me at that before (about 3 years ago?) -- it could be this was dropped, how weird
<janneke>thanks
<janneke>ACTION goes to investigate
<janneke>hmm, the MONOTONIC and centiseconds patches weren't dropped, but forward ported from 2.31
<janneke>ACTION tries a rebuild with the guix patches reverted, and the current debian patches applied
<janneke>oh my, there are some 50 odd patches there...
<janneke>ACTION can imagine upstreaming to glibc can be...hard
<youpi>janneke: it's not so much getting it committed that is hard (I can do that easily), but turning the patches into proper fixes
<janneke>youpi: ah, so it's easy to create "DRAFT" commits, "hey it works", but to turn it into something 100% sane is hard
<janneke>glibc is a pretty serious project...
<youpi>it's the 90%-10% rule, yes
<youpi>getting 90% working is 10% hard, but getting the last 10% working is 90% hard
<youpi>but we do want computers to work 100% of the time
<janneke>makes a lot of sense (me knows it as the 80/20 rule)
<youpi>percentages vary, but that's basically the same :)
<jpoiret>janneke: so do we need some libc changes? I can look into having a glibc just for hurd
<janneke>jpoiret: dunno
<jpoiret>when I tried myself I just used a hack to quickly check if that was the issue causing our hurd to not boot
<janneke>what i'm doing right now, is in cross-glibc revert the monotonic and centiseconds patches we have, and apply the ones youpi linked to
<jpoiret>youpi: do you know if a 2020-ish glibc+kernel headers combo would be able to properly compile on a new hurd? our native-compilation on hurd relies on bootstrap blobs, among which this old glibc, and the binaries it produces seem to crash on our newer hurd
<janneke>our current patches were "adapted" from the debian ones, and they were at the time only necessary for building python
<jpoiret>whereas our cross-compilation uses the newer headers
<youpi>cross-bootstrapping from scratch should be just working fine
<youpi>there's no hidden binary blob
<jpoiret>no I mean, our bootstrap libraries are 2 years old compared to our running Hurd that we're building on, so I was wondering if the mismatch would cause problems down the line
<janneke>jpoiret: note that we boot with rumpdisk just fine now, if we use debian's /hurd/rumpdisk.static
<jpoiret>oh, that's great!
<youpi>building mig/gnumach/glibc/hurd are not supposed to depend on whatever you already have
<janneke>so everything else _seems_ fine, but of course, it's rumpdisk.STATIC, so it could still be a glibc problem...
<jpoiret>youpi: well let's say I have some very old linux kernel headers that aren't up-to-date at all, and I build some native binaries with it, but run them on a newer Linux. They might not work, right?
<jpoiret>esp. given the speed at which Hurd/Gnumach seems to move
<jpoiret>maybe it's not really clear what I'm asking
<janneke>and the error message saying
<janneke>[ 1.0100050] panic: rumpuser_clock_sleep failed with error 22
<janneke>is pretty suspicious wrt those clock patches
<youpi>a newer version of the kernel is not supposed to break older userland binaries
<youpi>linux takes a lot of care about that, and reverts whatever breaks that
<youpi>we also try that in gnumach, though there's much less testing of it
<jpoiret>yes, but what about hurd/gnumach?
<jpoiret>alright, that's what I was wondering
<youpi>see the various pieces of code if (foo() == -1 && errno == EMIG_ID_BOGUS) foo_old()
<jpoiret>I'll see if updating the bootstrap solves the native compilation problems we have
<jpoiret>thanks!
<janneke>using debian salsa's glibc time patches, rumpdisk still crashes, but differently
<janneke>that might be good news?
<janneke>ACTION sends mail...
<janneke>that took me 5h30' to build...
<gnucode>hey hurd people!
<gnucode>I tried a grep -r command in a httpfs node to demonstrate to a friend some of the issues that it has...
<gnucode>well the fsck saved my bacon!
<gnucode>kudos to everyone for making this OS pretty stable!
<janneke>gnucode: +1
<gnucode>:)