IRC channel logs

2023-09-16.log


<gnucode>azert no need to apologize. I thought it was funny!
<nikolar>hello gnucode
<nikolar>or should i say GNUmoon
<gnucode>haha
<gnucode>I am working on some updates to the Hurd manual.
<gnucode>Nothing groundbreaking. But I did copy a section over from the wiki about libnetfs
<nikolar>cool
<janneke>headsup: the "getauxval" crash building gdk-pixbuf, gobject-introspection, etc. was fixed by adding this glibc patch https://salsa.debian.org/glibc-team/glibc/-/blob/5af8e3701c63ad202b652b5051bec592b8385820/debian/patches/hurd-i386/unsubmitted-getaux_at_secure.diff to guix's glibc/hurd-2.37
<janneke>yep, that was it; pretty happy we have gdk-pixbuf now!
<janneke>so, in guix we're just carrying 5 hurd-specific glibc patches now
<janneke>all courtesy of upstream, of course :)
<gnucode>janneke: what is gdk-pixbuf for?
<janneke>gnucode: it's an image library and somehow a dependency for running `guix system reconfigure'
<gnucode>janneke: that's surprising. Are you running Guix Hurd on real hardware?
<janneke>yeah 'twas to me
<janneke>i've got it installed on my x60 but for development i use childhurd VMs
<gnucode>ok. I am running Debian GNU/Hurd on my T43. Works enough to be a daily driver. Been using it for 2 or 3 weeks now.
<janneke>the last missing dependency or so it would seem, and it required a glibc patch, i.e., a world rebuild
<janneke>that's pretty impressive!
<janneke>guess your main priority now is to get a journaling file system?
<gnucode>I honestly feel like what you are doing is more impressive. :) I'm just a dude who bought a cheap computer and set it up.
<janneke>hehe
<janneke>thanks
<janneke>i like to think that we're all in this together, and everyone does what they can
<gnucode>There is a bloke on here who is an undergraduate CS student, who was talking about setting up a logging filesystem for The Hurd. It would be nice to have a slightly better filesystem.
<gnucode>I appreciate that too.
<janneke>yeah
<janneke>i've got no idea how difficult it would be to include/merge/port ext3, or ext4
<gnucode>janneke: there was someone who apparently worked on an ext3 port for his PhD thesis I think...
<gnucode>So there was some work on that area.
<janneke>ahh, good
<janneke>so much going on these days on #hurd, amazing
<nikolar>janneke: that was me, the fs guy
<janneke>nikolar: ext3, how is that going?
<nikolar>i didn't mean ext3, but my own log structured fs
<nikolar>that i am writing for my dissertation
<janneke>ACTION briefly looked at ext2/3/4 when making a patch to embed passive translators
<janneke>ah
<nikolar>though that's definitely an option to look into
<nikolar>i was mostly throwing out an idea because that's what i am working on at the moment
<gnucode>nikolar: link to what you have so far?
<nikolar>nothing specific yet
<nikolar>code is still pretty early
<gnucode>design document?
<gnucode>crayon drawing?
<gnucode>cave painting?
<gnucode>papyrus scroll?
<nikolar>kek, all in my head, sorry
<nikolar>i meant to write the code first
<nikolar>and then do all the documenting
<nikolar>gnucode: ^
<gnucode>nikolar: it'll be better than bcachefs ? :)
<nikolar>can't guarantee that
<nikolar>but considering how jank btrfs is, i could say it will be better than it in some regards lol
<azert>This is the ext3 patch from decades ago https://savannah.gnu.org/task/?5498
<gnucode>haha!
<gnucode>I wonder if we could add ext3 as read-only?
<azert>I think we will eventually need to catch up on ext3/4
<azert>The patch seems unfinished; from what I understood reading the mailing list, part of this work did contribute to the current ext2fs that supports >2gb volumes
<azert>I don’t know how much work it would be to apply this patch, I should try when I find time
<Gooberpatrol66>just throwing this out there as maybe a way to make use of linux drivers/filesystems in hurd without constant maintenance: run the drivers in VMs
<Gooberpatrol66>linux has a framework called "virtio" where driver frontends and backends can run in different VMs
<Gooberpatrol66>it seems like spectrum-os and sel4 are both using it
<nikolar>redox also has plans to implement something like that
<nikolar>pretty useful if you need hardware drivers, but i'd say it's a temporary solution for filesystems
<gnucode>bcachefs can apparently be run on FUSE. We could improve our FUSE implementation.
<nikolar>that would definitely help
<nikolar>also move to fuse 3 too
<Gooberpatrol66>the typical use case is the backend running on the host and frontend running on the guest, but it might be possible to run it "inside out" by passing the PCIe device to the guest with the IOMMU
<Gooberpatrol66>xen also supports it https://wiki.xenproject.org/wiki/Virtio_On_Xen
<nikolar>take a look at this: https://www.redox-os.org/news/rsoc-2023-eny-1/
<nikolar>pretty much the same idea
<nikolar>Gooberpatrol66: ^
<Gooberpatrol66>oh so they ARE running the backend in the guest
<Gooberpatrol66>fantastic, that's exactly what i was looking for
<nikolar>yup
<nikolar>not sure if they've done any work yet
<nikolar>or just plans
<azert>Gooberpatrol66: i think that’s quite similar to rumpdisk as an approach
<azert>the pci-arbiter was made to allow this kind of stuff
<nikolar>what's rumpdisk
<azert>It’s the Hurd sata driver
<nikolar>oh in what way is it similar
<azert>that it’s a stripped version of netbsd running in a userspace process
<nikolar>oh kind of cursed lol
<nikolar>was that easier than to write a sata driver
<azert>Much less cursed than running full virtual machines
<nikolar>guess so
<azert>Yes I think it was straightforward
<nikolar>well there should eventually be a native sata driver
<janneke>it's not gnu software, so getting it to cross-build was quite some work i believe
<nikolar>but good to know that there's already a way to piggyback on other kernels lol
<janneke>and it's still a pretty ugly hack
<nikolar>obviously
<janneke>obviously
<janneke>but it allowed me to install guix/hurd on my x60, so yeah
<janneke>pretty useful hack
<nikolar>as long as it works
<azert>I think rump is the only way to get Hurd on real hw at this point; also rumpnet, rumpusb
<janneke>yay, rumpnet!
<azert>No one is ever going to write native drivers for Hurd in the short timeframe
<nikolar>at least sata is a standard thing
<nikolar>unlike the thousands of network and usb cards/chipsets
<azert>Also usb is standard, but it is such a mess!!
<nikolar>is the programming interface to the hardware standard
<nikolar>for sata it is
<azert>Yeah I’m pretty sure Hurd will end up with a monolithic usb stack with all drivers compiled in
<nikolar>kind of hard to avoid
<azert>Same with rumpnet: a monolithic network driver modules library
<nikolar>yeah got it
<azert>To avoid that one would need to define interfaces and rewrite all drivers on top of those interfaces
<azert>Huge work
<nikolar>i am all for rump like approach when there are many hardware implementations to support
<nikolar>but a sata driver should eventually be written for hurd
<nikolar>obviously when there's someone who volunteers to do it
<janneke>much better than trying to bribe someone into doing it by paying them money
<azert>Right
<nikolar>isn't that just called being paid
<azert>Yeah, by who?
<nikolar>well that's a different thing
<youpi>nikolar: why writing a sata driver when there is one available through rump?
<nikolar>well imagine what the overhead is with a whole other kernel running on your system just to serve as a sata driver
<azert>I wouldn’t make assumptions about that
<nikolar>fair enough i guess
<azert>I think ext3 would be more a win than another sata driver
<nikolar>that is true
<nikolar>i am not saying it's a priority
<nikolar>just something to do in the future
<youpi>nikolar: a driver would have the same overhead
<youpi>"a whole other kernel" doesn't imply overhead
<youpi>a kernel does not bring overhead per se
<youpi>what costs is address space change, typically, and that's what the hurd is willing to pay anyway
<janneke>what if you weigh in code review?
<youpi>what code review?
<janneke>every (useless/accidental) line of code has a cost
<youpi>no
<youpi>when you're not maintaining it, no
<janneke>every line of code you're compiling is a potential security problem
<janneke>in the end, you're responsible for all the source you compile
<janneke>so, less is better
<janneke>for now, i'm pretty happy with rump
<youpi>well, you can just delegate
<youpi>we don't implement openssl ourselves
<azert>Then the Hurd should be sold as the most secure os ever, it has a tiny code base, runs glibc
<youpi>etc.
<youpi>tiny doesn't mean secure
<youpi>and large doesn't mean insecure
<youpi>if you put large code in a sandbox, it can't do much
<janneke>sure, many things aren't black and white
<janneke>i just like less accidental complexity
<nikolar>it's absolutely better to have something rather than nothing
<youpi>security is not necessarily related to complexity
<youpi>but rather about making sure you have barriers where you want them
<youpi>and simplicity of the barriers
<youpi>(there the complexity does matter)
<youpi>but what's between barriers doesn't matter that much
<janneke>"<youpi> security is not necessarily related to complexity"
<janneke>sure, but unnecessarily adding complexity doesn't help either
<janneke>other than "getting stuff to work" right now
<youpi>depends whether somebody actually looks after the complexity
<janneke>and i'm all for cutting such corners initially
<youpi>when delegating the management of the complexity, you have shifted the problem, and voila
<youpi>really, if the complexity was not looked after, I would agree, but here we are talking about code that somebody does maintain
<youpi>and that somebody is not us, so that's all good
<janneke>:)
<janneke>sure, anything that works for us that we don't have to do ourselves right now, is good
<azert>Regarding the cost of additional address space switches and tlb flushes, on 64bit kernels where the address space is huge, could the hurd move to a single address space configuration? Is this feasible on x64 hardware?
<youpi>there's better than that: use the cr3 tagging
<youpi>so that there's actually no tlb flush
<youpi>i386/intel/pmap.h:/* TODO: support PCID */