IRC channel logs

2023-07-21.log


<damo22>solid_black: welcome back
<solid_black>hello!
<damo22>i think i have done gnumach io port locking, and pciaccess, but hurd part needs work and then to merge it needs a rebuild of glibc because of hurduser
<damo22>why cant libhurduser be part of the hurd package?
<solid_black>I don't think I understand enough of this to do a review, but I'd still like to see the patch if it's available anywhere
<damo22>ok i can push to my repos
<solid_black>glibc needs to use the Hurd RPCs (and implement some, too), and glibc cannot depend on the Hurd package because the Hurd package depends on glibc
<damo22>lol ok
<solid_black>as things currently stand, glibc depends on the Hurd *headers* (including mig defs), but not any Hurd binaries
<solid_black>still, the cross build process is quite convoluted
<solid_black>I posted about it somewhere...
<damo22>yes
<damo22>its crazy
<solid_black> https://floss.social/@bugaevc/109383703992754691
<jpoiret>the manual patching of the build system that's needed to bootstrap everything is a bit suboptimal
<damo22>what if you guys submit patches upstream to glibc to add a build target to copy the headers or whatever is needed
<damo22>solid_black: see http://git.zammit.org/{libpciaccess.git,gnumach.git} on fix-ioperm branches
<Arsen>fwiw, it is possible to get a header-only build: https://gitweb.gentoo.org/repo/gentoo.git/tree/sys-libs/glibc/glibc-2.37-r3.ebuild#n1075
<Arsen>and there's an install-headers target: https://gitweb.gentoo.org/repo/gentoo.git/tree/sys-libs/glibc/glibc-2.37-r3.ebuild#n1496
<solid_black>damo22: yes, glibc already has 'make install-headers' (except that you still need to build and install more stuff from it manually, see my thread I linked above)
<jpoiret>I'm thinking more about Hurd's header generation
<jpoiret>you can't even pass configure without a toolchain
<solid_black>to generate MIG headers, you need MIG, sure
<solid_black>and to build MIG you need Mach headers
<solid_black>and so on
<damo22>but mig is invoked with gcc -xE or whatever
<jpoiret>well here it shouldn't, otherwise you need very ugly hacks to make that work when bootstrapping
<damo22>indeed
<solid_black>Arsen: for a second there I thought you were talking about Gentoo on Hurd, and got excited :|
<Arsen>haven't had time to explore that yet
<Arsen>I'd like to do it one day (and I'm not alone) and I'd like to port glibc to yet another microkernel
<solid_black>I don't know much about Gentoo, but I now do have experience with cross-compiling Hurd (Mach, glibc, etc), so happy to help with Gentoo GNU/Hurd in any way I can
<solid_black>also if anyone wants to revive Arch Hurd, that'd be so cool too
<solid_black>we need more Hurd distros
<damo22>lets get the core working first with more drivers
<damo22>once it can boot on native hw im sure youll get people flocking in to help
<damo22>solid_black: im stuck with SMP, the locking in gnumach causes massive slowdown
<solid_black>the locking in gnumach is very inefficient, for one thing
<solid_black>like they lock/unlock just to set a single bit flag
<solid_black>which can be done with atomics
<solid_black>guess atomics weren't a thing when Mach was developed
<solid_black>(but then how did they implement locks?)
<damo22>double loop?
<solid_black>what's that?
<solid_black> http://git.zammit.org/gnumach.git 404s for me, am I looking in the wrong place?
<damo22>gnumach-sv.git
<solid_black>ah
<solid_black>/* Only ranges that are occupied by our task may be released */ return KERN_PROTECTION_FAILURE;
<solid_black>KERN_INVALID_ARGUMENT rather?
<damo22>#define _simple_lock(l) \
<damo22> ({ \
<damo22> while(_simple_lock_xchg_(l, 1)) \
<damo22> while (*(volatile int *)&(l)->lock_data) \
<damo22> cpu_pause(); \
<damo22> 0; \
<damo22> })
<solid_black>_simple_lock_xchg_ is the atomic there
<solid_black>also what? cpu_pause, so it's not even a sleeping lock?
<solid_black>no wonder everything's slow then
<damo22>cpu pause === asm("pause":::"memory")
<solid_black>yes, but it doesn't context-switch to another thread
<solid_black>i.e. it's a spinlock. not an actual sleeping lock
<damo22>yes
<solid_black>=> slow
<damo22>everything is a spinlock
<damo22>in mach
<solid_black>pci_device_hurd_open_device_io leaks io_perm if __i386_io_perm_modify fails
<solid_black>also you should not use the __ prefixed symbols from outside of glibc, they're glibc-private
<solid_black>so just i386_io_perm_modify
<damo22>solid_black: no it has problems if you define the same symbol
<damo22>we had a symbol clash in rumpdisk
<solid_black>yes, that is a problem
<damo22>its such a big library, we had to use the __ version
<solid_black>and glibc protects itself by using the __ symbols internally
<solid_black>but others don't get to
<damo22>too bad
<damo22>:P
<solid_black>:D
<solid_black>you're still forgetting to check mach_port_deallocate return value
<solid_black>and I guess that of other rpcs too
<damo22>youre reading the code too literally
<damo22>i mean to look at the overall idea
<damo22>i havent finished
<solid_black>I don't understand enough about the I/O ports or the overall idea, so I can only nitpick on the implementation :)
<damo22>what dont you understand?
<damo22>io ports are from 0 to 0xffff
<damo22>they are attached to certain hardware
<damo22>assigned by the bios i think
<Arsen>not even
<Arsen>they're a hack on the 8086 to cheaply double the address space
<Arsen>it's literally just a memory IO bus with no instructions to access it outside of in/out
<Arsen>they're assigned as wired
<solid_black>interesting; that I didn't know
<damo22>Arsen: are you sure? i think coreboot can remap io ports to anything
<Arsen>on modern hw, possibly
<Arsen>Linux can too, as well as any other OS, via MMIO
<damo22>ok
<damo22>so anyway
<damo22>you cant share the same io ports among more than one process or they will interfere with each others hardware access
<damo22>like one process might be doing an input or output while the other resets the device
<damo22>and all hell breaks loose
<damo22>i cant think of a way io ports could be shared among users
<damo22>Arsen: do you agree?
<Arsen>yeah, I don't think the devices on the other side would appreciate multiple writers
<solid_black>all that I mostly understand
<solid_black>and you want gnumach (and pci-arbiter?) to hand out I/O port ranges to whoever asks first
<damo22>yes
<solid_black>and deny getting the same range if it's already taken
<damo22>yes
<damo22>but it can be released
<damo22>if the device is no longer being used
<solid_black>is __pci_request_io_ports an RPC to pci-arbiter?
<damo22>yes
<solid_black>what are BARs?
<damo22>base address register
<solid_black>is that the same as an I/O port number?
<damo22>each pci device has up to 6 allocated memory io ranges
<damo22>no
<damo22>bar is a number from 0-5
<solid_black>so like x86 segment registers?
<damo22>if you do lspci -vv
<damo22>you should see the ranges
<solid_black>so is_legacy is whether the port range has been requested directly from Mach, or from the pci-arbiter?
<solid_black>what does this actually influence?
<damo22>is_legacy is a libpciaccess artifact
<solid_black>sergey@hurdle:~$ lspci -vv
<solid_black>sergey@hurdle:~$
<solid_black>nothing :|
<damo22>you need to be root
<damo22>i think
<solid_black>uh
<solid_black>why doesn't it print an error then?
<damo22>dont know
<solid_black>oh yeah root works, pretty cool
<damo22>Region X: Memory at blah
<damo22>Region Y: I/O ports at blah
<damo22>these are mirrored at /servers/bus/pci/BUS/DEV/FUN/regionX
<damo22>in fact you can read/write the regions by opening the file nodes
<damo22>but io regions should not be exposed this way
<damo22>solid_black: why do we need to create directories and/or file nodes when installing translators? cant the caller create all the parent directories up to the node and then install the translator there?
<solid_black>damo22: what do you mean, in what context? are you talking about the new bootstrap process? just Hurd in general?
<solid_black>who's "we" and who's "the caller" in this context?
<damo22>both
<solid_black>are you just asking why settrans(1) doesn't run mkdir -p itself?
<damo22>maybe yes
<solid_black>well, uh, it just doesn't
<solid_black>none of the similar Unix tools do
<solid_black>for instance touch(1) doesn't create the whole directory stack below the file
<solid_black>nor does ln(1), or cp(1), or mv(1), ...
<damo22>fair
<solid_black>not even mkdir(1), unless you say -p
<damo22>but i guess the new bootstrap thingy could though
<solid_black>it would probably still require an explicit mkdir in the boot script, but maybe not
<solid_black>I'm not decided yet
<damo22>it would be good if the boot script had nothing in it except what was strictly required
<damo22>so short in fact, that you could remember it and type it if needed
<damo22>that was one reason why i started writing that hurdhelper thing, to shorten the boot script
<solid_black>it is supposed to be very hackable, unlike the current boot process
<solid_black>but keeping the script short is not an explicit goal that I have
<solid_black>that being said, maybe we could imply the mkdir's indeed
<damo22>cant it just boot by default with a sane set of servers?
<solid_black>but what's a sane set?
<solid_black>and where do you boot from?
<damo22>root=
<solid_black>I guess if you specify root=wd0s1 or something on the kernel cmdline, we could work it out
<solid_black>but imagine it's a system that boots from nfs, over network
<damo22>exactly
<damo22>i would consider that non-standard
<solid_black>how can you synthesize a sane configuration for that automatically?
<solid_black>or, I'd really like to get virtio + 9pfs working, to be able to use qemu's feature where it exposes a host directory as a 9p share
<solid_black>(speaking of which, do you happen to be hardware-savvy enough to hack on a virtiofs driver?)
<damo22>it used to work
<solid_black>and so it'd be nice to enable booting from 9pfs root
<solid_black>but again that's just not something that you can autodetect sanely
<damo22>virtio is supported in rump but something broke in the makefiles so we disabled it
<solid_black>as I understand it (but the usual disclaimer about me not being a hardware person applies), virtio is not a single device, it's more of a class of devices that are structured in a similar way
<solid_black>so you'd still have to write a driver for virtiofs specifically
<damo22>not really, the virtio disk driver can link to rumpdisk
<damo22>as a drop in replacement for ahci
<solid_black>it's not a disk driver that I'm talking about
<solid_black> https://wiki.qemu.org/Documentation/9psetup
<damo22>yes you boot qemu with a virtio disk
<damo22>and rumpdisk can detect it
<solid_black>no, I don't think you understand
<solid_black>this is not a virtio disk, as in /dev/vda
<solid_black>it's a different virtio device, more like a socket than a block device
<solid_black>that speaks the 9p protocol
<solid_black>and you can mount an fs from it
<damo22>why do you need that
<solid_black>because qemu has this neat feature (that I linked to above), where it exposes a *host's* directory as this virtio/9p mount to the guest
<solid_black>so you get shared folders
<solid_black>moreover, you could even try to *boot* from this 9p device as a root; this way you don't even have a separate disk for your Hurd VM at all, its contents are just a directory on the host
<solid_black>qemu has docs about doing this for linux: https://wiki.qemu.org/Documentation/9p_root_fs
<damo22>it is exposed as /dev/vda
<solid_black>no, that is a different thing
<damo22>see step 8 of the example
<damo22>-append "console=ttyS0 root=/dev/vda"
<solid_black>they're doing that to boot the installer (from disk)
<damo22>ah ok
<solid_black>-device virtio-9p-pci,id=fsRoot,fsdev=fsdev-fsRoot,mount_tag=fsRoot <--- this
<solid_black>-append 'root=fsRoot rw rootfstype=9p rootflags=trans=virtio,version=9p2000.L,msize=5000000,cache=mmap,posixacl console=ttyS0'
<solid_black>the Hurd version of this would be more like
<damo22>good luck i need to sleep
<solid_black>settrans /dev/virtio-9p /hurd/virtio-9p # fictional virtio-9p translator
<solid_black>settrans / /hurd/9pfs --aname fsRoot /dev/virtio-9p
<Guest60>solid_black: you should try this with rump https://man.netbsd.org/vio9p.4
<Guest60> https://github.com/NetBSD/src/blob/trunk/sys/dev/pci/vio9p.c
<solid_black>Guest60: ah great, so it might just be a matter of enabling it in rump? cc damo22
<janneke>building gobject-introspection on Guix fails with this error
<janneke>(process:11919): GLib-ERROR **: 11:15:55.110: getauxval () failed: No such file or directory
<janneke>=> https://lists.gnu.org/archive/html/guix-patches/2023-07/msg00746.html
<janneke>ring any bells, any ideas?
<Guest60>maybe it should go with rumpdisk, maybe there should be a rumpvirtio since it’s a different controller, like for rumpusbdisk.