IRC channel logs

2025-12-19.log


<gnu_srs1>damo22: What about the drm backend?
<gnu_srs1>The git branch https://git.zammit.org/hurd-sv.git/commit/?h=drm-server and likewise the one for ioctl seem to be the same,
<gnu_srs1>containing no code at all, only plenty of empty function definitions. What about starting smaller?
<gnu_srs1>And no code for a hurd drm translator??
<damo22>gnu_srs1: the first part is figuring out if the API is possible, no point writing a line of code for the translator if the API won't work
<damo22>it is an initial attempt at stubbing out the API, contributions welcome
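
A minimal sketch of what such a stubbed-out API call might look like in C; the names here (drm_device, drm_device_open) are hypothetical and not taken from the actual drm-server branch:

    /* Hypothetical stub: the real entry points in hurd-sv.git's
       drm-server branch may be named and shaped differently. */
    #include <errno.h>

    struct drm_device;

    /* Returns ENOSYS until the API is proven workable; the point of
       the stubbing exercise is the signature, not the body. */
    int
    drm_device_open (struct drm_device **dev, const char *path)
    {
      (void) path;
      *dev = 0;
      return ENOSYS;
    }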
<nexussfan>Surprising there's no rump kernel for that
<nexussfan> https://github.com/rumpkernel/wiki/wiki/Info%3A-Available-rump-kernel-components
<azert>nexussfan: I think we need someone to figure out how to rumpify netbsd modules
<sneek>Welcome back azert, you have 2 messages!
<sneek>azert, youpi says: alternatives wouldn't be enough currently, as you don't want to let both be started, see the contributing page's item about competition for PCI cards
<sneek>azert, youpi says: the command line should rather be compatible yes
<azert>youpi: I thought the pci arbiter was preventing conflicts
<damo22>it probably should, I don't know if it does yet
<azert>the contributing page says mayhem happens
<azert>in the long run, we will need something like Linux VFIO, which literally transfers a whole pci/iommu group to a specific process
<azert>that’s a lot of work
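
For reference, the VFIO flow azert mentions hands a whole IOMMU group to one process roughly like this; group "26" and the device address "0000:06:0d.0" are made-up examples, and error handling is omitted:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    int
    main (void)
    {
      /* One container per IOMMU context; whole groups then join it. */
      int container = open ("/dev/vfio/vfio", O_RDWR);
      int group = open ("/dev/vfio/26", O_RDWR);

      /* The entire group changes hands at once, never a lone device. */
      ioctl (group, VFIO_GROUP_SET_CONTAINER, &container);
      ioctl (container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

      /* From here this process owns the device's BARs, config space
         and interrupts through the returned fd. */
      int device = ioctl (group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");
      return device < 0;
    }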
<azert>back to netbsd modules, I’ve read that they are normal elf files and can be used by rumpkernels even in binary format
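
A rump kernel client boots whichever NetBSD components were linked into it with one call; a minimal sketch (link against -lrump plus the needed driver component libraries), noting that which components a given binary module would need is exactly the open question:

    #include <rump/rump.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* Boots the NetBSD kernel components linked into this binary. */
      if (rump_init () != 0)
        {
          fprintf (stderr, "rump kernel failed to boot\n");
          return 1;
        }
      printf ("rump kernel booted\n");
      return 0;
    }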
<youpi>the pci arbiter currently prevents concurrent access to the pci config space
<youpi>it doesn't currently prevent concurrent access to the pci resources (mmio registers etc.)
<azert>youpi: the pci arbiter could implement a way to share locks on pci groups between subhurds. At least to prevent mayhem between trusted servers
<youpi>yes
<youpi>that's what I mean in the contributing page
<youpi>groups, or cards
<youpi>(functions, actually)
<azert>I think that looking forward, you might want to adopt the iommu groups concept directly. There is a reason why the iommu doesn't work at the device level: in PCI Express, devices can speak directly to each other. It makes little sense to separate them at the driver level
<azert>but this is just a detail, in the sense that at this point any kind of locking would be an improvement of course
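
A minimal sketch of what per-function claims in the arbiter could look like; none of these names exist in the real pci-arbiter source, and a real version would sit behind its RPC interface:

    #include <pthread.h>

    struct func_claim
    {
      int bus, dev, func;           /* PCI B/D/F of a claimed function */
    };

    #define MAX_CLAIMS 64
    static struct func_claim claims[MAX_CLAIMS];
    static int nclaims;
    static pthread_mutex_t claims_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Grant exclusive access to one PCI function: 0 on success, -1 if
       another client already holds it or the table is full. */
    int
    pci_func_claim (int bus, int dev, int func)
    {
      int ret = -1;
      pthread_mutex_lock (&claims_lock);
      for (int i = 0; i < nclaims; i++)
        if (claims[i].bus == bus && claims[i].dev == dev
            && claims[i].func == func)
          goto out;                 /* someone already owns it */
      if (nclaims < MAX_CLAIMS)
        {
          claims[nclaims++] = (struct func_claim) { bus, dev, func };
          ret = 0;
        }
     out:
      pthread_mutex_unlock (&claims_lock);
      return ret;
    }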
<Pellescours>sneek: later tell youpi hi youpi, on gnumach the commit 4e19e045e5c9e2207c6dd2965180d55d1a8c73e9 is preventing me from booting in a VM (it tries to access memory address 0x000000). Reverting this patch makes hurd boot correctly
<sneek>Okay.
<azert>this is a funny video showing and explaining what kind of mess it can be in Linux with pci groups https://m.youtube.com/watch?v=qQiMMeVNw-o
<azert>as long as the Hurd doesn’t implement any of this, locking can be at any level of convenience