IRC channel logs
2024-12-11.log
<damo22>it's fine if they all read the same memory
<youpi>using a template that you copy over with rep movs
<damo22>ok, I've not tried that before, I'll look into it
<Pellescours>I just noticed something strange: when I do showtrans /servers/socket/2 it shows me "/hurd/pfinet -6 /servers/socket/26", and when I do showtrans /servers/socket/26 it shows me "/hurd/pfinet -4 /servers/socket/2"; it looks like an inversion
<youpi>each points to the other, so that we have only one running
<solid_black>so what else is missing before we can revert "smp: Create AP processor set and put all APs inside it" and give everyone full SMP?
<damo22>if we had a working rumpnet, we could replace netdde
<damo22>then everything would pretty much work
<damo22>or we might find some other race condition
<damo22>if strncmp('netdde') or something
<solid_black>it would assign itself to a pset containing a single cpu
<damo22>master cpu is in default_pset and everything else is in the slave pset
<youpi>we'd probably want to support the *setaffinity functions in the end, netdde could explicitly call it in its main()
<youpi>just a simple mask will be fine
<damo22>solid_black: I have a branch in my git.zammit.org/hurd-sv.git
<damo22>yeah, but I have been changing it
<damo22>I have a bunch of hot changes I haven't committed yet because I was missing a library
<damo22>I think what is in the branch compiles
<damo22>solid_black: have you tested netdde with your irq patch and smp?
<solid_black>but it might have been for reasons other than netdde
<damo22>master does not have parallel smp init, but it should still be functional because the logical id fix is in
<damo22>perhaps we can test master on a few different configurations
<damo22>solid_black: I just pushed my WIP for rumpnet
<damo22>we need to implement send_packet and receive_packet using bpf
<solid_black>I don't think I understand, why does all of this require bpf at all?
<damo22>because AF_LINK does not support sending any actual packets
<damo22>I asked there and they said nobody wants to touch the old networking code because it works
<damo22>otherwise, we could just read() and write() from the AF_LINK socket
<solid_black>I don't have enough understanding of what AF_LINK is
<solid_black>oh, so it's a way to represent link-layer / eth-level packets?
<solid_black>makes me wonder why we have /dev/eth0 and not /servers/socket/link then
<damo22>you configure the bpf socket to bind the interface you want
<solid_black>I'd really like to see native and hurd-ish (i.e. not borrowed from Linux or BSD) drivers for virtio-based devices, disk and network included
<damo22>or if you were using just AF_LINK, you could send a SIOCSIFFLAGS and populate the name of the ethernet card
<solid_black>(in the context of thinking about memory ballooning,) I wonder if I even understand what a "memory object" is
<damo22>but nobody has time to implement or maintain a new set of drivers
<solid_black>in my mind it's "something that can provide, and eventually receive back, memory pages"
<solid_black>but is it pages as in page contents, to be written onto physical pages allocated for that purpose by the kernel
<solid_black>or is it "new" physical pages, like the device pager does
<solid_black>maybe we could have virtio as a GSoC project or something?
<damo22>I fail to see the benefit of virtio when we have things like ahcisata and rumpnet close to finished
<damo22>do you want to be stuck in qemu forever?
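For reference, the explicit pset binding youpi and damo22 discuss above could, for a privileged task like netdde, look roughly like the sketch below. It assumes the GNU Mach processor-set RPCs (processor_set_default, host_processor_set_priv, task_assign) and the Hurd's get_privileged_ports; header names are approximate and all error handling is omitted.

    #include <mach.h>
    #include <mach/mach_host.h>
    #include <hurd.h>

    /* Sketch: assign the calling task (and its existing threads) to the
       default (master) processor set.  Requires the privileged host port,
       so this only works for a task started as root.  */
    static void
    bind_to_master_pset (void)
    {
      mach_port_t host_priv, device_master;
      processor_set_name_t pset_name;
      processor_set_t pset;

      get_privileged_ports (&host_priv, &device_master);
      processor_set_default (mach_host_self (), &pset_name);
      host_processor_set_priv (host_priv, pset_name, &pset);
      task_assign (mach_task_self (), pset, TRUE);
    }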
<damo22>I want to run hurd on my main driver machine one day
<solid_black>I don't want to be stuck, but given that qemu is, in my estimation, what most people use, it makes sense to support it
<damo22>it's a development tool in my eyes
<solid_black>I don't think these goals are in conflict; your awesome work on bringing up rump doesn't prevent someone else from implementing virtio natively
<damo22>to be able to boot hurd quickly in a vm is great
<damo22>but the end goal for me is to run it on metal
<damo22>virtio doesn't give me anything I don't already have
<solid_black>a (hopefully) reliable storage/network stack where we control and understand all of the code
<solid_black>as opposed to a whole bunch of old Linux/BSD code ported with a bunch of duct tape that nobody understands and wants to touch
<damo22>solid_black: can you boot virtio on metal?
<solid_black>there exist physical devices that conform to virtio, yes
<damo22>but a makefile got broken and I commented it out
<damo22>and yes, we are duct-taping everything together currently with BSD, because we can make use of many years of their driver code
<damo22>it's like transplanting delicate organs into our system
<damo22>but how else are you gonna get a set of lungs?
<solid_black>please don't get me wrong; your work on rump is very much appreciated and needed and useful, and I'm not saying we shouldn't have rump, we should
<solid_black>but it would be nice to have a simpler, more controllable, and transparent native alternative for the now-common and standardized case of running in a VM w/ virtio devices
<solid_black>of course nothing is going to come out of me wanting this unless someone volunteers to make it happen :)
<damo22>it's literally two stubs we need to implement for tx/rx using bpf, plus a tweak to the rumpkernel Debian package to install rumpdev_bpf as part of rumpnet
<damo22>but I can't work on it much this week because I have other commitments
<damo22>librehawk: welcome! out of interest, what hardware are you using for Hurd? you mentioned you are pleased with it?
<azert>I may be wrong, but the Ethernet driver on Hurd is supposed to talk the bpf language
<azert>because the Mach drivers were BSD drivers
<azert>and the Hurd itself provides a libbpf, although the NetBSD one is surely more developed
<azert>in a way, rumpnet would be less duct-taped than netdde
<azert>the thing with virtio is that it's not very interesting in a conceptual sense
<azert>it's a virtual interface, but not one of the cool ones that are used in fancy data centers, where an InfiniBand interface is placed behind an IOMMU and does direct DMA to a virtual machine
<azert>it is a virtual interface like any other virtual interface, perhaps optimized in certain ways to play well with qemu implementation details
<azert>but of course having it supported would be nice
<azert>zero-copy memory transfers between machines on a network
<azert>with that, I don't think that the Hurd can focus on targeting the fancy stuff, since the competition there is impossible. It can focus instead on being the most free solution in a free-software sense
<solid_black>but rather about it being standardized, open, widely used (in virtualization), and presumably simple to implement
<azert>I'd be super curious to see if it is truly simpler to implement
<Pellescours>I see that a lot of methods do not have documentation comments (explaining what they do, the parameters, …). If I wanted to add some (for the Hurd), is there a specific format to apply (Doxygen, Sphinx C Domain (like Linux), …)?
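The "two stubs" damo22 mentions would, in classic BSD bpf terms, amount to something like the sketch below: open a bpf device, bind it to the interface, then write() to transmit and read() to receive link-level frames. This only illustrates the mechanism; inside rumpnet the equivalent calls would go through the rump kernel's own bpf (rumpdev_bpf), and the names send_packet/receive_packet are simply the ones used in the discussion.

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <net/bpf.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    static int bpf_fd = -1;

    /* Open a bpf device and bind it to IFNAME (e.g. "wm0").  */
    static int
    bpf_open (const char *ifname)
    {
      struct ifreq ifr;
      u_int immediate = 1;

      bpf_fd = open ("/dev/bpf", O_RDWR);   /* or /dev/bpf0 without the cloner */
      if (bpf_fd < 0)
        return -1;
      memset (&ifr, 0, sizeof ifr);
      strncpy (ifr.ifr_name, ifname, sizeof ifr.ifr_name - 1);
      if (ioctl (bpf_fd, BIOCSETIF, &ifr) < 0)      /* bind to the interface */
        return -1;
      ioctl (bpf_fd, BIOCIMMEDIATE, &immediate);    /* deliver packets as they arrive */
      return 0;
    }

    /* Transmit one complete link-level frame on the bound interface.  */
    static ssize_t
    send_packet (const void *frame, size_t len)
    {
      return write (bpf_fd, frame, len);
    }

    /* Receive packets; a real implementation must walk the returned buffer,
       which holds one struct bpf_hdr per packet, using BPF_WORDALIGN.  */
    static ssize_t
    receive_packet (void *buf, size_t len)
    {
      return read (bpf_fd, buf, len);
    }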
<Pellescours>for all the libraries' API methods (like, for example, mach-defpager/file_io.h: page_read_file_direct), I don't know if it's better to document in the .h, the .c, or both
<youpi>it's usually documented in the .h file
<youpi>here it looks quite internal
<youpi>when it's really public, it's also documented in doc/hurd.texi
<youpi>and in the case of RPCs, it's documented in the .defs file
<Pellescours>I see, so nothing really tied to the code itself. Thanks
<Pellescours>woohoo, I was finally able to put a breakpoint at the moment where my VM starts to freeze. It's an exception with code 1 (KERN_INVALID_ADDRESS) in a rump thread
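youpi's answer about documenting in the .h file refers to the plain block-comment style already used in Hurd headers rather than any Doxygen/Sphinx markup; the declaration below is a made-up example of that style, not the real page_read_file_direct prototype.

    /* Read SIZE bytes from the paging file FDP starting at OFFSET.  On
       success return 0, store the address of a freshly allocated buffer in
       *ADDR and the number of bytes actually read in *READ_SIZE; otherwise
       return an error code.  */
    int example_page_read_direct (struct file_direct *fdp, off_t offset,
                                  size_t size, void **addr, size_t *read_size);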