IRC channel logs

2024-12-11.log


<damo22>its fine if they all read the same memory
<damo22>but not if they write something
<youpi>yes you can use an array
<youpi>using a template that you copy over with rep movs
<damo22>ok ive not tried that before, i'll look into it
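
A minimal C sketch of the "array plus a template copied over with rep movs" idea youpi describes; struct ap_template, MAX_CPUS and init_per_cpu_copies are made-up names for illustration, not gnumach code. Each CPU gets its own writable copy of a read-only template, so the APs only ever share read access:

    /* Sketch: give every AP a private, writable copy of a shared
       template so the APs never write to the same memory.  */
    #include <string.h>

    #define MAX_CPUS 8

    struct ap_template {
        /* whatever per-CPU state the AP needs to scribble on */
        unsigned long stack_top;
        unsigned long flags;
    };

    static const struct ap_template ap_proto = { 0, 0 };
    static struct ap_template per_cpu[MAX_CPUS];

    void
    init_per_cpu_copies (int ncpus)
    {
      for (int cpu = 0; cpu < ncpus && cpu < MAX_CPUS; cpu++)
        /* A fixed-size memcpy like this is typically lowered to rep movs.  */
        memcpy (&per_cpu[cpu], &ap_proto, sizeof ap_proto);
    }
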
<Pellescours>I just noticed something strange: when I do showtrans /servers/socket/2 it shows me /hurd/pfinet -6 /servers/socket/26, and when I do showtrans /servers/socket/26 it shows me /hurd/pfinet -4 /servers/socket/2; it looks like an inversion
<youpi>it's expected
<youpi>each points to the other, so that we have only one running
<Pellescours>Ahh
<solid_black>hi
<Pellescours>hi
<ZhaoM>hi
<damo22>hi
<solid_black>so what else is missing before we can revert "smp: Create AP processor set and put all APs inside it" and give everyone full smp?
<damo22>if we had a working rumpnet, we could replace netdde
<damo22>then everything would pretty much work
<damo22>or we might find some other race condition
<damo22>how did you go with user irq?
<solid_black>yesterday's fix is in master
<damo22>lovely
<solid_black>again, could we just pin netdde to a single core?
<solid_black>leaving everything else to run on full smp
<damo22>yeah i guess but how
<damo22>a workaround in gnumach?
<damo22>it would be ugly
<damo22>if strncmp('netdde') or something
<solid_black>no, just in netdde's own startup sequence
<solid_black>it would assign itself to a pset containing a single cpu
<damo22>i see, but we dont have that
<damo22>master cpu is in default_pset and everything else is in slave pset
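
A rough sketch of what "assign itself to a pset containing a single cpu" could look like in a translator's startup code, using Mach's processor-set RPCs. This assumes gnumach exposes working processor_set_create/host_processors/processor_assign/task_assign to user tasks (traditionally gated behind the MACH_HOST configuration), and error handling is minimal:

    /* Hypothetical: pin the current task to a pset holding only one CPU.  */
    #include <hurd.h>
    #include <mach.h>
    #include <mach/mach_host.h>

    static kern_return_t
    pin_self_to_one_cpu (void)
    {
      mach_port_t host_priv;
      processor_array_t procs;
      mach_msg_type_number_t nprocs;
      processor_set_t pset, pset_name;
      kern_return_t kr;

      /* Listing processors needs the privileged host port (i.e. root).  */
      if (get_privileged_ports (&host_priv, NULL))
        return KERN_FAILURE;

      kr = host_processors (host_priv, &procs, &nprocs);
      if (kr)
        return kr;

      kr = processor_set_create (mach_host_self (), &pset, &pset_name);
      if (kr)
        return kr;

      /* Move one CPU into the new set, then move this whole task there.  */
      kr = processor_assign (procs[0], pset, TRUE);
      if (!kr)
        kr = task_assign (mach_task_self (), pset, TRUE);
      return kr;
    }
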
<youpi>we'd probably want to support the *setaffinity functions in the end, netdde could explicitly call it in its main()
<damo22>like cgroups?
<damo22>can we make it a simple mask?
<youpi>just a simple mask will be fine
<youpi>see pthread_setaffinity
<solid_black>how close is rumpnet to working?
<solid_black>and can I help with that?
<youpi>pthread_setaffinity_np
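
For comparison, the glibc call youpi names is not yet supported on the Hurd (that is the point of the remark), but the "simple mask" use netdde's main() could eventually make of it would look roughly like this; pinning to CPU 0 is just an example:

    /* Hypothetical: pin the calling thread to CPU 0 with the glibc API.
       On GNU/Hurd this only becomes meaningful once the *setaffinity
       functions are actually implemented.  */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    static int
    pin_to_cpu0 (void)
    {
      cpu_set_t set;

      CPU_ZERO (&set);
      CPU_SET (0, &set);        /* the simple mask: just CPU 0 */
      return pthread_setaffinity_np (pthread_self (), sizeof set, &set);
    }
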
<damo22>solid_black: i have a branch in my git.zammit.org/hurd-sv.git
<damo22>i think its called rumpnet
<damo22>let me see if i pushed it
<solid_black>"rumpnet: Add device translator for (Intel) NICs"
<damo22>yeah but i have been changing it
<damo22>i have a bunch of hot changes i havent committed yet because i was missing a library
<damo22>i think what is in the branch compiles
<damo22>but it would not work
<damo22>we need netbsd's bpf
<damo22>solid_black: have you tested netdde with your irq patch and smp?
<damo22>maybe it fixed it?
<damo22>i havent tried yet
<solid_black>overall networking wasn't working for me
<solid_black>but it might have been for reasons other than netdde
<damo22>master does not have parallel smp init but it should still be functional because the logical id fix is in
<damo22>perhaps we can test master on a few different configurations
<damo22>solid_black: i just pushed my WIP for rumpnet
<damo22>we need to implement send_packet and receive_packet using bpf
<solid_black>ACTION looks
<solid_black>I don't think I understand, why does all of this require bpf at all?
<damo22>because AF_LINK does not support sending any actual packets
<damo22>its not implemented in netbsd
<damo22>i asked there and they said nobody wants to touch the old networking code because it works
<damo22>otherwise, we could just read() and write() from the AF_LINK socket
<solid_black>I don't have enough understanding of what AF_LINK is
<solid_black>ACTION searches online
<solid_black>oh so it's a way to represent link-layer / eth-level packets?
<damo22>yes
<solid_black>that makes sense
<solid_black>makes me wonder why we have /dev/eth0 and not /servers/socket/link then
<solid_black>because there may be multiple links? duh
<solid_black>how does that work then
<damo22>you configure the bpf socket to bind the interface you want
<damo22>i think something like that
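
A hedged sketch of what the two missing tx/rx stubs could do with NetBSD's bpf, written with plain syscall names (inside a rump client these would be the rump_sys_* equivalents). Interface selection and buffer handling are simplified, and send_packet/receive_packet here only mirror the stub names mentioned above, not any existing API:

    /* Sketch of tx/rx over NetBSD bpf: open a bpf device, bind it to the
       NIC, then write() to send and read() to receive link-level frames.
       Error handling, BIOCGBLEN buffer sizing and the bpf_hdr parsing on
       the read path are omitted.  */
    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <net/bpf.h>
    #include <net/if.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    static int
    open_bpf_on (const char *ifname)
    {
      struct ifreq ifr;
      int fd = open ("/dev/bpf", O_RDWR);
      if (fd < 0)
        return -1;

      memset (&ifr, 0, sizeof ifr);
      strncpy (ifr.ifr_name, ifname, sizeof ifr.ifr_name - 1);
      if (ioctl (fd, BIOCSETIF, &ifr) < 0)   /* bind to the interface */
        {
          close (fd);
          return -1;
        }
      return fd;
    }

    static ssize_t
    send_packet (int fd, const void *frame, size_t len)
    {
      /* A write on a bound bpf descriptor transmits one frame.  */
      return write (fd, frame, len);
    }

    static ssize_t
    receive_packet (int fd, void *buf, size_t len)
    {
      /* Each read returns one or more packets, each prefixed with a
         struct bpf_hdr that a real implementation has to walk.  */
      return read (fd, buf, len);
    }
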
<solid_black>I'd really like to see native and hurd-ish (i.e. not borrowed from linux or bsd) drivers for virtio-based devices, disk and network included
<solid_black>but then I see in https://www.gnu.org/software/hurd/open_issues/virtio.html youpi said that there's not much point to it
<damo22>or if you were using just AF_LINK, you could send a SIOCSIFFLAGS and populate the name of the ethernet card
<damo22>i think
<damo22>ifr_name
<damo22>solid_black: you are a purist!
<solid_black>(in the context of thinking about memory ballooning) I wonder if I even understand what a "memory object" is
<damo22>but nobody has time to implement or maintain a new set of drivers
<solid_black>in my mind it's "something that can provide, and eventually receive back, memory pages"
<solid_black>but is it pages as in page contents, to be written onto physical pages allocated for that purpose by the kernel
<solid_black>or is it "new" physical pages, like the device pager does
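
One concrete reference point for that question is the Hurd's libpager interface: the pager hands back page contents as ordinary (often freshly vm_allocate'd) memory rather than picking physical pages, and the device pager is the special case that maps specific device/physical pages. A minimal sketch of roughly the callback a libpager user implements; the zero-fill behaviour is just an example:

    /* Sketch of a libpager backend: the kernel asks for the contents of
       one page; we hand back a buffer, not a physical page.  */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <hurd/pager.h>
    #include <mach.h>

    error_t
    pager_read_page (struct user_pager_info *upi, vm_offset_t page,
                     vm_address_t *buf, int *write_lock)
    {
      (void) upi;
      (void) page;
      /* Provide a freshly allocated, zero-filled page as its contents.  */
      error_t err = vm_allocate (mach_task_self (), buf, vm_page_size, 1);
      if (err)
        return err;
      *write_lock = 0;   /* the client may write to this page */
      return 0;
    }
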
<solid_black>maybe we could have virtio as a GSoC project or something?
<damo22>i fail to see the benefit of virtio when we have things like ahcisata and rumpnet close to finished
<solid_black>(what's ahcisata?)
<damo22>do you want to be stuck in qemu forever?
<damo22>i want to run hurd on my main driver machine one day
<solid_black>I don't want to be stuck, but given that qemu is, in my estimation, what most people use, it makes sense to support it
<damo22>its a development tool in my eyes
<solid_black>I don't think these goals are in conflict; your awesome work on bringing up rump doesn't prevent someone else from implementing virtio natively
<damo22>to be able to boot hurd quickly in a vm is great
<damo22>but the end goal for me is to run it on metal
<damo22>virtio doesnt give me anything i dont already have
<damo22>so im not motivated to fix that
<solid_black>a (hopefully) reliable storage/network stack where we control and understand all of the code
<solid_black>as opposed to a whole bunch of old linux/bsd code ported with a bunch of duct tape that nobody understands or wants to touch
<solid_black>perhaps I'm spoiled by Serenity
<damo22>solid_black: can you boot virtio on metal?
<damo22>i think you need a host right?
<solid_black>there exist physical devices that conform to virtio, yes
<solid_black>but that wouldn't be the focus at all
<solid_black>again, these are not either/or
<solid_black>we could have rump *and* native virtio
<damo22>right
<damo22>rump has virtio disk
<damo22>we used to compile that
<damo22>but a makefile got broken and i commented it out
<damo22>im pretty sure it used to work
<damo22>check rumpkernel git history
<damo22>and yes, we are duct-taping everything together currently with BSD, because we can make use of many years of their driver code
<damo22>its like transplanting delicate organs into our system
<damo22>but how else you gonna get a set of lungs?
<solid_black>please don't get me wrong; your work on rump is very much appreciated and needed and useful, and I'm not saying we shouldn't have rump, we should
<solid_black>but it would be nice to have a simpler, more controllable, and transparent, native alternative for the now-common and standardized case of running in a VM w/ virtio devices
<damo22>ok
<solid_black>of course nothing is going to come out of me wanting this unless someone volunteers to make it happen :)
<solid_black>so let's push rump forward for now indeed
<damo22>its literally two stubs we need to implement for tx/rx using bpf, and tweak rumpkernel debian package to install rumpdev_bpf as part of rumpnet
<damo22>but i cant work on it much this week because i have other commitments
<damo22>librehawk: welcome, out of interest, what hardware are you using for Hurd? you mentioned you are pleased with it?
<librehawk>32-bit Debian Hurd on an old HP Intel machine
<damo22>cool
<librehawk>Waiting for x64 to stabilize
<damo22>yes, it will soon i think
<azert>damo22, solid_black: I may be wrong, but the Ethernet driver on Hurd is supposed to talk the bpf language
<azert>because the Mach drivers were bsd drivers
<azert>and the Hurd itself provides a libbpf, although the netbsd one is surely more developed
<azert>in a way, rumpnet would be less duct-taped than netdde
<azert>the thing with virtio is that it’s not very interesting in a conceptual sense
<azert>it’s a virtual interface, but not one of the cool ones that are used in fancy data centers, where an infiniband interface is placed behind an iommu and does direct dma to a virtual machine
<azert>it is a virtual interface like any other virtual interface, perhaps optimized in certain ways to play well with qemu implementation details
<azert>but of course having it supported would be nice
<azert>In the fancy world there is even a thing called rdma https://en.m.wikipedia.org/wiki/Remote_direct_memory_access
<azert>zero copy memory transfers between machines on a network
<azert>that said, I don’t think that the Hurd can focus on targeting the fancy stuff, since the competition there is impossible. It can focus instead on being the most free solution in a free-software sense
<solid_black>I don't really care about speed that much even
<solid_black>but rather about it being standardized, open, widely used (in virtualization), and presumably simple to implement
<azert>I’d be super curious to see if it is truly simpler to implement
<azert>not convinced yet
<Pellescours>I see that a lot of methods do not have documentation comments (explaining what they do, the parameters, …). If I wanted to add some (for hurd), is there a specific format to apply (doxygen, sphinx C Domain (like linux), …)?
<youpi>what do you call "method" ?
<Pellescours>all the libraries’ API methods (like for example mach-defpager/file_io.h: page_read_file_direct); I don’t know if it’s better to document them in the .h, the .c, or both
<youpi>it's usually documented in the .h file
<youpi>here it looks quite internal
<youpi>when it's really public, it's also documented in doc/hurd.texi
<Pellescours>yeah, the example I gave is not a good example
<youpi>and in the case of RPCs, it's documented in the .defs file
<Pellescours>I see, so nothing really tied to the code itself. Thanks
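
On the format question, Hurd headers mostly use plain GNU-style block comments above each declaration rather than doxygen or sphinx markup. A hypothetical example of that convention (the type and function below are invented purely for illustration):

    #define _GNU_SOURCE
    #include <errno.h>       /* error_t */
    #include <sys/types.h>   /* off_t, size_t */

    struct example_file;     /* hypothetical type, for illustration only */

    /* Read SIZE bytes from FILE starting at OFFSET.  On success, return 0
       and store a freshly allocated buffer in *DATA and the number of
       bytes actually read in *DATA_LEN; on failure, return an error code.  */
    error_t example_read_file (struct example_file *file, off_t offset,
                               size_t size, void **data, size_t *data_len);
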
<Pellescours>woohoo, I was finally able to put a breakpoint at the moment where my VM starts to freeze. It’s an exception with code 1 (KERN_INVALID_ADDRESS) in a rump thread