IRC channel logs
2025-06-10.log
<diegonc>I was just wondering whether it makes sense to place mkfs and friends in /usr/sbin or not, since a non-root user is allowed to make a filesystem out of a file he owns
<diegonc>not something critical, I can run them with the full path :-)
<damo22>azert: if the driver code allocates a dma buffer to hold the packets, the userspace code could read the packet straight from the dma buffer, but how would it know where to find the address? that's why i call rump_sys_read()
<damo22>youpi: do i need to do a trick with faulting in the memory pages?
<azert>damo22: did you try to just rump_sys_read the packet?
<azert>there is a good chance that it's your code zeroing the buffer
<azert>I'm not sure you can do what you want to do, since that would likely require hooking into the bpf callbacks, which imho is a bad idea
<damo22>azert: that is what i am doing, i am reading the packet buffer, but you can't just read one packet
<damo22>bpf forces you to read the entire bpf buffer
<damo22>which may contain more than one packet
<damo22>so i thought i could allocate a contiguous region of memory for the entire bpf buffer, and read it entirely, but copy each packet out of there as a received frame one by one, reusing both buffers for each read
<azeem>does the Hurd do something different with fcntl(fd, F_GETFL) for directories compared to Linux?
<azeem>would it make sense to test for O_RDONLY instead of 0 here?
<youpi>the standard says that you'd have to & O_ACCMODE to get the access mode
<youpi>and then you can compare against O_RDWR and against O_WRONLY
<youpi>that'll work both with 0, 1, 2 (linux) and 1, 2, 3 (hurd)
<azeem>it is only for assert-enabled builds, so it went unnoticed until now because the Debian package builds aren't usually assert-enabled
<azeem>btw, the Postgres test suite crashes my vm every once in a while
<youpi>no, better compare desc_flags with O_RDWR and O_WRONLY
<youpi>Assert(desc_flags != O_RDWR && desc_flags != O_WRONLY)
<youpi>(and Assert(desc_flags == O_RDONLY))
<youpi>bleh, why did they go such an odd way when the standard doesn't say it's a bitfield
<youpi>Assert(desc_flags == O_RDONLY) and Assert(desc_flags != O_RDONLY) should be fine actually
<damo22>i force-pushed to my hurd-sv rumpnet with some changes, but it still receives zero-filled packets
<azeem>More generally, one of the Postgres hackers asked "I wonder if the pthreads implementation is still unfinished as hinted at there too" -- they are considering moving to threads at some point; I guess it will depend on what exactly they come up with, but one could assume that our pthread implementation is reasonably complete nowadays, right?
<youpi>have you tried printing right after rump_sys_read()?
<youpi>to determine if it's before or after that call that things go wrong
<youpi>(rather than hardcoding SIZEOF_BPF_HDR to 18, you can use sizeof(struct bpf_hdr))
<youpi>have you printed the bpf header fields, to make sure they do make sense?
<youpi>bp->bh_datalen - bp->bh_caplen
<youpi>I'm not sure this really is what you want
<youpi>aiui, bh_datalen is the total size of the packet and bh_caplen is the size that is actually captured by bpf
<youpi>you'd hope these to be equal, i.e. that you capture complete packets
<damo22>bh_datalen is the length of the original packet, bh_caplen is the actual bpf length captured
<damo22>yes, they can be truncated if you hit the end of the kernel buffer
<damo22>so you drop that packet and get it next time
<youpi>are you getting it completely "next time", or just the rest?
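A minimal C sketch of the portable check youpi describes earlier in the log: mask the F_GETFL result with O_ACCMODE before comparing, so the test works whether the access modes are encoded as 0/1/2 (Linux) or 1/2/3 (Hurd). The helper name is illustrative.

    #include <fcntl.h>

    /* Returns 1 if fd was opened read-only, 0 if not, -1 on error. */
    int is_read_only(int fd)
    {
        int flags = fcntl(fd, F_GETFL);
        if (flags == -1)
            return -1;
        /* Comparing (flags & O_ACCMODE) is what the standard expects;
         * comparing flags against a bare 0 only happens to work on Linux. */
        return (flags & O_ACCMODE) == O_RDONLY;
    }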
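And a sketch of the buffer walk damo22 describes: read the whole bpf buffer in one call, then copy the packets out one by one, assuming NetBSD's <net/bpf.h> record layout (walk_bpf_buffer and deliver_frame are hypothetical names).

    #include <sys/types.h>
    #include <sys/time.h>
    #include <net/bpf.h>

    /* deliver_frame is a hypothetical per-packet callback. */
    void deliver_frame(const char *pkt, size_t len);

    /* Walk a buffer filled by one read on a bpf device; it may hold
     * several packets, each prefixed with a struct bpf_hdr. */
    void walk_bpf_buffer(const char *buf, size_t nread)
    {
        const char *p = buf;
        while (p < buf + nread) {
            const struct bpf_hdr *bh = (const struct bpf_hdr *)p;
            /* bh_caplen < bh_datalen means the packet was truncated,
             * e.g. at the end of the kernel buffer. */
            deliver_frame(p + bh->bh_hdrlen, bh->bh_caplen);
            /* records are padded so the next header stays aligned */
            p += BPF_WORDALIGN(bh->bh_hdrlen + bh->bh_caplen);
        }
    }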
<youpi>(anyway, that's for beyond the 512K buffer that you are not getting anyway)
<damo22>yeh i haven't even seen more than one packet anyway yet
<damo22>i can try printing after rump_sys_read
<damo22>cf 17 48 68 4e 1d 0b 00 40 00 00 00 40 00 00 00 12 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 rawlen(82)
<damo22>that was the response from an ARP
<youpi>so there is no point looking at your own code, the problem is before rump_sys_read
<youpi>I mean, within rump_sys_read
<azeem>what's the resolution of clock_gettime() on the Hurd?
<youpi>it used to be 10ms, it's getting down to hpet resolution
<azeem>ok, I'm still on the 2023 vm image
<youpi>I don't remember if the latest gnumach package has the hpet fix
<youpi>ah, wait, I backported the patch
<azeem>I'll try adding some pg_sleep() calls into the regression tests first
<azeem>but there are other "SELECT pg_sleep(0.1);" calls sprinkled through that file, so I hope adding two more will be acceptable, unless I'm missing the point of the check
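azeem's resolution question can also be checked from a program with clock_getres(); a minimal sketch:

    #include <stdio.h>
    #include <time.h>

    /* Print the advertised resolution of CLOCK_REALTIME; per youpi,
     * it used to be 10ms on the Hurd and is getting down to hpet
     * resolution with newer gnumach. */
    int main(void)
    {
        struct timespec res;
        if (clock_getres(CLOCK_REALTIME, &res) != 0) {
            perror("clock_getres");
            return 1;
        }
        printf("resolution: %ld.%09ld s\n", (long)res.tv_sec, res.tv_nsec);
        return 0;
    }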