IRC channel logs
2025-06-09.log
<damo22>root@zamhurd:~# ping -c1 -r 10.0.2.2
<damo22>PING 10.0.2.2 (10.0.2.2): 56 data bytes
<damo22>sending packet: ff ff ff ff ff ff 52 54 00 12 34 56 08 06 00 01 08 00 06 04 00 01 52 54 00 12 34 56 0a 00 02 0f 00 00 00 00 00 00 0a 00 02 02 [ OK ] 42
<damo22>00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 rcvd a packet : [ OK ] 64
<damo22>sending packet: ff ff ff ff ff ff 52 54 00 12 34 56 08 06 00 01 08 00 06 04 00 01 52 54 00 12 34 56 0a 00 02 0f 00 00 00 00 00 00 0a 00 02 02 [ OK ] 42
<damo22>sending packet: ff ff ff ff ff ff 52 54 00 12 34 56 08 06 00 01 08 00 06 04 00 01 52 54 00 12 34 56 0a 00 02 0f 00 00 00 00 00 00 0a 00 02 02 [ OK ] 42
<damo22>00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 rcvd a packet : [ OK ] 64
<damo22>112 bytes from 10.0.2.15: Destination Host Unreachable
<damo22>--- 10.0.2.2 ping statistics ---
<damo22>1 packets transmitted, 0 packets received, 100% packet loss
<damo22>something is getting messed up with the memory pages?
<damo22>youpi: does vm_allocate_contiguous() allocate memory starting on a physical page boundary?
<damo22>the received bpf packet has the correct bpf header, but the payload of the eth frame is all zeroes
<damo22>maybe it's an mbuf thing in rumpnet
<damo22>could it be that a blocking read on the bpf fd does not wait for the packet to actually be in the buffer, and so reads zeroes?
<youpi>damo22: did you tell bpf to snap the whole packet? in pfinet's ethernet.c we have {BPF_RET|BPF_K, 0, 0, 1500}, /* And return 1500 bytes */
<youpi>and indeed, by default e.g. tcpdump doesn't snap the whole packet, only the header
<youpi>yes, vm_allocate_contiguous() allocates memory aligned on a page boundary; there's no reason why it would drop part of it
<damo22>struct bpf_insn bpf_allow_all[] = {
<damo22>  BPF_STMT(BPF_RET+BPF_K, BPF_WHOLEPACKET), /* accept */
<damo22>you can only read the entire buffer with bpf
<damo22>if the read() is not exactly the same length as the "kernel" buffer size, the read will return EINVAL
<damo22>but if you truncate to a single packet, you lose the rest of the packets in the buffer?
<damo22> mlelstv: basically, the packets up to snap length plus the bpf header are copied into the internal buffer (that's your half MB). When the buffer is full, it switches to the previous buffer if that has already been read. If the previous buffer is not yet read out, the packet is dropped.
<youpi>damo22: I have updated the rumpkernel to support non-page-aligned buffers in the vm_pages_phys call; the fallback on mach_vm_object_pages_phys, however, was already supporting non-page-aligned calls
<youpi>so that's probably not the reason you were getting issues
<azert>damo22: I’ve checked your code
<azert>I’m not sure I totally understand it
<azert>But it seems weird that you allocate the DMA buffer and then rump_read into it
<azert>Like, I’d assume that the driver allocates the DMA buffer using the NetBSD machine-independent DMA framework, for which rump probably implements a backend?
<azert>and rump_sys_read is just a shim providing a high-level interface, resulting in a useless memcpy, right?
<azert>I don’t think that tcpdump on NetBSD needs to know about DMA
<azert>Anyway, I suppose you already figured this out and I’m just lost in the code and need to do more homework.
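(A minimal sketch of the bpf usage discussed above, not damo22's rumpnet code: an accept-all filter that returns the whole packet rather than pfinet's 1500-byte snap, and a read that asks for exactly the kernel buffer size and then walks the per-packet bpf headers. The device path, the interface name "wm0" and the BPF_WHOLEPACKET value are assumptions; in a rump client the calls would go through rump_sys_open/rump_sys_ioctl/rump_sys_read.)

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/bpf.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #ifndef BPF_WHOLEPACKET
    #define BPF_WHOLEPACKET 0x7fffffff  /* assumed alias for "snap everything" */
    #endif

    static struct bpf_insn bpf_allow_all[] = {
        BPF_STMT(BPF_RET + BPF_K, BPF_WHOLEPACKET), /* accept, no truncation */
    };

    int
    main(void)
    {
        struct bpf_program prog = {
            .bf_len = sizeof(bpf_allow_all) / sizeof(bpf_allow_all[0]),
            .bf_insns = bpf_allow_all,
        };
        struct ifreq ifr;
        u_int buflen;
        char *buf;
        ssize_t n;

        int fd = open("/dev/bpf", O_RDWR);
        if (fd < 0) {
            perror("open /dev/bpf");
            return 1;
        }

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "wm0", sizeof(ifr.ifr_name));
        if (ioctl(fd, BIOCSETIF, &ifr) < 0 ||     /* bind to the interface */
            ioctl(fd, BIOCSETF, &prog) < 0 ||     /* install the accept-all filter */
            ioctl(fd, BIOCGBLEN, &buflen) < 0) {  /* fetch the kernel buffer size */
            perror("bpf ioctl");
            return 1;
        }

        /*
         * The read must ask for exactly the kernel buffer size or bpf
         * returns EINVAL; one read can return several packets, each
         * prefixed by a struct bpf_hdr.
         */
        buf = malloc(buflen);
        if (buf == NULL)
            return 1;
        n = read(fd, buf, buflen);
        if (n < 0) {
            perror("read");
            return 1;
        }
        for (char *p = buf; p < buf + n; ) {
            struct bpf_hdr *bh = (struct bpf_hdr *)p;
            printf("captured %u of %u bytes\n",
                (unsigned)bh->bh_caplen, (unsigned)bh->bh_datalen);
            p += BPF_WORDALIGN(bh->bh_hdrlen + bh->bh_caplen);
        }
        free(buf);
        close(fd);
        return 0;
    }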
Rumpdisk wouldn’t work if DMA wasn’t fixed
<azert>let’s check bus_dmamem_alloc
<azert>perhaps the Linux rumpkernel backend implements things that are NYI in the Hurd kernel backend?
<azert>Still worried that you rump_sys_read into a buffer that you thought was the one the data gets DMA'd into
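(For the bus_dmamem_alloc question: a sketch, assuming the NIC driver follows the usual NetBSD bus_dma(9) pattern, of how a driver obtains its DMA-safe buffer; the rump hypercall layer is what supplies the backend for these calls. The function name, tag and sizes are placeholders, not code from rumpnet; the point is that the driver owns this buffer, and a userland rump_sys_read from bpf only ever gets a copy.)

    #include <sys/param.h>
    #include <sys/bus.h>

    static int
    alloc_rx_buffer(bus_dma_tag_t dmat, bus_size_t size,
        bus_dmamap_t *mapp, void **kvap)
    {
        bus_dma_segment_t seg;
        int rseg, error;

        /* physically contiguous, page-aligned memory suitable for DMA */
        error = bus_dmamem_alloc(dmat, size, PAGE_SIZE, 0,
            &seg, 1, &rseg, BUS_DMA_NOWAIT);
        if (error)
            return error;

        /* map it into kernel virtual address space */
        error = bus_dmamem_map(dmat, &seg, rseg, size, kvap,
            BUS_DMA_NOWAIT | BUS_DMA_COHERENT);
        if (error)
            goto fail_free;

        /* create and load a map so the device sees the bus addresses */
        error = bus_dmamap_create(dmat, size, 1, size, 0,
            BUS_DMA_NOWAIT, mapp);
        if (error)
            goto fail_unmap;
        error = bus_dmamap_load(dmat, *mapp, *kvap, size, NULL,
            BUS_DMA_NOWAIT);
        if (error)
            goto fail_destroy;
        return 0;

    fail_destroy:
        bus_dmamap_destroy(dmat, *mapp);
    fail_unmap:
        bus_dmamem_unmap(dmat, *kvap, size);
    fail_free:
        bus_dmamem_free(dmat, &seg, rseg);
        return error;
    }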