IRC channel logs
2025-06-14.log
<damo22>i dont know if this is true but if you have an ext4 filesystem you could probably mount it as ext2 without journaling
<damo22>(12:13:01 PM) damo22: i am concerned about these three usages of vtophys:
<damo22>(12:13:01 PM) damo22: kern/subr_pool.c: *pap = POOL_VTOPHYS(object);
<damo22>(12:13:01 PM) damo22: kern/uipc_mbuf.c: m->m_paddr = POOL_VTOPHYS(m);
<damo22>(12:13:01 PM) damo22: rump/dev/lib/libpci/rumpdev_bus_dma.c:#ifdef POOL_VTOPHYS
<damo22>I think i need to disable that macro in rump builds
<damo22>(01:13:23 PM) mlelstv: it takes "physical address" = "virtual address" (which isn't really physical), and busdma then does the translation to "bus address" for e.g. DMA usage.
<damo22>youpi: in rumpdisk you had to separate out the pages because rumpcomp_pci_virt_to_mach() was not being called per page?
<damo22>(12:59:08 PM) mlelstv: mbufs are in a pool, and somehow busdma needs to find the DMA capable physical address for the mbuf, same for mbuf clusters
<damo22>00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 rcvd a packet : [ OK ] 64
<damo22>Thread 30 hit Breakpoint 1, receive_packet (pkt_size=<optimized out>,
<damo22> nd=<optimized out>, buf=<optimized out>) at ../../rumpnet/net-rump.c:572
<damo22>how did it print out the packet before it received the packet?
<damo22>ah, it was already waiting for a packet, received it, and then called receive_packet again
<crupest>It seems that Debian is undergoing heavy work on how to package Rust cargo packages.
<crupest>I see that tree-sitter (a dep of neovim) has totally changed the way it is packaged from 20.xx (bookworm) to 22.xx (sid).
<crupest>Rust libs are split into separate packages. This is different from the Rust static-link way.
<aaabbb>crupest: are they dynamically loaded?
<crupest>As I remember, no.
<crupest>When you `cargo build`, all dep crates will be statically linked into one binary.
<aaabbb>that's the main thing keeping me from enjoying rust
<aaabbb>for a typical server or desktop use case, almost all bad. might be nice for something simple and portable that you can just plop into another system without having dependencies i guess
<aaabbb>but when it's maintained by apt or dnf, that benefit goes away and all the downsides remain (larger binary size, much slower loading times, necessity to update every package when one single library changes, increased memory and disk requirements)
<crupest>Yes. I once saw someone say that prevents the programs from enjoying security updates of libs like libssl.
<aaabbb>that's more of an issue for statically compiled c programs. although technically the same applies to rust, its memory safety means that it is not as severe an issue
<crupest>Not all security bugs are caused by memory safety.
<aaabbb>sure, but it's not as severe a problem. still a problem though
<crupest>Maybe I'm wrong about the libssl things mentioned above. Some libs like libc are still dynamically linked, but dep crates (Rust packages) are not.
<crupest>Whatever, I'm reading the Debian wiki about Rust packaging for now.
<damo22>im pretty stuck with this zero-filled packet issue
<damo22>i tried making it O_NONBLOCK for reads on the bpf
<damo22>i tried disabling the POOL_VTOPHYS pooling
<damo22>youpi: could it be an interrupt problem, there seem to be 2 unacked interrupts when i kill the rumpnet translator
<damo22>probably not, because i am getting some packets
<youpi>damo22: in rumpdisk the buffer you get from the RPC is not necessarily contiguous
<damo22>dm_segs[0].ds_addr = (va=0x0x20164000,pa=0x0)
<damo22>ok so when rumpnet starts up, the ds_addr are all 0x0 or 0x800, but then when i receive a packet, i get a reasonable physical address
<damo22>i dumped the physical addresses in rumpdev_bus_dma.c
<damo22>[ 3.5100050] map->dm_segs[0].ds_addr = (va=0x0x20168800,pa=0x800)
<damo22>[ 3.5100050] map->dm_segs[0].ds_addr = (va=0x0x20168000,pa=0x0)
<damo22>[ 3.5100050] map->dm_segs[0].ds_addr = (va=0x0x20166800,pa=0x800)
<damo22>[ 3.5100050] map->dm_segs[0].ds_addr = (va=0x0x20166000,pa=0x0)
<damo22>youpi: vm_pages_phys must be returning 0 for the initial dma that gets allocated
<damo22> if (!vm_map_lookup_entry(cmap, address, &entry)) {
<damo22>so the entry can be missing and then paddr will return as 0
<damo22>shouldn't the return value indicate that, not KERN_SUCCESS?
<damo22>uhhh, the page is not mapped to physical memory...
<damo22>wire_task_self() was all that was needed
<azeem>the postgres testsuite seems to run much quicker with the default fsync=on compared to fsync=off
<azeem>ah no, I take that back - it is not fsync=on/off that changes the runtime, but an additional debug_parallel_query=on - that one's the thing that makes it take longer and eventually kill it
<damo22>azeem: parallel queries won't work too well on UP?
<azeem>I think that parameter forces it to run everything with multiple processes (Postgres is not thread-based)
<azeem>(it's already running several tests in parallel)
<damo22>it could be running out of memory?
<azeem>no, see my mail; I'm running free in a pretty tight loop and never saw it swapping
<azeem>load goes to 6-7 with peaks to 12
<azeem>after removing debug_parallel_query=on from the config it seems to run much more stably, and with the apic kernel, the regression tests now pass
<azeem>still got that ext2fs assertion though, maybe because disk space is running low, I should extend the vm image
<azert>btw you shouldn’t need to wire the whole task for a few dma buffers, you probably found a bug
<damo22>smp stress -c 7 uses 600% cpu on the host....
<damo22>it must be that good that it has 1 whole cpu free
<damo22>youpi: i booted hurd with smp and no slave pset with networking
<damo22>hmm with smp i am getting errors with net: ERROR: rcv mach_msg returned: 0x10000004
<damo22>ok now with full smp boot, stress -c 7 uses 700% cpu on the host
<damo22>vm_allocate_contiguous() used for dma requests ought to be wired down then?
<azert>damo22: looking at the source it seems like that should already be wired
<azert>either it's not working as expected, or those buffers get allocated by something else
<p4r4D0xum>sneek later tell gnucode Yeah that's what I meant, you need to add -pthread -rt flags or it won't compile, thanks I will check it out later on