<damo22>mmap uses MAP_HUGETLB in some cases in the linux-uio version of pci-userspace
<damo22>if you step through one of those areas in page-size steps, you get consecutive physical and virtual addresses
<Pellescours>here too, the doc says VM_MAP_NULL but the code does a return NULL
<youpi>VM_PAGE_NULL is equal to NULL
<youpi>even if rumpcomp_pci_dmalloc manages to allocate huge pages, the generic dma engine shouldn't be assuming that we always manage to allocate huge pages, at least since some archs don't have any huge pages
<damo22>youpi: is the problem that vm_allocate_contiguous() allocates a contiguous physical range but the virtual addresses of each page are not necessarily consecutive?
<damo22>i agree MAP_HUGETLB is a hack to make the linux port work, and netbsd are happy to change the interface if it's not adequate
<damo22>they said the rump code is mostly a proof of concept
<damo22>i have a contact email for someone there who is interested in this too
<damo22>(11:19:02) bad: but yeah, an interface closer to bus_dma(9) might be able to relax the requirements.
<Pellescours>does the shutdown procedure stop the boot tasks in reverse boot order? I mean, if to boot I start pci-userspace then rump, is the shutdown order rump then pci-userspace?
<damo22>yes, pci-arbiter shuts down last afaik
<youpi>damo22: no, vm_allocate_contiguous does allocate contiguously both in physical and virtual memory
<youpi>really, when I tell you to check something, do it :) rump does *not* use a bounce buffer allocated with vm_allocate_contiguous, it just takes the pointer passed to rump_ns_pread, so _bus_dmamap_load_buffer calls rumpcomp_pci_virt_to_mach on that, which is a good thing since that means one copy less
<youpi>but then the memory is not physically consecutive
<youpi>in practice the change I made is fine enough, since for now ext2fs only does device_read/write one page at a time
<youpi>but with multi-page reads/writes we'd want DMA scatter/gather
<youpi>(and yes, in the end we'd like to use an IOMMU)
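
To make damo22's MAP_HUGETLB point concrete, here is a minimal sketch (not the actual pci-userspace code) of an allocation that first tries a huge-page mapping and falls back to normal pages, since, as youpi notes, huge pages may not be available at all; dma_buf_alloc is a hypothetical name, and only mmap and its MAP_HUGETLB flag come from the discussion.

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    /* Hypothetical helper: try a huge-page mapping first (size should be
       a multiple of the huge-page size for that attempt), then fall back
       to normal pages when the kernel has no huge pages to give.  */
    static void *
    dma_buf_alloc (size_t size)
    {
      void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
      if (p == MAP_FAILED)
        /* No huge page available: the caller must then cope with the
           buffer not being physically contiguous.  */
        p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      return p == MAP_FAILED ? NULL : p;
    }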
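youpi's objection, in code form: a virtually contiguous buffer is only safe for single-segment DMA if every page is physically adjacent to the previous one. This sketch walks a buffer in page-size steps and checks exactly that, assuming 4 KiB pages and the rumpcomp_pci_virt_to_mach(void *, unsigned long *) hypercall signature as I understand the rump libpci interface; buffer_is_phys_contiguous is a hypothetical helper.

    #include <stdbool.h>

    #define PAGE_SIZE 4096UL   /* assumption: 4 KiB pages */

    /* rump PCI hypercall: translate a virtual address to its machine
       (physical) address; returns 0 on success.  */
    extern int rumpcomp_pci_virt_to_mach (void *virt, unsigned long *mach);

    /* Hypothetical check: verify each page's physical address directly
       follows the previous one.  Assumes buf is page-aligned.  */
    static bool
    buffer_is_phys_contiguous (void *buf, unsigned long len)
    {
      char *p = buf;
      unsigned long prev, cur;

      if (rumpcomp_pci_virt_to_mach (p, &prev))
        return false;
      for (unsigned long off = PAGE_SIZE; off < len; off += PAGE_SIZE)
        {
          if (rumpcomp_pci_virt_to_mach (p + off, &cur))
            return false;
          if (cur != prev + PAGE_SIZE)
            return false;   /* discontiguous: needs S/G or a bounce buffer */
          prev = cur;
        }
      return true;
    }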
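And the bus_dma(9)-style relaxation that bad and youpi point at: instead of demanding one physically contiguous range, load the buffer as a scatter/gather list, coalescing physically adjacent pages into segments. struct dma_seg, MAX_SEGS and dma_load_sg are illustrative names, not the real interface; only rumpcomp_pci_virt_to_mach is from the discussion.

    /* Illustrative segment descriptor and loader, not the real bus_dma(9)
       structures.  Assumes 4 KiB pages and a page-aligned buffer.  */
    #define PAGE_SIZE 4096UL
    #define MAX_SEGS  32

    extern int rumpcomp_pci_virt_to_mach (void *virt, unsigned long *mach);

    struct dma_seg
    {
      unsigned long ds_addr;   /* physical (machine) address */
      unsigned long ds_len;    /* segment length in bytes */
    };

    /* Build a scatter/gather list: translate page by page, extending the
       current segment whenever the next page is physically adjacent.  */
    static int
    dma_load_sg (void *buf, unsigned long len,
                 struct dma_seg *segs, int *nsegs)
    {
      char *p = buf;
      int n = 0;

      for (unsigned long off = 0; off < len; off += PAGE_SIZE)
        {
          unsigned long pa;
          unsigned long chunk = len - off < PAGE_SIZE ? len - off : PAGE_SIZE;

          if (rumpcomp_pci_virt_to_mach (p + off, &pa))
            return -1;
          if (n > 0 && segs[n - 1].ds_addr + segs[n - 1].ds_len == pa)
            segs[n - 1].ds_len += chunk;    /* contiguous: extend segment */
          else if (n == MAX_SEGS)
            return -1;                      /* too many segments */
          else
            {
              segs[n].ds_addr = pa;
              segs[n].ds_len = chunk;
              n++;
            }
        }
      *nsegs = n;
      return 0;
    }

With single-page transfers this degenerates to one segment, which is why, per youpi, the current one-page-at-a-time ext2fs path gets away without scatter/gather for now.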