IRC channel logs

2023-03-18.log


<damo22>i can't find a simple timestamp i can use in gnumach to measure elapsed time
<damo22>there is current_timer[cpu_number()] but it hangs when i try to use it
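For context, one low-level way to take rough elapsed-time measurements on x86 without going through the kernel's timer machinery is to read the time-stamp counter directly. The helper below is only a sketch of that idea, not an existing gnumach interface (the name tsc_read is made up), and the raw cycle counts still have to be converted using the CPU frequency by hand.

    /* Sketch: rough elapsed-time probe via the x86 time-stamp counter.
     * Not an existing gnumach interface; cycle counts must be converted
     * to seconds using the CPU frequency. */
    static inline unsigned long long
    tsc_read(void)
    {
        unsigned int lo, hi;
        asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long) hi << 32) | lo;
    }

    /* usage around a suspect code path */
    unsigned long long t0 = tsc_read();
    /* ... code being measured ... */
    unsigned long long elapsed_cycles = tsc_read() - t0;
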
<damo22>i did some measurements; it looks like on -smp 2, vm_page_queue_lock() takes 5610ms
<damo22>between start of the gnumach timer and failing to find the disk
<damo22>there are 54 call sites for that lock
<damo22>it's the one taking most of the time
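A per-lock figure like this could be gathered by wrapping the lock acquisition with timestamps and accumulating the wait per CPU. The sketch below is hypothetical instrumentation: lock_wait_cycles and the timed wrapper are made-up names, tsc_read() is the probe sketched above, and NCPUS, cpu_number(), MACRO_BEGIN/MACRO_END and vm_page_lock_queues() are used in the classic Mach style.

    /* Hypothetical instrumentation: accumulate cycles spent waiting
     * for the page-queue lock, per CPU. */
    unsigned long long lock_wait_cycles[NCPUS];

    #define timed_vm_page_lock_queues()                             \
    MACRO_BEGIN                                                     \
        unsigned long long _t0 = tsc_read();                        \
        vm_page_lock_queues();                                      \
        lock_wait_cycles[cpu_number()] += tsc_read() - _t0;         \
    MACRO_END
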
<damo22>youpi: my understanding of PMAP is that it handles contention on memory between cpus by locking a memory map object to a cpu. If this is correct, why do we need to take vm_page_queue_lock so much?
<damo22>shouldn't it mostly be lock-free?
<damo22>if the pmap is already locked and we are doing something with kernel memory?
<youpi>damo22: see the comment of vm_page_queue_lock, it protects the active and inactive page queues, which are shared
<youpi>One is not supposed to hold it for very long, however, so it shouldn't be that contended. Probably there is some caller that monopolizes it for long periods of time
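For illustration, the intended pattern is a very short critical section around a queue manipulation, roughly what page-deactivation code does. The function below is only a sketch (example_deactivate is a made-up name); the queue and field names follow the classic Mach layout and may differ in current gnumach.

    /* Sketch of the expected short hold time: take the queue lock, move
     * one page from the active to the inactive queue, release. */
    void
    example_deactivate(vm_page_t m)
    {
        vm_page_lock_queues();
        if (m->active) {
            queue_remove(&vm_page_queue_active, m, vm_page_t, pageq);
            queue_enter(&vm_page_queue_inactive, m, vm_page_t, pageq);
            m->active = FALSE;
            m->inactive = TRUE;
        }
        vm_page_unlock_queues();
    }
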
<youpi>just to make sure: are you booting with the linux drivers, or rumpdisk drivers?
<youpi>I don't trust the glue code at all :)
<youpi>that being said, normally it would not make boot time slower, just not faster than -smp 1. Possibly it's the idle thread that for whatever reason monopolizes the lock. Perhaps they keep running ASTs?
<youpi>concerning the show all slocks command, I guess we'd be able to get the function names and source lines, like trace does