IRC channel logs

2022-09-02.log


***damo22_ is now known as damo22
<damo22>youpi: what does RPT mean in an rpc def?
<damo22>routine dir_readdir (
<damo22> dir: file_t;
<damo22> RPT
<damo22> out data: data_t, dealloc[];
<damo22>...
<damo22>is it a placeholder for the datacnt?
<youpi>damo22: iirc it's a macro
<youpi>to get the reply port when we want the stub to get it
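A note on the mechanism: RPT is not MIG syntax itself but a C preprocessor macro (the .defs files are run through cpp before MIG sees them). The real definition lives in hurd/hurd_types.defs; the guard and expansion below are only illustrative of what youpi describes, namely an extra argument that lets the stub receive the reply port, and usually nothing at all:

  /* Sketch only -- see hurd/hurd_types.defs for the actual definition.
     Both the guard name and the expansion here are assumptions.  */
  #ifdef REPLY_PORTS
  #define RPT  sreplyport reply: mach_port_poly_t;  /* stub gets the reply port */
  #else
  #define RPT                                       /* normally expands to nothing */
  #endif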
<damo22>$ cat /proc/route
<damo22>Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
<damo22>/dev/eth0 10.0.2.0 0.0.0.0 0001 0 0 0 255.255.255.0 0 0 0
<damo22>/dev/eth0 0.0.0.0 10.0.2.2 0003 0 0 0 0.0.0.0 0 0 0
<damo22>/dev/eth0 0.0.0.0 0.0.0.0 0001 0 0 0 0.0.0.0 0 0 0
<damo22>\o/
<Curiosa>luckyluke: how to prevent a thread from keeping the cpu indefinitely? If there are no free cores, interrupts are still served in real time, fully, by interrupting whatever is running. If a thread doesn't do any io, it's more likely that it will never get run than that it will take over a cpu indefinitely
<Curiosa>of course that situation (that they never get run) is unrealistic, since waiting is so much slower than computing
<damo22>aren't hardware interrupts locked to one cpu, or can they be?
<damo22>so the handler will only run on that cpu
<Curiosa>damo22 I'm not sure how it is on x86, but apart from the implementation details the kernel could dispatch the work on any cpu
<damo22>you can't easily dispatch an interrupt to a different cpu core
<damo22>unless you configure the cpu to handle interrupts on any core
<damo22>but then it will probably be randomly chosen
<Curiosa>damo22 the best would be for interrupts to be handled randomly among a list of free cpus, but this might not be possible
<damo22>Curiosa: i think x86 lets you configure the APIC to target the interrupts to a particular set of cores
<damo22>im not sure how well it can dynamically set them
<Curiosa>let's say that you cannot, or it is too slow; then the scheduler should do some work to make it happen
<Curiosa>like you always interrupt one core, but then you can reschedule the interrupted job on another core and interrupt something else at random (if the other core is occupied)
<damo22>so far i have hardcoded it to one core in ioapic.c
<Curiosa>this has the advantage that jobs that keep busy waiting will always run on the same core as interrupts
<Curiosa>damo22 seems like a sensible option to start
<damo22>but no one is using that code yet
<damo22>we're still on PIC
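For context, "hardcoded it to one core in ioapic.c" boils down to writing a fixed local APIC id into the destination field of an I/O APIC redirection entry. The sketch below shows the mechanism as described in the I/O APIC datasheet; the base address, helper names and layout are illustrative and not taken from GNU Mach's actual ioapic.c:

  #include <stdint.h>

  #define IOAPIC_BASE      0xfec00000UL     /* default physical base (must be mapped) */
  #define IOAPIC_REGSEL    0x00             /* index register, byte offset */
  #define IOAPIC_WIN       0x10             /* data window, byte offset */
  #define IOAPIC_REDIR(n)  (0x10 + 2 * (n)) /* register index of entry n, low dword */

  static volatile uint32_t *const ioapic = (volatile uint32_t *) IOAPIC_BASE;

  static void
  ioapic_write (uint8_t reg, uint32_t val)
  {
    ioapic[IOAPIC_REGSEL / 4] = reg;
    ioapic[IOAPIC_WIN / 4] = val;
  }

  /* Deliver interrupt pin `pin' as `vector' to the CPU with local APIC id
     `apic_id': fixed delivery, physical destination mode, entry unmasked.
     The destination field is bits 24..31 of the high dword.  */
  static void
  ioapic_route_to_cpu (unsigned pin, uint8_t apic_id, uint8_t vector)
  {
    ioapic_write (IOAPIC_REDIR (pin) + 1, (uint32_t) apic_id << 24);
    ioapic_write (IOAPIC_REDIR (pin), vector);
  }

Re-targeting interrupts dynamically would mean rewriting that destination field (or using logical/lowest-priority delivery), which is why it is simpler to start with one fixed core.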
<damo22>i cleaned up the shutdown translator to call the acpi translator, and it is working
<biblio>damo22: :)
<damo22>actually that should fix the problem on real hardware where the sleep type is not known
<damo22>since acpi works it all out itself
<damo22>but i need to make acpi into a mach device
<damo22>using libmachdev
<damo22>youpi: do you want a preliminary working version of acpi + shutdown without libmachdev, or do you prefer if i complete that part too first?
<damo22>we'll need to make libacpica a build dep for hurd...
<youpi>it's fine to use /servers/acpi for a start
<youpi>I guess it'll be needed as a mach dev for rumpdisk to get irq numbering?
<youpi>that can wait indeed, step by step :)
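A minimal sketch of the "/servers/acpi for a start" approach: the shutdown translator simply looks the ACPI translator up through the filesystem rather than as a Mach device. Only file_name_lookup() is a real Hurd call here; the node path follows the discussion above, and the follow-up RPC is left out since its name depends on the acpi.defs interface actually installed:

  #include <hurd.h>
  #include <error.h>
  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>

  int
  main (void)
  {
    mach_port_t acpi = file_name_lookup ("/servers/acpi", O_RDWR, 0);
    if (acpi == MACH_PORT_NULL)
      error (1, errno, "cannot contact /servers/acpi");

    /* From here the shutdown translator would invoke the ACPI sleep RPC
       on this port; the routine name is omitted in this sketch.  */
    printf ("acpi translator port: %lu\n", (unsigned long) acpi);
    return 0;
  }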
<luckyluke>biblio: the branch on gitlab is not up to date, it's better if you take the patches I sent to bug-hurd
<biblio>luckyluke: ok noted. I will build the patches from bug-hurd.
<luckyluke>Sorry for that, I forgot to push there. I guess it could be easier to just pull from there, for testing, although I usually rewrite history in my wip branches
<luckyluke>Especially when preparing patches for submission ;)
<biblio>luckyluke: ok no prob. I am currently reading docs for x86_64. I was able to debug using gdb. Setting a breakpoint in boothdr.S did not work at first; as a workaround I just stop the debugger after GRUB boots, and then I can stop in boothdr.S.
<luckyluke>I have some testing scripts, kind of unit tests, I may have pushed some to gitlab already. I never had issues with breakpoints so far, I usually use both -s and -S with qemu
<biblio>luckyluke: I will also check the script. I was using -s and -S together for debugging.
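For reference, the kind of session being described (the image and kernel file names are placeholders; -s opens qemu's gdb stub on tcp::1234 and -S freezes the CPU until the debugger resumes it):

  $ qemu-system-x86_64 -s -S -cdrom hurd-x86_64.iso    # image name is a placeholder
  $ gdb gnumach                                        # kernel binary with debug symbols
  (gdb) target remote :1234
  (gdb) continue
  ^C                            # biblio's workaround: interrupt after GRUB starts, before it jumps to the kernel
  (gdb) break boothdr.S:42      # line number is only an example
  (gdb) continue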
***mbanck_ is now known as azeem