IRC channel logs

2026-04-13.log


<diegonc>well, it's also in qemu output :/ the only extraneous log is "irqhelp: tried acpi to get pci gsi and failed for 00:01.1" in rumpdisk output. but the disk works, so not relevant, I guess
<azert>youpi: I sent two patches on the mailing list; please consider them an RFC and don't apply them yet, I'd like to do some testing
<jab`>azert: You can add "--rfc" to git format-patch or git send-email to mark patches as an RFC. :)
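Aside: for a one-commit series this looks like

    git format-patch --rfc -1 HEAD

which produces the patch with an "[RFC PATCH]" subject prefix instead of plain "[PATCH]".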
<azert>thanks
<jab`>afternoon friends!
<Gooberpatrol66>hi
<jab`>Gooberpatrol66: what's going on ?
<Gooberpatrol66>been investigating rump & mig
<Gooberpatrol66>rump has a build flag to turn on zfs, i'll investigate that later
<Gooberpatrol66>mig defs convert to c files
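Aside: a minimal sketch of the MIG workflow being referred to; the subsystem name, number and routine below are made up for illustration. Given a hypothetical example.defs:

    subsystem example 33000;

    #include <mach/std_types.defs>

    routine example_get_value(
            server  : mach_port_t;
        out value   : int);

running mig over it emits C sources: a client-side stub file, a server-side dispatch file, and a header declaring roughly

    kern_return_t example_get_value(mach_port_t server, int *value);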
<Gooberpatrol66>i've been researching rpc frameworks with c bindings
<Gooberpatrol66>capnproto has c++ bindings
<Gooberpatrol66>there's also an ASN.1 to c compiler
<Gooberpatrol66> https://copy.sh/v86/?profile=archhurd
<Gooberpatrol66>arch hurd in the web browser
<reb>Gooberpatrol66: There's always Google protobufs, supported by many different implementations.
<Gooberpatrol66>yeah capnproto is a "successor" to protofubs
<Gooberpatrol66>bufs lol
<azert>youpi: are you available for a chat? I am a bit worried about the back and forth that glibc will have to do with gnumach to get proper semantics for thread_set/get_affinity_np, and would appreciate your feedback
<azert>first of all, while it is quite convenient to use mach ports for processors, for pthread it would maybe be better to have just a flavour of thread_set_state and thread_get_state
<azert>that would avoid an rpc just to get the processor ports
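Aside: a sketch of the state-flavour idea; THREAD_SCHED_AFFINITY, its number and its layout are invented for illustration, nothing like it exists in gnumach today:

    #include <mach.h>

    #define THREAD_SCHED_AFFINITY 40    /* made-up flavour number */

    struct thread_sched_affinity_state {
        natural_t affinity[4];          /* made-up CPU bitmask layout */
    };
    #define THREAD_SCHED_AFFINITY_COUNT \
        (sizeof(struct thread_sched_affinity_state) / sizeof(natural_t))

    /* one RPC, and no processor ports involved at all */
    static kern_return_t get_affinity(thread_t thread,
                                      struct thread_sched_affinity_state *st)
    {
        mach_msg_type_number_t count = THREAD_SCHED_AFFINITY_COUNT;
        return thread_get_state(thread, THREAD_SCHED_AFFINITY,
                                (thread_state_t) st, &count);
    }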
<azert>second, it seems to me that the whole has_affinity logic has to be replicated glibc side just because thread creation in gnumach is not the posix clone syscall
<azert>so one needs to keep a copy of the kernel state in glibc
<azert>one is tempted to just not implement the posix side at all, and instead add task-level thread affinities, following mach semantics and cloning a thread's affinity from its task object
<youpi>it's fine to get the self affinity from the kernel on thread creation, and apply it to the child, to get the posix behavior
<youpi>thread creation is already not that cheap anyway
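Aside: a sketch of the inheritance semantics youpi describes, written at user level with the standard *_np calls; glibc would do the equivalent internally during thread creation:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *child(void *arg)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        /* the child starts with the mask its creator had */
        pthread_getaffinity_np(pthread_self(), sizeof mask, &mask);
        printf("child sees %d allowed CPU(s)\n", CPU_COUNT(&mask));
        return NULL;
    }

    int main(void)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);    /* restrict ourselves to CPU 0 */
        pthread_setaffinity_np(pthread_self(), sizeof mask, &mask);

        pthread_t t;
        pthread_create(&t, NULL, child, NULL);    /* child inherits the mask */
        pthread_join(t, NULL);
        return 0;
    }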
<azert>right now the way to get unprivileged ports to processors is through processor_set_processors, which I recently implemented. We need to make a choice
<azert>either we mask any affinity bits that are not in the processor set when calling thread_set_affinity from glibc, or we implement an unprivileged version of host_processors
<youpi>thread_set_affinity can refuse any affinity bit that is not in the pset
<azert>that's more or less equivalent to option 1
<youpi>not exactly, the user is notified of the failure
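Aside: the two behaviours contrasted, as standalone C; every name below is made up for the sketch:

    typedef unsigned long cpu_mask_t;

    #define KERN_SUCCESS          0
    #define KERN_INVALID_ARGUMENT 4

    /* masking variant: bits outside the pset are silently dropped,
       so the caller never learns that part of the request was lost */
    static int set_affinity_masking(cpu_mask_t *affinity,
                                    cpu_mask_t requested, cpu_mask_t pset_mask)
    {
        *affinity = requested & pset_mask;
        return KERN_SUCCESS;
    }

    /* refusing variant (what youpi describes): the caller is notified */
    static int set_affinity_refusing(cpu_mask_t *affinity,
                                     cpu_mask_t requested, cpu_mask_t pset_mask)
    {
        if (requested & ~pset_mask)
            return KERN_INVALID_ARGUMENT;
        *affinity = requested;
        return KERN_SUCCESS;
    }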
<azert>I think that linux allows adding affinity for processors that are not plugged in yet, since some applications allegedly do that according to the man page
<youpi>plugged in is different from allowed
<youpi>can't the pset include non-plugged in processors?
<azert>in gnumach it cannot
<azert>when a processor is shutdown it is removed from the pset
<youpi>that being said, I'm surprised that linux would allow non-plugged processors
<azert>my guess is that it costs nothing to support that, just some bits in the bitmask
<youpi>but that's lying
<youpi>which is the worst
<azert>ok, then I think that the choice is made
<youpi>./test
<youpi>setaffinity: Invalid argument
<youpi>I'm getting that on setting a mask that is outside the available PUs
<youpi>let me try to switch one off
<youpi>./test
<youpi>setaffinity: Invalid argument
<youpi>there, it's refused
<youpi>and after putting it back online, it's allowed
<youpi>so applications won't be surprised not to be able to bind to an offline processor; linux refuses that too
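Aside: youpi's test program wasn't posted; a minimal reconstruction that behaves as quoted, printing "setaffinity: Invalid argument" through perror when the mask falls outside the usable PUs, could be:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int cpu = argc > 1 ? atoi(argv[1]) : 0;
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(cpu, &mask);
        /* fails with EINVAL if the mask contains no usable processor */
        if (sched_setaffinity(0, sizeof mask, &mask) < 0) {
            perror("setaffinity");
            return 1;
        }
        printf("bound to CPU %d\n", cpu);
        return 0;
    }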
<azert>I got the wrong idea from "man CPU_SET"
<youpi>CPU_SET only manipulates a mask, it doesn't care about the meaning
<azert>what happens in linux when one puts a processor offline and then back online? Does it stick in the affinity mask or does it get removed?
<youpi>apparently it does stick
<azert>ok, then we do the same
<youpi>but I don't think applications really expect that to happen
<youpi>few applications actually care about binding, and even fewer care about offline/online, it's rather admins that manipulate that
<youpi>so we can probably do what is meaningful here
<azert>anyhow, we will be vulnerable to the following scenario: an application is given a certain affinity, then a processor is shut down exactly between the time a thread gets its state and when it copies it to a new thread, and this fails. I don't think this is a realistic danger, but I don't see a way around it
<youpi>it can retry
<azert>ok
<youpi>(just once, we don't want to enter a loop if something is wrong there)
<youpi>(and an admin wouldn't switch off/on several times, so it's fine to fail in such hypothetical case)
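Aside: the retry-once idea as a sketch; thread_t, cpu_mask_t and the two __thread_*_affinity functions are illustrative stand-ins, not real glibc or gnumach interfaces:

    typedef unsigned long cpu_mask_t;
    typedef int thread_t;   /* stand-in for the real thread handle type */

    extern int __thread_get_affinity(thread_t, cpu_mask_t *);  /* illustrative */
    extern int __thread_set_affinity(thread_t, cpu_mask_t);    /* illustrative */

    /* Copy the creator's affinity to a new thread.  If a processor goes
       offline between the get and the set, the set fails; re-read the
       mask and try exactly once more instead of looping. */
    static int apply_inherited_affinity(thread_t creator, thread_t child)
    {
        for (int attempt = 0; attempt < 2; attempt++) {
            cpu_mask_t mask;
            if (__thread_get_affinity(creator, &mask) != 0)
                return -1;
            if (__thread_set_affinity(child, mask) == 0)
                return 0;
        }
        return -1;
    }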