IRC channel logs

2024-07-13.log


<youpi>luckyluke42: looking at the generated asm, save_data is held in the xmm0 register
<youpi>possibly the lower part of it gets thrashed somehow
<youpi>by a signal handler or such
<youpi>(or the management of it)
<youpi>I'll try to make it a volatile
<youpi>(not a proper fix of course, other xmm-using code would have the same problem)
<youpi>(I'm still surprised, though, since `post_signal` does save the fp registers, which include xmm0)
<youpi>ah but the thread_get_state call only gets the basic state
<youpi>_hurd_setup_sighandler does save it though
<youpi>not sure where it's getting restored though :)
<youpi>sigreturn doesn't restore them
<youpi>confirmed by a small testcase: it's indeed not restored
<youpi>luckyluke42: yes, adding volatile has avoided the issue in my testcase, so it's most likely about floating-point registers not being restored
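[A minimal sketch of the kind of testcase described above (illustrative only, not youpi's actual code), assuming a periodic SIGALRM and a floating-point accumulator that the compiler keeps in an xmm register; if the signal return path fails to restore xmm state, the accumulator eventually diverges from the loop counter:]

    /* Illustrative testcase sketch: FP accumulator live in an xmm register,
       interrupted by a periodic SIGALRM.  If sigreturn does not restore
       xmm state, x eventually stops matching the loop counter. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    static void handler (int sig)
    {
      (void) sig;   /* nothing to do: signal delivery itself is the test */
    }

    int main (void)
    {
      struct itimerval it = { { 0, 1000 }, { 0, 1000 } };  /* 1 ms periodic timer */
      double x = 0.0;   /* expected to stay in an xmm register across signals */
      long i;

      signal (SIGALRM, handler);
      setitimer (ITIMER_REAL, &it, NULL);

      for (i = 0; i < 100000000L; i++)
        {
          x += 1.0;
          if (x != (double) (i + 1))
            {
              printf ("fp register clobbered at iteration %ld (x = %g)\n", i, x);
              exit (1);
            }
        }
      printf ("no corruption observed\n");
      return 0;
    }

[Declaring x volatile forces it to be spilled to and reloaded from memory around every use, which is why the workaround mentioned above hides the problem without fixing the restore path.]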
<youpi>it was brought up on the mailing list, but not through this scenario :)
<youpi>luckyluke42: so what's missing is an i386_xfloat_state and an i386_XFLOAT_STATE thread state
<youpi>that glibc would be able to use alongside i386_REGS_SEGS_STATE and i386_FLOAT_STATE in _hurd_setup_sighandler and sigreturn.c
<youpi>the structure would contain the fp_save_kind, for glibc to know how to restore it
<youpi>would you be happy to work on this?
<youpi>(that'll be needed both on i386 and x86_64 actually)
<youpi>that'll allow me to unlock the hurd-amd64 buildd :)
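[A hypothetical sketch of what such an extended FP thread state could look like (names, flavor number, and layout are illustrative guesses, not the actual gnumach interface): the point of the proposal is that it carries the fp_save_kind alongside the raw hardware save area, so glibc's sigreturn knows how the kernel saved the registers and therefore how to restore them:]

    /* Hypothetical sketch of the proposed extended FP thread state; the
       real gnumach/glibc definitions may differ.  fp_save_kind records
       which instruction produced hw_state and thus how to restore it. */
    #define i386_XFLOAT_STATE 6          /* hypothetical flavor number */

    enum fp_save_kind
      {
        FP_SAVE_FNSAVE,                  /* legacy fsave/frstor area */
        FP_SAVE_FXSAVE,                  /* fxsave/fxrstor, 512 bytes */
        FP_SAVE_XSAVE                    /* xsave/xrstor, variable size */
      };

    struct i386_xfloat_state
      {
        int fp_save_kind;                /* which of the formats above */
        unsigned char hw_state[];        /* raw FNSAVE/FXSAVE/XSAVE area */
      };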
<azert>etno: looks like a lua-luv testing suite bug
<azert>It shouldn't crash if the APIs are there as stubs; it should fail gracefully. Unless the crash is in libuv code
<etno>sneek: later tell azert: the crash seems to come from bad handling of Lua exceptions in lua-luv. I concentrated on the implementation of thread priorities in libuv1 and got the lua-luv tests to pass. My current patch set touches only libuv1, but I don't know if it is acceptable.
<sneek>Okay.
<etno>It fixes libuv1 build as a side effect...
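[etno's libuv1 patch isn't shown in the log; as a rough idea of the kind of primitive thread-priority support needs underneath, here is a generic sketch over POSIX threads using pthread_getschedparam/pthread_setschedparam. This is an assumption about the approach, not the actual patch:]

    /* Generic sketch (not etno's actual patch) of getting/setting a
       thread's scheduling priority with POSIX threads. */
    #include <pthread.h>
    #include <sched.h>

    static int thread_get_priority (pthread_t tid, int *priority)
    {
      struct sched_param param;
      int policy;
      int err = pthread_getschedparam (tid, &policy, &param);
      if (err != 0)
        return err;
      *priority = param.sched_priority;
      return 0;
    }

    static int thread_set_priority (pthread_t tid, int priority)
    {
      struct sched_param param;
      int policy;
      int err = pthread_getschedparam (tid, &policy, &param);
      if (err != 0)
        return err;
      param.sched_priority = priority;
      return pthread_setschedparam (tid, policy, &param);
    }

[Note that for the default SCHED_OTHER policy the allowed priority range may be just 0, so setting a nonzero priority can legitimately fail; callers have to handle that gracefully rather than crash, which matches azert's point above.]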