IRC channel logs

2023-10-09.log


<almuhs>debugging thread_info(), i've found that, when i execute ps, the thread's flavor is THREAD_BASIC_INFO, while the "last_processor" field belongs to the THREAD_SCHED_INFO flavor
<almuhs>debugging the same with "top", i've found a THREAD_SCHED_INFO flavor, and sched_info->last_processor takes the correct value
<almuhs>sched_info->last_processor definitely takes the correct value. Then the problem must be either in proc or procfs, or in the top or ps implementation
<almuhs>damo22: could you debug these translators? The files that were modified in this commit https://git.savannah.gnu.org/cgit/hurd/hurd.git/commit/?id=d8671bc2a0fead7655b9e80736db33d84f14025c
<damo22>i hope you figure it out almu
<damo22>almuhs: is the correct value appearing in /proc/PID/stat ?
<almuhs>damo22: no, /proc/PID/stat always sets last_processor to 0
<almuhs>my proc and procfs patches should fill this data, but for some unknown reason the field isn't being filled
<almuhs>maybe the if-else condition is not correct
<almuhs>that's why i asked you to debug these
<almuhs>debugging this block could be a good start
<almuhs>+#ifdef HAVE_STRUCT_THREAD_SCHED_INFO_LAST_PROCESSOR
<almuhs>+ /* If the structure read doesn't include last_processor field, assume
<almuhs>+ CPU 0. */
<almuhs>+ if (thcount < 8)
<almuhs>+ thds[i]->last_processor = 0;
<almuhs>+#endif
<almuhs>i have to go sleep
<almuhs>bye
<damo22>does it do thds[i]->last_processor = last_processor
<almuhs>this was fixed
<almuhs> /* If the structure read doesn't include last_processor field, assume
<almuhs> CPU 0. */
<almuhs> if (thcount < 8)
<almuhs>- thds[i]->last_processor = 0;
<almuhs>+ pi->threadinfos[i].pis_si.last_processor = 0;
<almuhs> #endif
<almuhs>
<almuhs> }
<almuhs>but yes
<almuhs>this is the block
<almuhs>check if the proc translator reaches this line
<damo22>it's not setting the value
<damo22>it's setting it to 0
<almuhs>i know
<damo22>so thats the problem ?
<almuhs>wait
<almuhs>if the last_processor field exists, thcount must be >= 8
<almuhs>so the code should continue past that check
<almuhs>ok, i'm seeing that the field is filled in procfs/process.c
<almuhs>in process_file_gc_stat()
<almuhs>+
<almuhs>+ long unsigned last_processor;
<almuhs>+
<almuhs>+#ifdef HAVE_STRUCT_THREAD_SCHED_INFO_LAST_PROCESSOR
<almuhs>+ last_processor = thsi->last_processor;
<almuhs>+#else
<almuhs>+ last_processor = 0;
<almuhs>+#endif
<almuhs>+
<damo22>but what is value of thsi->last_processor
<damo22>its probably 0
<damo22>because its not set
<almuhs>maybe that's the problem, yes
<damo22>ok go to sleep
<damo22>we can look properly another time
<almuhs>try to find where the proper value is sent
<almuhs>i go to sleep
<solid_black>youpi: would you like to be listed in CODEOWNERS for glib's Hurd-specific code, instead of myself?
<youpi>if I can delegate that, I'd rather do so ;)
<solid_black>ok :)
<solid_black>I just don't want to take too much responsibility
<solid_black>because I'm just an occasional contributor
<solid_black>but if you're happy with me representing Hurd among glib/gtk people, cool
<youpi>yes, please do ;)
<solid_black>in other news, since you're apparently not logging onto Mastodon too often: see this thread of mine: https://floss.social/@bugaevc/111194642983349800
<solid_black>I've (almost?) completed the epoll rework that I was designing for years, and now timeouts work
<solid_black>and I've updated the Wayland port
<solid_black>and made sure that its tests pass and Owl / gtk4 / wl-clipboard can run against each other, on the Hurd
<solid_black>how should we upstream this?
<youpi>yay for wayland :) It's coming more and more as a dependency for applications
<youpi>well, submit patch series ? :)
<solid_black>the issue is that my patches to wayland are somewhat too much
<solid_black>in particular they remove public APIs of libwayland-server, ones that depend on signalfd/timerfd
<solid_black>Owl does not use them, so it's not an issue for Owl
<solid_black>but the Wayland upstream might not be happy with this
<solid_black> https://github.com/owl-compositor/wayland/commit/bc49b0159b7e358e1bd3d52c2646c51700b9a084 to give you an idea, although that's an older iteration
<youpi>for the not-for-upstream parts, we can include them as patches in the distribution for the time being, for instance
<solid_black>well, can we do that? will other Debian platforms be happy with these patches being applied?
<solid_black>they're not exactly supposed to make things worse on GNU/Linux, but still
<youpi>I meant: in the unreleased part
<youpi>which is not shared among ports
<solid_black>as for epoll, I will publish my GitHub repo, and then ask you to take a look
<solid_black>oh, can we do that? great then
<youpi>better to send patch series on the mailing lists, so that discussions can happen there
<solid_black>then if / once you're happy with what it looks like, I could try to convert it to a series of patches against glibc/hurd
<youpi>IIRC you mentioned an RPC extension?
<youpi>apart from that it's essentially implementing an existing interface, so there's not much to discuss on the high level
<solid_black>an RPC extension is what we could possibly think about, if we wanted to also support epollet
<solid_black>but I had very specific constraints when designing this
<solid_black>I wanted to port Wayland, so I wanted to support the specific things that Wayland needs (no EPOLLET fortunately), and I could not do any glibc / hurd changes, only add servers / libraries
<solid_black>(that being said, I personally do think that edge-triggered is superior to level-triggered; but that's not what libwayland uses, and it's not implementable with mere io_select)
<solid_black>there is certainly a discussion to be held on a high level, not about the interface since that's copied from Linux, but about how the implementation is designed / structured
<solid_black>I'm not sure if you're going to like some parts of it, like how the epoll server is single-threaded and uses a custom event loop (that still integrates with libports)
<solid_black>can we 100% guarantee that Mach must support protected payloads? or do we have to have the hashtable fallback?
<solid_black>we already require gnumach in glibc / gsync, but do we require a new enough gnumach?
<solid_black>I guess we do, with those memory proxies changes
<solid_black>or: is there any case where even though Mach supports protected payloads, it will still return a message with just a port name?
<azert>Hi solid_black, can you explain why it's harder to implement epollet?
<solid_black>hi
<solid_black>well, simply because io_select, which is what select/poll/epoll are all implemented on top of, is level-triggered
<solid_black>edge-triggered means: even though this socket is writable now, don't notify me about it now, but do notify me if it becomes non-writable, and then writable again
<solid_black>but there's no way for the epoll server to know when a socket becomes non-writable and then writable again
<azert>So you’d need to implement an edge triggered io_select?
<solid_black>yes
<solid_black>also I haven't thought at all about how this would affect the internal structure of the epoll server
<azert>to implement this new io_select you can probably extend the current rpc
<solid_black>there's already been one extension, io_select_timeout
<solid_black>so we'd probably add io_select_et (that would have a timeout too)
<azert>Yes, thinking about how this affects your new server is maybe harder / more important
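If an io_select_et were added along the lines suggested above, it would presumably mirror the existing io_select_timeout RPC. A purely hypothetical MIG sketch: the routine does not exist, and the argument list is a guess loosely modeled on hurd/io.defs.

```
/* Hypothetical addition to hurd/io.defs -- NOT an existing RPC.
   Argument names and types are assumptions. */
routine io_select_et (
        io_object: io_t;
        replyport reply: sreply_port_t;
        timeout: timespec_t;
        inout select_type: int);
```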
<isf>welcome Inline
<Inline>hello
<Inline>is it normal for pfinet components in hurd source for i686 to require executable stacks ?
<solid_black>yes
<Inline>i'm using the cross-hurd github scripts but it does not build
<solid_black>practically everything in the Hurd requires executable stacks, because of nested functions
<Inline>the minimal-system does not get built
<Inline>so shall i just disable them via execstack -c ?
<Inline>oh because of nested functions ok
<solid_black>it would probably be a good idea security-wise to get rid of them
<Inline>well i passed LDFLAGS="$LDFLAGS -z execstack" to no avail too
<Inline>not sure why it won't build
<Inline>i also tried the same with -z noexecstack but like i told previously the build just stops at some point
<Inline>maybe i have to fiddle with some Makefile again
<Inline>meh
<Inline>aren't there any x86_64 ports yet to hurd ?
<solid_black>there is an x86_64 port :)
<solid_black>you should be able to build it, and it works somewhat
<Inline>i got a debian image also i686 which works
<Inline>on my x86_64 multilib system
<Inline>it even has a DE, icewm or so, an emacs, a terminal, a prebuilt settrans for ftp.gnu.org
<Inline>but apart from that nothing else
<Inline>hmmmm
<Inline>solid_black, where do i find that port ?
<solid_black>it's all upstream, just pass --target/--host=x86_64-gnu as appropriate
<Inline>ok thank you
<Inline>i'm just using the cross-hurd github scripts as of now
<Inline>i also installed the crosshurd package on ubuntu but it only worked partially, i couldn't get the grub to boot into the image properly
<Inline>it hangs somewhere
<Inline>also it won't give me the x86_64 CPU build anyway
<Inline>are nested functions a prerequisite for hurd ? or is it just that the code is written in that way and could be rewritten ?
<solid_black>the latter, but I think there were some APIs that take callbacks without a void *data pointer, so they're not super useful without nested functions
<Inline>thank you
<youpi>concerning the 64bit port, see the 64bit faq page
<youpi>there is even the section about using crosshurd to bootstrap an image
<Inline>ok, thank you
<almuhs>i'm trying to find the reason why /proc/PID/stat sets the last_processor field to 0. I'm experimenting with hurd/proc/info.c, disabling the preprocessor condition here and setting last_processor to 1 if (thcount < 8), and to 7 otherwise.
<almuhs> https://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/proc/info.c#n730
<almuhs>But, even with that, the stat file keeps showing 0
<almuhs>i even disabled this preprocessor condition, keeping only last_processor = thsi->last_processor;
<almuhs> https://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/procfs/process.c#n237
<almuhs>But no success
<almuhs>so, it seems that thsi->last_processor doesn't receive the proper value
<almuhs>gnumach fills this data correctly, i checked this yesterday, so i don't know where the problem can be