IRC channel logs

2023-10-07.log


<almuhs>how can i know if proc and procfs are built with "last_processor" support?
<almuhs>to have this field, they need to be compiled against this version of the gnumach headers https://git.savannah.gnu.org/cgit/hurd/gnumach.git/commit/?id=75267dd103637d38fa95ecdee0eedb16ba0f662c
<almuhs>but i don't remember how to test it
<damo22>look for the AC_CHECK thing?
<almuhs>i need to check if the thread_info.h version has last_processor field
<almuhs>+AC_CHECK_MEMBERS([struct thread_sched_info.last_processor],,,+ [#include <mach/thread_info.h>])
<almuhs>but... where is mach/thread_info.h ?
<damo22>/usr/include/i386-gnu/mach/..
<damo22>that AC thing will define HAVE_STRUCT_THREAD_SCHED...
<almuhs>yes
<almuhs>this is set if the thread_info header has the last_processor field
<almuhs>i added this field in 2019
<damo22>ok
<almuhs>pruebas@debian-hurd:~/hurd/proc$ grep last_processor /usr/include/i386-gnu/mach/thread_info.h
<almuhs> integer_t last_processor; /* last processor used by the thread */
<almuhs>ok, here it is
<damo22>if thcount > 8
<damo22>or something
<almuhs>yes, this is a field counter
<almuhs>this is an ugly way to check if a field exists or not
<almuhs>because an old program could have a version of this header which doesn't include this field
<almuhs>but C doesn't provide any way to check if a field exists in a struct, other than a compile error
<almuhs>so, to avoid a compile error, we check that with a counter
<damo22>azert: https://lists.gnu.org/archive/html/bug-hurd/2020-07/msg00033.html at the bottom of this message is Paul's idea
<almuhs>this field is set to 0 if NCPUS == 1, or set to thread->last_processor (from the scheduler) if NCPUS > 1
<almuhs>the problem is that i can't find the last_processor field in /proc/PID/status or similar
<almuhs>i need a way to access this field
<damo22>libps?
<almuhs>maybe lower level
<almuhs>this is strange
<almuhs>#if NCPUS > 1
<almuhs> if (thread->last_processor)
<almuhs> sched_info->last_processor = thread->last_processor->slot_num;
<almuhs> else
<almuhs>#endif
<almuhs> sched_info->last_processor = 0;
<almuhs>the #endif must be below
<damo22>haha
<damo22>its clobbering it with 0?
<almuhs>only if NCPUS == 1
<almuhs>or if NCPUS > 1 and thread->last_processor
<azert>damo22: nice email, you could start drafting an audio.defs and/or a midi.defs based on that, and then post it as an RFC
<damo22>but if NCPUS >1 it will do the first block within the #if and then clobber the value with 0
<damo22>it looks like a bug
<almuhs>ok, that's the reason for the else
<damo22>oh i didnt see the else
<almuhs>i see a lot of TODO like this
<almuhs>#if NCPUS > 1
<almuhs> /* thread_template.last_processor (later) */
<almuhs>#endif /* NCPUS > 1 */
<damo22>thats not a todo
<damo22>it is saying it will populate the value later
<damo22>in a different code path
<almuhs>ok
<almuhs>the proc data are produced by process_file_gc_stat (struct proc_stat *ps, char **contents)
<almuhs>in procfs
<almuhs>but i don't know who calls this function
<damo22>azert: "signal routing" in the driver doesnt make sense to me, wouldnt a sound driver just expose a framebuffer per channel per capture/playback
<almuhs> https://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/procfs/process.c#n258
<damo22>ie, you only connect one "client" to the framebuffers like jackd
<damo22>or should it allow mixing in the driver
<damo22>i guess you could allow the driver to have multiple clients with different QoS
<damo22>eg, a low latency client for pro audio and a large buffer one for desktop audio
<almuhs>ok, i think the output of this function ends up in /proc/PID/stat, and last_processor is the 4th field from the end
<Gooberpatrol_66>pipewire has dynamic buffer size/latency like that
<almuhs>(cross-talks)
<damo22>almuhs: looks like /proc/PID/stat does show last_processor, you just probably need to make ps read it
<almuhs>yes, i just said that
<almuhs>but i don't know how to make ps read it
<damo22>i dont know, find the source of ps
<almuhs>Gooberpatrol_66: there are so many pending projects: smp, x86_64, USB... it's preferable not to open too many new projects, please
<azert>damo22 : yeah signal routing seems overkill for a driver.
<almuhs>the filesystem project is more interesting to me than the audio, as a priority
<damo22>audio is something i want to do soon
<azert>Also mixing and qos. I’d let other pieces of software take care of that.
<almuhs>the audio is a curious issue, because there is a pulseaudio implementation, but it lacks the back end, the driver
<damo22>almuhs: we discussed a lot yesterday on it
<damo22>i am planning to write a new driver
<almuhs>if you start a DE on Debian GNU/Hurd, you can notice that pulseaudio is there
<almuhs>it could be a good idea to solve the driver issue and try to exploit the pulseaudio support, before starting a new migration to pipewire
<damo22>im not touching pulseaudio or pipewire
<damo22>i will make something with a jack compatible api
<almuhs>i read a conversation about porting pipewire to the hurd, but i think that this is not the priority on the audio side
<damo22>or something similar
<almuhs>yes, i read it
<damo22>so that a jack backend can be trivially implemented on hurd sound
<almuhs>yes
<almuhs>does rump have an audio driver?
<damo22>yes, but its based on SunOS
<damo22>and does not have many cards
<almuhs>ok
<damo22>i need to figure out how to send the audio data from the driver to another process without using mach msg
<almuhs>and how to get the driver
<damo22>ALSA has low level functions that implement all the features of cards
<damo22>it is well organised into a set of functions
<almuhs>it's useful
<damo22>so i need to implement a pcm streaming driver that calls into ALSA low level functions
<almuhs>good idea
<damo22>it can be GPLv2
<almuhs>yes
<azert>You can send data using ring buffers
<almuhs>pruebas@debian-hurd:~$ cat /proc/754/stat
<almuhs>754 (stress) R 752 752 723 0 0 0 0 0 0 0 12677 10 0 0 20 0 2 0 169663519517 155095040 109 0 4096 23908 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
<almuhs>753 (stress) R 752 752 723 0 0 0 0 0 0 0 15346 5 0 0 20 0 2 0 169663519517 155095040 109 0 4096 23908 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
<almuhs>with stress -c 2
<almuhs>in -smp 2
<almuhs>this is with upstream's gnumach, without damo22's latest patch
<almuhs>i can try to compile proc and procfs from upstream, to be sure that the last_processor field is filled correctly
<gnucode>I hear we want to set up a audio technical interview/discussion...
<gnucode>who am I emailing now?
<damo22>gnucode: no we dont
<gnucode>keeping up with the trend, should we schedule a call with Wim Taymans? :D
<gnucode>maybe that was more of a joke.
<damo22>id rather speak with Paul Davis
<damo22>but there is no need
<damo22>he already gave me a lot of ideas
<gnucode>I haven't actually heard of paul davis before...
<damo22>he is the author of Ardour DAW
<damo22>and JACK
<damo22>without him, there would be very little or no pro-audio on linux basically
<damo22>i spoke with Paul Davis, he suggested i write a new jack backend directly with low level driver code, and hack on the driver code until it is working... then see what will be needed to define an api to move the driver level code into a separate process (like a proper hurd translator)
<danmorg>dumb question. what is a translator? a way to pass messages between the driver and user code and the microkernel of the Hurd?
<damo22>danmorg: a translator attaches to a file node and lets another process open that file and execute remote procedures from it
<damo22>so basically yes
<damo22>its a way to do inter-process communication
<wleslie>a translator lives in the filesystem, but instead of a static file, directory or named pipe, it can implement custom behaviour e.g. show different data each time
<wleslie>it's basically a building block of the virtual file system
<wleslie>did we ever get the posix shmem patches merged? JACK is quite shmem heavy
<youpi>wleslie: no, see the TODOs in the file
<youpi>(patch in debian, or tg tree)
<azert>damo22: I like the plan!
<azert>Very pragmatic
<azert>But I don’t really see why you would move the driver out of the backend at all? Well, we will see
<azert>Looking at pipewire. The Linux desktop people are really into a containers frenzy
<azert>My experience is that containers give less security
<azert>It’s absurd
<azert>You know, I work in a field where I had to compile lots of code from source to do different tasks
<azert>Now you get a Docker image
<azert>Then you run your docker image, and you realize
<azert>that suddenly your file system mounts read only
<azert>You go to check, your iommu logged some error
<azert>Ohoho, these should be my competitors trying to steal my data
<azert>Let’s disable the iommu
<azert>Everything NOW works as expected
<azert>Lol, I hate American corporations so much it’s unreal
<azert>The thing is that having no boundaries (like Hurd?), or a boundary that is very broad and well defined like in Unix, is better than having boundaries that aren’t under your control, constantly change, and get in the way of the work