IRC channel logs

2023-08-26.log


<damo22>how do i write a macro that writes a struct member to gs:0 + offsetof(member) ?
<gfleury>gnucode: for development. I work on glibc to move some htl symbols from pthread into libc
<gfleury>gfleury: I just noticed that the error message is in French
<damo22>does a pointer to memory located on GS segment area resolve without mov gs:X
<damo22>like, if i read the pointer from the gs area can i just use it in C?
<damo22>basically i want each cpu to have a GDT entry that points to its own area, but have it mapped in such a way that i can read the memory just by knowing a pointer to the right area
<damo22>which is stored in GS:0
<damo22>ok i have percpu region working
<azert>damo22: cool! If it works you can first just try storing the CPU_NUMBER in this area and see if it already improves performances!
<damo22>azert: Your paper you recommended for timers was very interesting
<damo22>azert: i implemented it in a branch but its still broken
<damo22>it does make it faster
<damo22>the cpu number thing
<azert>Great!
<damo22>i moved the processor_array into the percpu area
<damo22>and apic_id
<damo22>i can probably make the cpu number raw in there
<azert>Yea I would do that
<azert>No need for the processor_array
<damo22>yes, the whole processor array
<damo22>instead of looking it up with cpu_number
<damo22>it can just be a struct processor
<azert>Ah ok i understand
<damo22>in percpu area
<azert>Makes sense it will be faster to access it directly instead of first doing cpu_number just for that!
<damo22>yes
<azert>were you/youpi mentioning that some servers needs to be tied to a single cpu because of deadlocks? Is it possible to do that for each server that needs that?
<damo22>i tried tying everything to cpu0
<damo22>it works but slow
<azert>Slower than single cpu?
<damo22>yes
<damo22>int cpu_number(void)
<damo22>{
<damo22>    return *((int *)percpu_ptr(int, cpu_id));
<damo22>}
<damo22>let's see if this goes faster
<azert>Makes more sense that servers with unfixable deadlocks tell the scheduler to tie themselves and all their threads to a single cpu
<damo22>or fix the races
<azert>Yeah, I guess it’s just bugs
<damo22>i got to INIT with smp 6
<damo22>can't get a shell yet
<damo22> 18 f67d7da0 (ext2fs) [0]
<damo22>looks like ext2fs is blocking it with a strange thread
<damo22>i mailed in a patch for comments
<janneke>damo22: nice
<damo22>looks like smp 1 is usable
<damo22>maybe 1/2 as fast as UP
<azert>damo22: wow that’s a great improvement already
<azert>It’s quite surprising that the kernel is still using more cpu than all the other processes taken together
<jab>morning hurd people!
<jab>I will probably make a cool Hurd video today!
<AwesomeAdam54321>jab: cool, can you share it in this channel when it's published?
<jab>sure. I would be happy to do so.
<jab>I think the goal for today will be to try to get sergey's ssh program working
<jab>I have the program built I believe...
<jab>so I have sergey's mdns responder running, which is nice.
<jab>But I'm not sure how to connect to the hurd machine... I was using "ssh joshua@<IP address>", but apparently you can do something like "ssh joshua@hurd-local"...
<jab> https://mail.gnu.org/archive/html/bug-hurd/2023-03/msg00021.html
<jab>well let me go read up on mdns ...
<jab>not the coolest video, but here it is:
<jab> https://video.hardlimit.com/w/6BQmBus4f8SS51vqGjn3RT