IRC channel logs



<damo22>i wonder if i can set up another gitlab runner for gnome
<damo22>so sergey's glib2 changes can be CI'd
<damo22>hmm gnome glib CI runs in docker
<youpi>damo22: it doesn't need to be another runner, you can register the same runner several times
<damo22>youpi: you mean if it's the same gitlab instance?
<youpi>when you register you provide the gitlab url
<damo22>ah ok
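[editor's note: a rough sketch of what youpi describes — registering the same machine against a second GitLab instance just appends another entry to the runner's config.toml. The URL and token below are placeholders, and exact flag names vary with the gitlab-runner version.]

```sh
# Register an existing runner host with a second GitLab instance.
# Each `register` call adds an independent entry to config.toml.
gitlab-runner register \
  --url https://gitlab.gnome.org/ \
  --registration-token <TOKEN> \
  --executor shell \
  --description "hurd-runner"
```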
<damo22>anyway, gnome has two problems, 1. i need their token and 2. it requires the CI to run inside docker
<youpi>it needs to run inside a docker *image*
<youpi>we can implement that as a chroot / subhurd / whatever
<youpi>at worst by following the docker recipe by hand to build the image
<youpi>(but I guess some dockerfile player exists to do that)
<damo22>but docker needs to run it on their end without a gnumach kernel
<youpi>? is that really on their end?
<damo22>or our end needs to execute docker
<youpi>where do you see "need"?
<youpi>I see "docker image"
<youpi>not "docker"
<youpi>where "DOCKER" can be podman, docker, sudo docker, or we could make it something else
<youpi>possibly podman could be ported, even
<damo22>that could be fun
<damo22>how would you get a virtual network?
<damo22>i think netbsd implemented docker
<youpi>a virtual network is just another pfinet instance
<damo22>but do we have a virtual nic it can connect to?
<youpi>depends what you call exactly "nic"
<youpi>there's the tunnel option for instance
<damo22>like a real nic with no wires, but can be bridged to a real nic
<youpi>you can connect several pfinets on the same eth0 for instance
<youpi>they'd just need to have different IP addresses
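[editor's note: a hedged sketch of the per-user stack youpi describes, using the usual pfinet translator options; the node path and addresses are made-up examples.]

```sh
# Attach a private pfinet instance to a node the user owns,
# sharing the same eth0 but with its own IP address.
settrans -ac ~/my-pfinet /hurd/pfinet \
  --interface=/dev/eth0 \
  --address=192.168.1.50 \
  --netmask=255.255.255.0 \
  --gateway=192.168.1.1
```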
<damo22>so each user could have their own stack?
<youpi>of course
<damo22>sharing the same nic
<youpi>I talked about that on some fosdem talk
<youpi>when talking about user-started openvpn
<damo22>thats awesome
<youpi>it's not very efficient since each pfinet instance gets all packets, but it does work
<damo22>so one user could sniff all traffic
<youpi>you give it all traffic yes :)
<youpi>you can interpose a firewall, though
<damo22>in that case, a sound driver should allow multiple connections and mix the audio frames together?
<youpi>that's what you usually want from a sound stack
<youpi>though "audio" is not necessarily something you want to share
<youpi>you can have one person with a headset, not necessarily wanting others to play something
<youpi>you can have several people in the same room listening to a speaker
<youpi>then you may want to share
<youpi>but for a given user, the different sound sources should be mixed, sure
<youpi>that's basically what pipewire etc. do
<youpi>drivers don't necessarily support that, that's why the alsa dmix component exists
<youpi>it's not clear to me that it should be the driver's duty
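[editor's note: for reference, the dmix setup youpi mentions is usually a few lines of ALSA configuration; this is a sketch of a typical ~/.asoundrc, not anything Hurd-specific.]

```
# ~/.asoundrc: route the default PCM through dmix so several
# programs can play at once on a driver without hardware mixing.
pcm.!default {
    type plug
    slave.pcm "dmix"
}
```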
<damo22>i want to reuse ALSA low level code for a sound driver because i have invested a number of years supporting sound cards in there
<damo22>but its difficult to break out
<damo22>maybe just start with pci
<damo22>i got a sound card in qemu with -soundhw ac97
<damo22>i think linux/sound/pci/intel8x0.c supports that
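[editor's note: for reproducing that qemu setup — the `-soundhw` shorthand was removed in newer QEMU releases in favour of `-device`; the disk image name is a placeholder.]

```sh
# Older QEMU:
qemu-system-i386 -m 1G -soundhw ac97 disk.img
# Newer QEMU (-soundhw was removed):
qemu-system-i386 -m 1G -device AC97 disk.img
```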
<PotentialUser-7>Can someone help me diagnose wifi disconnects I am having? It seems to be related to hurd as I have no issues on Guix's livecd which uses linux-libre
<PotentialUser-7>I'm using networkmanager & dhclient to connect, but every 2-5 minutes i have to run dhclient again to restore the connection. my mac address changes each time but this seems to be normal behaviour of network manager. i can't seem to find much documentation for hurd, the fixes i've found for my network card require creating a ath9k.conf file in
<PotentialUser-7>/etc/modprobe.d/ does this work the same in a hurd system?
<solid_black>damo22: I don't think it's a requirement to use Docker / a Docker image
<solid_black>they use Docker on GNU/Linux, fair enough
<solid_black>obviously they don't on macOS -- Docker itself can run on macOS, by running a GNU/Linux VM, but that'd defeat the point of doing CI on macOS
<solid_black>please see
<solid_black>and we need to start by talking to the glib/gtk people about this
<solid_black>as in, join their chat, or open an issue on the tracker, and tell them that we'd like to offer a CI runner, and ask them how they'd like this to happen
<solid_black>(I could talk to them about it, if you'd like me to, or you or youpi could do it, or all of us)
<solid_black>PotentialUser-7: I haven't followed the discussion, but there are no "kernel modules" in the Linux sense on the Hurd, so modprobe doesn't make sense
<solid_black>also /me needs to find & watch that fosdem talk that youpi keeps mentioning :)
<PotentialUser-7>solid_black okay that makes sense, are there log files I can look at to determine what is going on with my wifi card? I'm not really sure where to go from here but I'd rather avoid changing to linux-libre kernel if I can avoid it
<solid_black>I've never tried a Wi-Fi card on the Hurd tbh, I had no idea that it even worked
<solid_black>how are you driving it? netdde? some rump thing?
<solid_black>as always, I would expect that a million fixes have gone into the Linux driver since it was imported :(
<PotentialUser-7>solid_black i could be misunderstanding, I read a post saying GUIX deprecated the linux kernel and defaults to hurd, and I am using a pretty standard GUIX config so I assume I am using Hurd. My uname -r shows 6.0.10-gnu.
<solid_black>that post was an April 1st :)
<solid_black>6.0.10-gnu is linux-libre
<PotentialUser-7>i really need to check the dates of posts more often
<PotentialUser-7>thanks for clearing up my confusion lmao
<youpi>solid_black: feel free to discuss with gtk/glib people about CI, thanks!
<youpi>the hurd does not have support for wifi
<solid_black>Wayland tests all pass 🎉️
<almuhs>i've just created this code to check which cpu executes each thread, creating 3 pthreads and calling CPUID to extract the APIC ID.
<almuhs>after running this on Debian GNU/Hurd with an smp gnumach, with -smp 4, i checked that the threads are executing on several cpus
<almuhs>most times, the threads are using cpu 0, but sometimes i see other cpus. This is very noticeable when reducing the time between checks
<almuhs>with -smp 8 and 16 pthread units
<almuhs>the number indicates the APIC ID of the cpu which execute the code
<azert>almuhs: try replacing the usleep with a tight loop
<almuhs>ok. But the point is that the scheduler is able to distribute threads between all cpus, although most of the time they are assigned to cpu 0
<azert>void __attribute__ ((noinline)) busy_loop(unsigned max) {
<azert>    for (unsigned i = 0; i < max; i++) {
<azert>        __asm__ volatile("" : "+g" (i) : :);
<azert>    }
<azert>}
<azert>It’s normal since everyone sleeps
<azert>Make them do work and they’ll distribute
<almuhs>the ASM is a call to CPUID, to get the APIC ID
<azert>That’s all right
<almuhs>it's the best way to be sure which cpu is executing this thread
<azert>i am arguing for replacing the sleep with a call to busy_loop
<almuhs>yes, i know
<azert>but of course you will have to try different max value
<almuhs>what max?
<azert>The one in busy_loop
<almuhs>i can use gettimeofday() or similar
<almuhs>if i want to be precise
<azert>Anything that syscalls is bad since it has the potential to call into the scheduler
<azert>You could run some molecular dynamics code or similar
<almuhs>ok, then i will put a simple while loop. But the counter must be very very big
<azert>Yes the counter must be big
<azert>And you need to check it doesn’t get optimized out
<almuhs>after replacing usleep with a loop, the concurrency increases a lot
<almuhs>thanks azert
<almuhs>damo22: now we can be sure that all cpus are working, and the threads are distributed between all of them
<almuhs>as an interesting data point, /proc doesn't distinguish the pthreads of my test process
<almuhs>and its stat file shows cpu 0
<almuhs>i've just checked that "top -H" doesn't show the threads in Hurd
<almuhs>and the "ps" command lacks most output options, like "lastcpu"