IRC channel logs

2025-12-05.log


<azert>that should never happen
<damo22>really? ok
<azert>I mean, you can prevent this from happening, right?
<damo22>so one timer is only set once at any given time?
<azert>you can kalloc and set a new timeout
<azert>i think so
<azert>I’m pretty sure it’s a requirement
<damo22>using kalloc to set a new timeout is bad because the original memory backing of the timeout becomes useless state
<damo22>and you can't look up the new timeout using that
<azert>so how is this invariant being broken? Did you find the code that sets the same timeout twice?
<damo22>no i just ran with gdb and found a bunch of timeouts with the same memory address in the wheel
<azert>damo22: did you check this: http://fxr.watson.org/fxr/source/kern/kern_timeout.c?v=FREEBSD-4-STABLE;im=1#L227
<damo22>yes
<azert>if you use this you can reuse the same memory
<azert>I think you need to figure out what those timeouts are
<azert>and fix the code that sets them
<damo22>yeah
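
(For reference, the idiom azert's FreeBSD link points at is callout_reset(): re-arming a pending timeout reuses the same memory by unlinking it from the wheel first, so no kalloc is needed. A minimal sketch in C; locking is omitted, and every name here -- timeout_t, callwheel, callwheel_mask, elapsed_ticks -- is an illustrative assumption, not the patch under discussion:)

/* Sketch only: re-arm a timeout without allocating new memory. */
typedef struct timeout {
        struct timeout *next, *prev;   /* links within a wheel bucket */
        unsigned int    ticks;         /* absolute expiry time, in ticks */
        int             set;           /* nonzero while queued on the wheel */
        void          (*fcn)(void *);  /* callback and its argument */
        void           *arg;
} timeout_t;

extern timeout_t    callwheel[];       /* circular array of bucket heads */
extern unsigned int callwheel_mask;    /* bucket count - 1, a power of two */
extern unsigned int elapsed_ticks;     /* current time, in ticks */

void
timeout_reset(timeout_t *t, unsigned int interval)
{
        timeout_t *head;

        if (t->set) {                      /* still pending: unlink it first,  */
                t->prev->next = t->next;   /* so the same memory is never      */
                t->next->prev = t->prev;   /* on the wheel twice               */
        }
        t->ticks = elapsed_ticks + interval;
        head = &callwheel[t->ticks & callwheel_mask];
        t->prev = head->prev;              /* insert at the tail of the bucket */
        t->next = head;
        head->prev->next = t;
        head->prev = t;
        t->set = 1;
}
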
<damo22>queue_enter(&tmp->chain, t, timeout_t, chain); this does not do what i expect, i.e. insert at the tail
<damo22>is the head element supposed to be separate from the first element?
<damo22>it appears to set the head->prev to the first inserted elem
<jab>sneek later tell saravia DO NOT USE libreboot or coreboot with the Hurd. The Hurd can run via libreboot (and/or coreboot), but you will not be able to run X. Also, I would highly recommend that you buy a machine that supports 8+ GB, like the T400. 32 bit hurd is the most stable, but hopefully soon (maybe 1 - 2 years ? ), we'll start recommending people use the 64 bit hurd. A T60 only supports 3 GB of memory max.
<sneek>Okay.
<jab>damo22: may I ask why you are pursuing this "call wheel" ? I know it helps with faster boot time. Is there another benefit that I'm missing ?
<damo22>it runs faster in general, not just at boot time
<damo22>there's nothing wrong with running coreboot with hurd, just that X won't work
<jab>damo22: I'm assuming most Hurd users will want to run X.
<nexussfan>jab: What are some things that 64 bit hurd still cannot do? For me it works fine???
<jab>nexussfan: are you using it in real hardware ?
<nexussfan>Yes.
<jab>nexussfan: and don't quote me...but a week ago or so, I asked Samuel if I should start recommending people to use the 64 bit hurd on the wiki. He said not yet. Something about rump having some minor issues...
<nexussfan>rump has no issues. Rumpdisk and rumpnet work perfectly
<jab>nexussfan: remind me again, on what laptop are you running the 64 bit hurd and how hard was it to install ?
<nexussfan>ThinkPad T420
<nexussfan>I didn't really install it
<nexussfan>I just used the preinstalled image
<nexussfan>and put it onto the disk
<jab>I am having some issues installing the Hurd. I need to try putting the image on the disk.
<jab>Did you download the 5GB 64 bit image, increase it to say 100GB, and then put it on the machine ?
<nexussfan>Yeah I downloaded the 5GB 64 bit image then put it on the disk
<jab>and have you tried using the new hurd journal yet on real hardware ?
<nexussfan>Then resized it
<jab>ok.
<nexussfan>jab: No.
<jab>nexussfan: samuel's response to me https://mail.gnu.org/archive/html/bug-hurd/2025-11/msg00034.html
<nexussfan>Actually, the opposite.
<nexussfan>I wasn't able to get 32 bit hurd booting
<jab>maybe mention that to samuel. idk
<nexussfan>It was a rumpdisk error IIRC
<jab>what's the max ram that a T420 supports ?
<jab>I'm actually curious which hurd user has a 64 bit hurd with 8+ GB of ram: which hurd machine (running on real hardware, with working internet) has the most ram?
<nexussfan>hmm
<nexussfan>It has 7.6Gi "total" according to Hurd
<jab>that's pretty cool!
<azert>damo22: I think queue_enter is fine, since prev in the head element should always be the last element
<azert>in gnumach tail queue design
<azert>that’s why an empty queue points to itself, both prev and next
<azert>this invariant has to keep holding as well
<azert>so you shouldn’t set prev to NULL ever
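
(The invariants azert is describing, written out for a Mach-style circular queue. The struct and queue_init follow gnumach's kern/queue.h; queue_check itself is illustrative:)

struct queue_entry {
        struct queue_entry *next;   /* next element, or the head if last */
        struct queue_entry *prev;   /* previous element, or the head if first */
};
typedef struct queue_entry *queue_t;

/* an empty queue links to itself in both directions -- never to NULL */
#define queue_init(q)  ((q)->next = (q)->prev = (q))
#define queue_empty(q) ((q)->next == (q))

/* head->prev is always the tail and tail->next is always the head;
 * every insert and remove has to leave both of these holding */
void
queue_check(queue_t head)
{
        assert(head->prev->next == head);  /* tail points back to the head */
        assert(head->next->prev == head);  /* first element points back too */
}
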
<youpi>rump has the issue of eating a lot of memory
<damo22>my problem is some of the timeout elements are getting reused more than once in the wheel and the pointers get messed up
<damo22>i need to put asserts in that tell me exactly when this is happening
<damo22>azert: does tail->next always point to head?
<azert>damo22: yes
<azert>you could implement a linear tail queue if using this gives you so many issues
<azert>I think what you are doing can be implemented either way, but a linear tail queue would make more sense because of the pointer in the callwheel
<azert>if you are careful in respecting all the invariants, the gnumach queue implementation should be fine
<azert>as for the issue of more than one element getting reused, I think it’s a bug in whatever adds the element
<azert>if you check what the timeout function is, you can figure out what is doing that
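
(One way to get the assert damo22 wants: poison the chain links whenever an element leaves the wheel, and refuse to enqueue anything whose links are still live. queue_enter/queue_remove are the gnumach macros already quoted above; the poison value and the two wrappers are illustrative, assuming the timeout struct embeds its links in a struct queue_entry named chain, as damo22's queue_enter call suggests:)

#define TIMEOUT_UNLINKED ((struct queue_entry *) 0xdead)  /* poison value */

void
timeout_unlink(queue_t bucket, timeout_t *t)
{
        queue_remove(bucket, t, timeout_t, chain);
        /* leave poison behind so a stale reuse is detectable */
        t->chain.next = t->chain.prev = TIMEOUT_UNLINKED;
}

void
timeout_link(queue_t bucket, timeout_t *t)
{
        /* trips at the exact moment something queues t a second time */
        assert(t->chain.next == TIMEOUT_UNLINKED);
        queue_enter(bucket, t, timeout_t, chain);
}
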
<damo22>yeah this queue.h is confusing, i think it expects a separate head element not part of the queue
<damo22>ie, a head queue_entry_t that points to the first elem with head->next ?
<damo22>but what i really want to do is put a head in an array
<damo22>ie have a circular array of head elements
<azert>I think having a pointer to the head element is the most sane approach
<azert>although it takes some care
<azert>reason is that the call wheel needs to be tiny and hot
<azert>most of the optimization here doesn’t really come from the new fancy softclock. It’s faster since you can easily check for null in the hardclock
<azert>and you don’t need to take a lock for that!
<azert>actually, softclock is specifically designed to allow hardclock to do that efficiently, and potentially starting another softclock “in parallel”
<azert>I wonder why they didn’t implement the callwheel as a bitmap instead of an array of pointers? Could be even faster perhaps (but slightly more complex)
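
(For reference, the shape under discussion: a circular array of bucket heads, i.e. the classic timing wheel, where hardclock does a single emptiness test per tick and leaves the actual callbacks to softclock. The size and the softclock_schedule hook are illustrative assumptions:)

#define CALLWHEEL_SIZE 256                  /* power of two, illustrative */
#define CALLWHEEL_MASK (CALLWHEEL_SIZE - 1)

static struct queue_entry callwheel[CALLWHEEL_SIZE];  /* bucket heads */

/* hardclock side: one compare per tick, no lock taken; if the current
 * bucket is non-empty, hand the work to softclock */
void
hardclock_tick(unsigned int ticks)
{
        queue_t bucket = &callwheel[ticks & CALLWHEEL_MASK];

        if (!queue_empty(bucket))
                softclock_schedule();       /* illustrative hook */
}

/* azert's bitmap variant would keep one bit per bucket, set on insert
 * and cleared when a bucket drains, so hardclock could scan whole
 * words of buckets at once instead of touching the queue heads. */
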
<azert>I just had an idea. What if, instead of porting drm/kms to the Hurd, one of us with good organizational skills made a project to bring drm/kms to unix in userspace instead?
<azert>And tried to get Unix and Linux people on board
<azert>I mean, all major operating systems are moving graphical stuff to userspace, including windows nt, where it was in kernel space in versions 4 and 5
<azert>Only the Linux people have the idea of bringing it back into the kernel
<nikolar>who's bringing it back into the kernel
<azert>DRM and KMS
<nikolar>drm and kms don't really do much
<nikolar>the actual heavy lifting is in the userspace (mesa and friends)
<azert>then why not kick that out to userspace as well?
<azert>I understand that the kernel wants to be the authority over the monitor
<azert>But why not a user service running under root?
<nikolar>because the kernel is the only entity that can actually do io with the gpu
<azert>Would make headless stuff easier
<nikolar>which is more or less all that drm does
<azert>pretty much everything can do io nowadays, think about what has
<nikolar>what
<azert>been developed for qemu
<azert>qemu-kms
<nikolar>you are not going to map the gpu's registers into a process' address space
<nikolar>so it has to go through the kernel
<azert>you can pass a gpu through to qemu
<nikolar>which is what the drm does
<nikolar>yea, through a kernel driver
<nikolar>the gpu gets bound to vfio or whatever it was called
<nikolar>and you pass that to qemu
<azert>no you can do real pass through
<azert>im sure about that
<nikolar>That's real passthrough
<azert> https://github.com/bryansteiner/gpu-passthrough-tutorial
<azert>With iommu also for protection
<nikolar>(I know because I've used it to run macos in a VM)
<azert>if qemu can get a gpu, anyone can
<nikolar>Yeah, through vfio
<azert>Yes
<azert>I think a project for a userspace drm/kms would use vfio on Linux
<azert>And that would fit what we do on Hurd pretty well, with extra protection we don’t currently have
<nikolar>But also there's no reason to do that on Linux
<nikolar>Because DRM already exists and basically just passes through requests from the userspace to the GPU
<azert>It would still remove a lot of surface from the Linux kernel
<nikolar>Moving that into the user space wouldn't change anything really
<nikolar>And it's *a lot* of work
<azert>reducing attack surface for malicious software
<azert>I’m not saying it’s easy, I think it’s about as hard as porting drm to the hurd
<azert>I’m saying it could be spun into something that some Linux kernel developer would approve
<nikolar>I'm saying that there's basically no interest for that on the Linux side
<nikolar>I mean, try convincing amd and Intel and nvidia
<azert>apparently the amd gpu driver is 15% of Linux in lines of code
<azert>I understand that it’s going to stay there, since amd people will not sacrifice performance and most Linux people prefer to keep in-kernel drivers
<azert>most of it is gpu registers header files
<yang>jab: I have bought Thinkpad T60, the seller said it has a replaced SSD drive inside
<yang>I got the docking station with it
<jab>yang: that sounds pretty awesome!
<jab>let me know how installing the Hurd goes.
<jab>you will most likely run into some issues. Feel free to ask for help here. :)
<yang>jab: thanks
<jab>:)
<yang3>jab: hm, the LCD matrix seems to be really dark, I tried setting the LCD to a brighter contrast with Fn+HOME or Fn+End
<yang3>probably worn out
<yang>The Debian Hurd debian-hurd-i386.img is 4 GB in size, so I guess I need a DVD for that to burn it on, it's not a CD-ROM image for CD.
<yang>ah ok, clarification from FAQ - In Debian, we use the term CD image as a common way to describe a range of things, many of which don't even fit on CD! The name is old, but it has stuck. We regularly build multiple different types of image: