IRC channel logs
2026-01-13.log
<damo22> #0  task_terminate (task=0xffffffffdc0dec18) at ../kern/task.c:291
<damo22> #1  0xffffffff81088cf8 in exception_no_server () at ../kern/exception.c:260
<damo22> #2  0xffffffff81089c46 in exception_try_task (_exception=_exception@entry=0x1, code=<optimized out>, subcode=<optimized out>) at ../kern/exception.c:177
<damo22> #3  0xffffffff81089d48 in exception (_exception=_exception@entry=0x1, code=code@entry=0x1, subcode=subcode@entry=0x7ffffffdfff8) at ../kern/exception.c:107
<damo22> #4  0xffffffff81072bae in i386_exception (exc=exc@entry=0x1, code=0x1, subcode=0x7ffffffdfff8) at ../i386/i386/trap.c:649
<damo22> #5  0xffffffff81073065 in user_page_fault_continue (kr=kr@entry=0x1) at ../i386/i386/trap.c:122
<damo22> #6  0xffffffff81069eee in vm_fault (map=<optimized out>, vaddr=vaddr@entry=0x7ffffffdf000, fault_type=0x3, change_wiring=change_wiring@entry=0x0, resume=resume@entry=0x0, continuation=continuation@entry=0xffffffff81072fc0 <user_page_fault_continue>) at ../vm/vm_fault.c:1486
<damo22> #7  0xffffffff81072def in user_trap (regs=0xffffffffdc0d0c18) at ../i386/i386/trap.c:525
<damo22> #8  0xffffffff8103dcea in _take_trap () at ../x86_64/locore.S:691
<damo22> page fault in userspace with address 0x7ffffffdfff8
<damo22> $2 = {r15 = 0x0, r14 = 0x0, r13 = 0x8, r12 = 0x30, r11 = 0x246, r10 = 0xffffffff, r9 = 0x0, r8 = 0x8, edi = 0x7ffffffe0040,
<damo22>   esi = 0x3, ebp = 0x7ffffffe0020, cr2 = 0x7ffffffdfff8, ebx = 0x7, edx = 0x50, ecx = 0x30, eax = 0xc8800000000, trapno = 0xe,
<damo22>   err = 0x6, eip = 0x4344a5, cs = 0x1f, efl = 0x10246, uesp = 0x7ffffffe0000, ss = 0x17}
<damo22> 0x7ffffffdfff8: Cannot access memory at address 0x7ffffffdfff8
<damo22> is the stack in the wrong place in userspace?
<damo22> maybe the problem is ebp is set past the stack?
<damo22> stack goes from 0x7ffffffe0000 -> 0x7fffffffffff
<damo22> i think pci-arbiter ran out of userspace stack?
<damo22> that is exactly the last address in the stack, so you cannot push more
<damo22> looks like a userspace stack overflow?
<Pellescours> is there a recursive call in pci-arbiter? (I don't think userspace is the problem as it worked perfectly before, but just in case)
<Pellescours> damo22: when pci-arbiter starts, what is the stack value? (breakpoint in main if you can)
<damo22> how do i break into a user thread from bootstrap?
<Pellescours> break in the kernel when the task spawns and check its values?
<damo22> im on * 2665a507 (HEAD, origin/smp64-upstream) x86_64: Remove PERCPU_DS selector settings
<Pellescours> I'll try that after I finish my night, going back to bed. But nice progress btw
<Pellescours> (your last commit is very similar to gnumach master)
<damo22> i am trying to see if i can port one pci ALSA driver as a JACK backend
<damo22> if that works okay, it may be worth investing time to port the rest
<damo22> i chose ALSA because i contributed there a fair bit
<damo22> it has the largest range of sound card support in the free world
<damo22> well that is ok for playing a queue of songs
<damo22> but im more interested in pro audio recording
<nexussfan> But for playing audio it's a nice hurdish way
<damo22> jack-sparrow: hurd has a concept called translators you should read about
<damo22> when you open() read() write() a filesystem node, you can have a translator installed on that node that runs to provide the translation of those calls
<jack-sparrow> i will be trying to port hurd to gentoo; follow my progress in #gentoo-hurd
<jack-sparrow> damo22: thanks, i will take a look; and thanks to mistral.ai for the translation practice
<damo22> any user program can be written and executed to provide the implementation of accessing a filesystem node
<damo22> thats what makes hurd infinitely extensible and interesting
<damo22> you know how the Linux kernel usually provides nodes such as /dev/* ?
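[To make the translator concept above concrete, here is a minimal session on a Hurd system using /hurd/hello, the demo translator shipped with the Hurd; /tmp/greeting is a hypothetical node name.]

```shell
# Create an ordinary node, then attach the demo translator to it.
touch /tmp/greeting
settrans /tmp/greeting /hurd/hello

# Reading the node is now served by the translator process,
# not by data on disk.
cat /tmp/greeting

# Ask the active translator to go away, restoring the plain node.
settrans -g /tmp/greeting
```

Any program implementing the right RPC interfaces can sit behind a node this way, which is the extensibility damo22 is describing.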
<damo22> in hurd these are just normal filesystem nodes, and translators are used in userspace to provide the hardware access
<jack-sparrow> Can two or more Hurd kernels communicate with each other over the network with a translator?
<damo22> but i dont see why more cant be written
<nexussfan> Where different parts of the Hurd are run on different machines; on the network for example
<damo22> you could probably write a network pager to allow memory to be shared over the network, but it would be slow
<nexussfan> Then you could have a super-hurd computer like boinc
<damo22> you could probably write a filesystem-based package browser for .deb
<damo22> so you just ls debian/testing/blah
<nexussfan> It's funny how there's a perl library for HURD translators but no python one
<damo22> but you dont want to fetch the package list live every time
<damo22> it overloads the traffic of the server
<damo22> so you want to make your filesystem deb browser use your local cache
<jack-sparrow> imagine computers using hurd could share resources for AI systems, no need for a big DC
<nexussfan> > It looks like nothing was found at this location. Maybe try a search?
<jack-sparrow> gentoo/hurd is trying with poor means, and arch/hurd has had no update since 2019
<damo22> should i just try increasing the user stack size?
<azert> damo22: check if anything uses recursion
<azert> stack exhaustion is often the result of an infinite recursive loop
<azert> perhaps some error handling path?
<damo22> its a bit difficult to know if there is any recursion in a translator just by reading the source, there are many functions, i need a backtrace
<damo22> maybe i can break on task_terminate, and then dig into the user stack?
<jab> If you're interested in Alpine Linux, then you might talk to sergey. He wanted to start an Alpine Hurd distro at some point.
<sobkas> #define _IOT_sync_merge_data _IOT(_IOTS(struct sync_merge_data), 1, 0, 0, 0, 0)
<jab> jack-sparrow: best of luck!
<sobkas> I have made definitions of _IOT_ and after an install it started to behave strangely, i.e. the glxgears window became full screen, but after a reboot it works ok, even a bit faster... ???
<jab> sobkas: sounds pretty cool!
<sobkas> so I have a problem with the nfs translator
<sobkas> settrans: /hurd/nfs: Translator died
<sobkas> settrans: /hurd/nfs: Translator died
<sobkas> settrans: /hurd/nfs: Translator died
<azert> aculnaig: what do you think about the httpfs translator?
<aculnaig> azert: it has some potential but needs a deep rewrite.
<azert> is it true that in order to open a directory, it assumes that none of the parent directories in the full path can be 404?
<aculnaig> azert: to answer your first question, it does a traversal up to the file, checking every parent, and if it encounters a 404, it returns an error
<aculnaig> azert: to answer your second question, no, it does not support WebDAV.
<azert58> I think it needs changing to libnetfs
<aculnaig> if you want to look at it together we can pair this friday
<aculnaig> at the moment i am writing a fix to let stat work correctly
<aculnaig> the last patch i am writing is from today's mailing list
<azert58> I think it's better that you just keep doing what you feel like
<jab> aculnaig: it would be cool if httpfs supported sitemaps. I believe WordPress supports that... probably a fair number of other sites.
<azert> I think there are too many ways one could write an httpfs translator
<azert> depending on how one wants to use it
<jab> azert: my biggest issue with network translators (httpfs is probably one of the worst) is that if
<jab> I have httpfs set up on ~/gnucode.org/, then cd ~/gnucode.org; ls;
<jab> hangs my system for 30+ seconds.
<jab> hangs my shell rather.
<azert> but nfs has that issue on Linux too, you just need a good server and a decent connection
<azert> jab: I am convinced that the "right" way to do networking filesystems is the Dropbox way
<azert> and both macOS and Windows have partially moved in that direction
<azert> that is, you want a translator that uses the parent filesystem as a cache, and has a distributed network backend that stores chunks of data by their hash
<azert> I think CephFS on Linux is the professional version of this concept
<jab> hmm. networking is something I want to learn more about. :)