IRC channel logs

2023-07-19.log


<damo22>its not quite a ramdisk, its more a netfs translator that creates a temporary /
<damo22>its a statically linked binary, i dont think it differs from a multiboot module
<damo22>i had a similar idea a couple of years ago but i had thought it could be part of mach, having the translators attach themselves temporarily to the kernel
<damo22>but this bootstrap translator seems like a better idea
<damo22>solid_black: how are the device nodes on the bootstrap netfs attached to each translator?
<damo22>and how does the first non-bootstrap task get invoked?
<damo22>does bootstrap resume it?
<Guest60>solid_black: maybe one could use a ram disk instead? What’s wrong with the ram disk? One could stick an unionfs on top of it to load the rest of the system after bootstrap. Wouldn’t that be very simple?
<damo22>from what i have seen with libmachdev, its a mess currently and this idea would clean it up a lot
<luckyluke>damo22: it looks similar to a ramdisk in principle, i.e. it exposes a fs which lives only in ram
<Guest60>I don’t see how his idea is better than a ram disk. Gnumach has access to ram before any device, if the first server is the RAM disk file system, then all he wants to do will be achieved automatically
<damo22>yes but this bootstrap idea addresses the problem with early bootstrap that there are no signals or console flowing
<damo22>and chaining from one server to the next via a bootstrap port is a kludge at best
<damo22>how many times have you seen the bootstrap process hang and just sit there
<damo22>this idea would solve that
<damo22>also, it would allow subhurds to be full hurds without special casing each task with bootstrap code
<Guest60>The RAM disk (netfs) and what takes care of setting up a console and signals could be two different servers
<Guest60>Giving more flexibility
<damo22>but if its a ramdisk essentially you have to provide it with a tar image
<Guest60>That’s an option
<damo22>having it live inside a bootstrap task only is preferable
<damo22>also the task could even exit when its done
<damo22>whether you use an actual ramdisk or not, you still need to write the task that boots the system
<Guest60>Yes
<damo22>that is different than how it works currently
<damo22>but ramdisk is implemented in mach, we want to move things out of mach
<luckyluke>So it would be an empty fs dynamically populated during bootstrap
<luckyluke>gnumach actually only creates a block device with a given content, and it could be anything, even a bootstrap script
<Guest60>It seems to me like he plans to implement emulation for the proc and exec servers in the bootstrap translator, that’s a slippery slope that ends in reimplementing them for no good reasons
<Guest60>Wouldn’t make more sense the following boot order: initrd, proc, exec, acpi, pci, drivers, unionfs+fs
<Guest60>With every server executable included in the initrd tarball
<azeem>(maybe you should set your name to something else than "Guest60")
<solid_black>hello luckyluke damo22
<solid_black>the bootstrap task will be loaded as the first multiboot module, yes
<solid_black>by grub
<solid_black>it's not a ramdisk, because a ramdisk has to contain some fs image (with data), and we'd need to parse that format
<solid_black>it might make sense to steer it more into that direction (and Samuel seems to have preferred it) because there could potentially be some config files, or other files that the servers may need to run
<damo22>hi solid_black
<solid_black>but I'm not super fond of that idea, I'd prefer the bootstrap fs to be just a place where ports (translators) can be placed and looked up
<damo22>solid_black: i almost cleaned up acpi and pci-arbiter but realised they are missing the shutdown notification when i strip out libmachdev
<solid_black>actually in my current code it doesn't even use netfs, it just implements the RPCs directly
<solid_black>I'll possibly switch to netfs later, or maybe not
<solid_black>damo22: "how are the device nodes on the bootstrap netfs attached to each translator?" -- I don't think I understand the question, please clarify
<damo22>i was wondering if the new bootstrap process can resume a fs task and have all the previous translators wake up and serve their rpcs
<damo22>without needing to resume them
<damo22>we have a problem with the current design, if you implement what we discussed yesterday, the IO ports wont work because they are not exposed by pci-arbiter yet
<damo22>i am working on it, but its not ready
<damo22>who is Guest60?
<damo22>haha
<solid_black>I still don't understand the problem
<solid_black>the bootstrap task resumes others in order
<solid_black>the root fs task too, eventually, but not before everything that has to come up before the root fs task is ready
<damo22>i dont think it needs to be a disk
<damo22>literally a trivfs is enough
<solid_black>why are I/O ports not exposed by pci-arbiter? why isn't that an issue with how it works currently then?
<damo22>solid_black: we are using ioperm() in userspace but i want to refactor the io port usage to be granularly accessed
<damo22>so one day gnumach can store a bitmap of all io ports and reject more than one range that overlaps ports that are in use
<damo22>since only one user of any port at any time is allowed
<damo22>i dont know if that will allow users to share the same io ports, but at least it will prevent users from clobbering each others hw access
<solid_black>Guest60: I don't want to exactly reimplement full proc and exec servers in the bootstrap task, it's more of providing very minimal emulation of some of their functions
<solid_black>like I want to implement the two RPCs from the proc interface, one to give a task the privileged ports on request and one to let the task give me its msg port
<solid_black>is that too much?
<azeem>(Guest60 left)
<solid_black>I see, but perhaps they'll be reading this via the channel logs later
<solid_black>"Wouldn’t make more sense the following boot order: initrd, proc, exec, acpi, pci, drivers, unionfs+fs" -- I don't see how that's better, but you would be able to try something like that with my plan too
<solid_black>see, the issue is not only getting some servers started, it's also integrating them into the eventual full hurd system later when the rest of the system is up
<solid_black>when early servers start, they're running on bare Mach, there are no processes, no auth, no files or file descriptors, etc
<solid_black>what I want is: have files available immediately (if not the real fs), and make things progressively more "real" as servers start up
<solid_black>when we start the root fs, we send everyone their new root dir port
<solid_black>when we start proc, we send everyone their new proc port
<solid_black>and so on
<solid_black>at the end, all those tasks we have started in early boot are full real hurd processes that are not any different to the ones you start later
<solid_black>except that they're statically linked
<solid_black>and not actually io_map'ed from the root fs, but loaded by Mach/grub into wired memory
<solid_black>ok, now that I'm done replying to that, let me try to understand what damo22 is saying again :)
<solid_black>damo22: (again, sorry for not understanding the hardware details), so what would be the issue? when the pci arbiter starts, doesn't it do all the things it has to do with the I/O ports?
<damo22>solid_black: io ports are only accessed in a raw way right now
<damo22>any user can do ioperm(0, 0xffff, 1) and get access to all of them
<solid_black>doesn't that require host priv or something like that?
<damo22>yeh probably
<damo22>maybe only root can
<damo22>but i want to allow unprivileged users to access io ports by requesting exclusive access to a range
<solid_black>I see that ioperm () in glibc uses the device master port, so yeah, root-only (good)
<damo22>first one in locks the port range
<solid_black>but you're saying that there's something about these I/O ports that works today, but would break if we implemented what we discussed yesterday? what is it, and why?
<damo22>well it might still work
<damo22>but theres a lot of changes to be done in general
<solid_black>I'm really not an idiot, I just don't know enough about hardware, be patient with me :)
<damo22>i dont think youre an idiot
<damo22>you certainly know more about hurd in general than me
<solid_black>let me try to ask it in a different way then
<damo22>i just know a few of the specifics because i worked on them
<solid_black>as I understand it, you're saying that 1: currently any root process can request access to any range of I/O ports, and you also want to allow *unprivileged* processes to get access to ranges of I/O ports, via a new API of the PCI arbiter
<damo22>yes
<solid_black>(but this is not implemented yet, right?)
<damo22>correct
<solid_black>2: you're saying that something about this would break / be different in the new scheme, compared to the current scheme
<solid_black>i don't understand the 2, and the relation between 1 and 2
<damo22>2 not really, i may have been mistaken
<damo22>it probably will continue working fine
<damo22>until i try to implement 1
<damo22>ioperm calls i386_io_perm_create and i386_io_perm_modify in the same system call
<damo22>i want to separate these into the arbiter so the request goes into pci-arbiter and if it succeeds, then the port is returned to the caller and the caller can change the port access
<solid_black>yes, so what about 2 will break 1 when you try to implement it?
<damo22>with your new bootstrap, we need i386_io_perm_* to be accessible
<damo22>im not sure how
<damo22>is that a mach rpc?
<solid_black>these are mach rpcs
<damo22>ok
<solid_black>i386_io_perm_create is an rpc that you do on device master
<damo22>should be ok then
<solid_black>i386_io_perm_modify you do on your task port
<solid_black>yes, I don't see how this would be problematic