IRC channel logs

2024-11-19.log

<gnucode>sneek later tell azert I think you were asking Sergey why his 9p filesystem was not using libnetfs. I am of the opinion that we should rename libnetfs to libdirfs, because libnetfs has absolutely no logic to deal with networking. It is only meant to provide local virtual directories.
<sneek>Will do.
<Pellescours>hello
<Pellescours>I just tried to cross-compile hurd (with flavio’s script), and the configure fails if "mach_ncpus" is not defined.
<Pellescours>I don’t know what should define it by default but currently if you don’t define it, it’s not defined at all
<Pellescours>(maybe it’s a problem on my side, but I just cleaned everything and re-ran the bootstrap script and got this error)
<youpi>configure.ac is already defining it
<youpi>the default being 1
<Pellescours>not in my case: ../configure: line 7566: [: -gt: unary operator expected
<Pellescours>I’m using archlinux, with all packages up to date, I did a "reconfigure -fi" before
<Pellescours>Ah I don’t know if it’s a configure generation bug or whatever but I have a usage before definition in my configure script
<Pellescours>line 7566: usage of $mach_ncpus, but definition of mach_ncpus is line 7808
<Pellescours>For some reason, in my setup, in the configure script, I have the content of i386/configfrag.ac appear *before* the root configfrag.ac
<Pellescours>Ah it’s expected: https://gitlab.com/gnu-hurd/gnumach/-/blob/master/configure.ac?ref_type=heads#L170, the general options appear after the arch-specific ones. So if a variable is defined in the root configfrag.ac, it can’t be used in the arch-specific ones because it isn’t defined yet at that point
<Pellescours>the mach_ncpus definition should be moved to configfrag-first if it needs to be a global definition
<Pellescours>or in the configure.ac directly
<Pellescours>And actually it already appears in current builds (we just didn’t notice it because it was silenced): https://github.com/flavioc/cross-hurd/actions/runs/11639891161/job/32416611687#step:12:9469
<solid_black>hi
<solid_black>azert: indeed, though neither the netfs version nor the newer no-netfs versions are complete
<solid_black>the thing about netfs is, it's not designed for over-the-network file systems at all
<solid_black>it's designed for "pseudo" file systems, like Linux sysfs
<solid_black>the most prominent example of this is path resolution
<solid_black>in network file systems, much like in any kind of RPC, roundtrip latency matters
<solid_black>and the Hurd fs protocols are actually designed to reduce path resolution latency
<solid_black>in that you send a whole path to a server in dir_lookup, and the server looks up as many path components as possible, in a single request
<solid_black>as opposed to you having to issue separate requests to traverse individual directories
<solid_black>the same is true of network fs protocols -- at least of 9p
<solid_black>but libnetfs is designed so that it drives the lookup, and invokes your callback once for each path component
<solid_black>which means that you have to do the roundtrip over a network each time
<solid_black>it's true that you could replace netfs_S_dir_lookup with your own implementation though
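A rough C sketch of that contrast, with the netfs_attempt_lookup signature written from memory and remote_walk as a hypothetical stand-in for a 9p Twalk-style request:

    #include <hurd/netfs.h>
    #include <errno.h>

    /* Hypothetical helper standing in for a 9p Twalk: one network roundtrip
       that can resolve any number of path components at once.  */
    extern error_t remote_walk (struct node *dir, const char *path,
                                struct node **result);

    /* The hook libnetfs gives a translator (signature from memory): it is
       handed ONE component at a time, because netfs_S_dir_lookup drives the
       component-by-component loop itself.  Resolving "a/b/c/d" through this
       hook therefore costs four network roundtrips.  */
    error_t
    netfs_attempt_lookup (struct iouser *user, struct node *dir,
                          char *name, struct node **node)
    {
      return remote_walk (dir, name, node);
    }

    /* What a network translator would prefer: take the whole remaining path,
       as dir_lookup itself delivers it, and resolve it with a single Twalk.
       Doing that means replacing netfs_S_dir_lookup with your own
       implementation (or not building on libnetfs at all).  */
    static error_t
    lookup_whole_path (struct node *dir, const char *path, struct node **node)
    {
      return remote_walk (dir, path, node);
    }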
<solid_black>by the way, the intermediary path components don't even have to *exist*, for some kinds of filesystems
<solid_black>they do for 9p, but think of httpfs
<solid_black>you're looking up http://example.com/foo/bar.png, which exists, but http://example.com/foo may very well give you a 404 (or 403, or something)
<solid_black>and your callback that netfs invokes doesn't even know whether the path component it's looking up is final or intermediary, so it can't implement a mixed strategy where it pretends the intermediary components exist without doing any request, and only maps 404 to ENOENT for the final one
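A minimal sketch of that mixed strategy, assuming a hypothetical "final" flag and hypothetical HTTP helpers; libnetfs passes nothing like this flag to the callback, which is exactly the problem:

    #include <hurd/netfs.h>
    #include <errno.h>

    /* All helpers below are hypothetical, named only for this sketch.  */
    extern error_t make_placeholder_node (struct node *dir, const char *name,
                                          struct node **node);
    extern error_t make_remote_node (struct node *dir, const char *name,
                                     struct node **node);
    extern int http_status_for (struct node *dir, const char *name);

    /* The strategy an httpfs-style translator would want.  It needs to know
       whether the component is final, which netfs_attempt_lookup is never
       told, so it cannot be written as a netfs per-component callback.  */
    static error_t
    lookup_mixed (struct node *dir, const char *name, int final,
                  struct node **node)
    {
      if (!final)
        /* Intermediary component: pretend it exists, issue no request.  */
        return make_placeholder_node (dir, name, node);

      /* Final component: do the real request and map 404 to ENOENT.  */
      if (http_status_for (dir, name) == 404)
        return ENOENT;
      return make_remote_node (dir, name, node);
    }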
<solid_black>the other thing is peropen data
<solid_black>and here I don't quite remember the details
<solid_black>but I think it was that netfs wants you to implement ops on *nodes*, and you can attach your custom data to nodes
<solid_black>but it doesn't let you associate your custom data with peropens, which is what you actually want (IIRC)
<solid_black>because if the node "actually exists" on a remote host, the translator has to "open" it on the remote, on behalf of the client
<solid_black>and you likely want to have separate handles to the remote node, corresponding to your peropens
<solid_black>for instance in httpfs, perhaps different clients require sending different cookies, or different http basic auth
<solid_black>(I don't think the actual Hurd httpfs supports either of those, but just as an example)
<solid_black>in 9pfs, I actually wanted to put 'fid's into the protid
<solid_black>right, so I probably meant protid rather than peropen in the above
<solid_black>but even peropen would still be better than the node
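A rough sketch of that mismatch in C, assuming the usual libnetfs convention of a translator-defined struct netnode (recalled from memory); the per-open struct is purely hypothetical:

    #include <hurd/netfs.h>
    #include <stdint.h>

    /* Per-NODE data: libnetfs lets the translator define struct netnode and
       hang one off each struct node, so this is the only slot available for
       custom state.  A remote handle kept here is shared by every client
       that has the node open.  */
    struct netnode
    {
      uint32_t fid;          /* hypothetical: one shared 9p fid per node */
    };

    /* Per-OPEN data: what a 9p or httpfs translator actually wants, i.e. a
       separate remote handle (and credentials) for each client's open.
       libnetfs has no user-extensible slot in struct peropen or struct
       protid, so this has nowhere to live without overriding or bypassing
       parts of the library.  */
    struct my_open           /* hypothetical */
    {
      uint32_t fid;          /* the fid walked/opened on behalf of this client */
      char *auth_cookie;     /* e.g. a per-client HTTP cookie or basic-auth token */
    };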
<solid_black>in summary: if what you want is implementing a "pseudo" file system that otherwise behaves like a normal fs, and want the mundane details to be taken care of for you by the framework, by all means, use netfs
<solid_black>but if what you want is to essentially forward the Hurd fs protocol over a network (while translating it into a different fs protocol such as 9p), you need to do things differently than what netfs does
<azert>solid_black: thank you this is very interesting
<sneek>Welcome back azert, you have 1 message!
<sneek>azert, gnucode says: I think you were asking Sergey why his 9p filesystem was not using libnetfs. I am of the opinion that we should rename libnetfs to libdirfs, because libnetfs has absolutely no logic to deal with networking. It is only meant to provide local virtual directories.
<azert>Did you also consider caching? In the sense that you might think of a network filesystem as a filesystem on a very slow shared medium that also poses cache consistency concerns
<solid_black>yes, caching is an entire question of its own
<azert>That’s where I was thinking that starting from libdiskfs might be the best option
<solid_black>I, ugh, haven't yet seen why there have to be multiple fs frameworks (diskfs, netfs, treefs, trivfs) in the first place
<solid_black>I understand that "diskfs is for on-disk filesystems" and the like, but that doesn't sound like a strong enough justification
<solid_black>a lot of the code is the same / duplicated between diskfs/netfs for example
<azert>I’m really not sure about the whole reasoning
<azert>Behind these different frameworks
<azert>For instance, is tmpfs an on disk fs??
<solid_black>so it would make sense to me if there was a single framework that scales from the trivfs use case (single node) to both actual network fs and on-disk fs use cases (e.g. by offering optional block caching)
<azert>It’s a ram fs! All fs are ram fs if you put caching in the picture
<solid_black>tmpfs is weird in how it interacts with a pager
<solid_black>instead of just allocating memory and using that
<solid_black>but then again, I don't fully understand diskfs
<solid_black>and me wishing things would be structured better isn't going to change anything
<solid_black>we have what we have
<solid_black>but if, say, we wrote that hypothetical new async fs framework in Rust, we could try to apply the lessons learned from lib*fs, and make it suitable for all the relevant usecases
<azert>Sure! But that would need someone to do research on the different frameworks, presenting them in a coherent way. And I think that the current docs help up to a certain level of detail and insight
<azert>In fact, I’m not sure that the docs were written by the designers..
<solid_black>that someone could be you :)
<azert>Ahah
<Pellescours>youpi: which solution do you think is cleaner, moving the mach_ncpus definition to configure.ac or to configfrag-first?
<youpi>better to move it to configfrag-first
<gfleury>youpi: patch for pthread_setcancelstate sent