IRC channel logs

2023-09-27.log

<gnucode`>janneke: can you send me your email address. I will send you the invite to the conference
<gnucode`>nikolar: the same goes for you.
<nikolar>nikola@radojevic.rs
<gnucode`>And Gooberpatrol66 if you want in, please send me your email too.
<gnucode`>You can send me a private irc message if you would like. "/msg gnucode` <emailaddress>"
<Arsen>heh, another serb, :-)
<nikolar>Arsen hello
<Arsen>o/
<xelxebar>Wait... there's a hurd conference planned?!
<gnucode`>not exactly a hurd conference. discussion.
<gnucode`>do you want to listen in?
<xelxebar>gnucode`: Definitely interested!
<solid_black>hello :)
<solid_black>libfuse as shipped in Debian doesn't seem very functional, I can't even build a simple program against it: 'i386-gnu/libfuse.so: undefined reference to `assert''
<solid_black>(assert is of course a macro in glibc)
<solid_black>and it segfaults in fuse_main_real
<solid_black>low-level fuse ops do seem to map to netfs concepts nicely, as far as I can see so far
<solid_black>and (again, so far) I don't see any asynchrony in how bcachefs uses fuse, i.e. they always fuse_reply() inside the method implementation
<solid_black>but if we had to implement low-level fuse API, this would be an issue
<solid_black>because netfs is synchronous
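A minimal sketch of the synchronous style being described, against the upstream low-level API from fuse_lowlevel.h (backing_read is a made-up placeholder); the same handler could instead stash req and call fuse_reply_buf() later from another thread, which is the asynchronous case that is hard to express on top of a synchronous netfs callback:

    #define FUSE_USE_VERSION 31
    #include <fuse_lowlevel.h>

    static void
    my_read (fuse_req_t req, fuse_ino_t ino, size_t size, off_t off,
             struct fuse_file_info *fi)
    {
      char buf[4096];
      if (size > sizeof buf)
        size = sizeof buf;
      size_t n = backing_read (ino, buf, size, off);  /* hypothetical helper */
      /* the reply is sent before the handler returns: the synchronous pattern */
      fuse_reply_buf (req, buf, n);
    }

    static const struct fuse_lowlevel_ops my_ops = {
      .read = my_read,
    };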
<solid_black>this is again a place where I don't think netfs is actually that useful
<solid_black>libfuse should be its own standalone translator library, a peer to lib{disk,net,triv}fs
<solid_black>yell at me if you disagree
<youpi>or perhaps make it use libdiskfs ?
<youpi>there's significant code in libdiskfs that you'd probably not want to reimplement in libfuse
<solid_black>like what?
<youpi>starting a translator
<youpi>all the posix semantic bits
<solid_black>(this is another thing, I don't believe there is a significant difference that explains libdiskfs and libnetfs being two separate libraries. but it's too late to merge them, and I'm not an fs dev)
<solid_black>starting a translator is abstracted into libfshelp specifically so it can be easily reused?
<solid_black>is libdiskfs synchronous?
<youpi>I'm just saying things out of my memory
<solid_black>scratch that, diskfs does not work like that at all
<youpi>piece of it is in fshelp yes
<solid_black>it works on pagers, always
<youpi>but significant pieces are in libdiskfs too
<youpi>and you are saying you are not an FS person :)
<youpi>you do know libdiskfs etc. well beyond the average
<youpi>perhaps not the ext2 FS structure, but that's not really important here
<youpi>see e.g. the short-circuits in file-get-trans.c
<solid_black>I may understand how the Hurd's translator libraries work, somewhat better than the average person :)
<youpi>and the code around fshelp_fetch_root
<solid_black>but I don't know about how filesystems are actually organized, on-disk (beyond the basics that there are inodes and superblocks and journaled writes and btrees etc)
<youpi>you don't really need to know more about that
<solid_black>nor do I know the million little things about how filesystem code should be written to be robust and performant
<solid_black>yeah so as I was saying, libdiskfs expects files to be mappable (diskfs_get_filemap_pager_struct), and then all I/O is implemented on top of that
<solid_black>e.g. to read, libdiskfs queries that pager from the impl, maps it into memory, and copies data from there to the reply message
<solid_black>I must have mentioned that already, I'd like to rewrite that code path some day to do less copying
<solid_black>I imagine this might speed up I/O heavy workloads
<youpi>? it doesn't copy into the reply
<youpi>it transfers map
<solid_black>it does, let me find the code
<youpi>in some corner cases yes
<youpi>but not normal case
<youpi> https://darnassus.sceen.net/~hurd-web/hurd/io_path/
<solid_black>libdiskfs/rdwr-internal.c, it does pager_memcpy, which is a glorified memcpy + fault handling
<solid_black>don't trust that wiki page
<youpi>why not ?
<youpi>no, pager_memcpy is not just a memcpy
<youpi>it's using vm_copy whenever it can
<youpi>i.e. map transfer
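Roughly the idea being described here, not the actual libpager code: if source, destination, and length are all page-aligned, ask the kernel for a map transfer instead of touching the bytes, otherwise fall back to a plain copy (src/dst are vm_address_t, len is vm_size_t):

    kern_return_t kr = KERN_FAILURE;
    if (src % vm_page_size == 0 && dst % vm_page_size == 0
        && len % vm_page_size == 0)
      /* "map transfer": pages are shared copy-on-write, no data is moved */
      kr = vm_copy (mach_task_self (), src, len, dst);
    if (kr != KERN_SUCCESS)
      memcpy ((void *) dst, (const void *) src, len);  /* the real copy */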
<solid_black>well yes, but doesn't the regular memcpy also attempt to do that?
<youpi>it happens to do so indeed
<youpi>but that doesn't matter: I do mean it's trying *not* to copy
<youpi>by going through the mm
<youpi>note: if a wiki page is bogus, propose a fix
<solid_black>I think there was another copy on the path somewhere (in the server, there's yet another in the client of course), but I can't quite remember where
<solid_black>and I wouldn't rely on that vm_copy optimization
<solid_black>it may be useful when it's working, but we have to design so there's no need to make a copy in the first place
<solid_black>ah well, pager_read_page does the other copy
<youpi>when things are not aligned etc. you'll have to do a copy anyway
<solid_black>but then again, this is all my idle observations, I'm not an fs person, I haven't done any profiling, and perhaps indeed all these copies are optimized away with vm_copy
<youpi>where in pager_read_page do you see a copy?
<youpi>it should be doing a store_read
<youpi>passing the pointer to the driver
<solid_black>ext2fs/pager.c:file_pager_read_page (at line 220 here, but I haven't pulled in a while)
<solid_black>it does do a store_read, and that returns a buffer, and then it may have to copy that into the buffer it's trying to return
<solid_black>though in the common case hopefully it'll read everything in a single read op
<youpi>it's in the new_buf != *buf + offs case
<youpi>which is not supposed to be the usual case
<solid_black>but now imagine how much overhead this all is
<youpi>what? the ifs?
<solid_black>we're inside io_read, we already have a buffer where we should put the data into
<youpi>I have to go give a course, gotta go
<solid_black>we could just device_read() into there
<youpi>you also want to use a cache
<youpi>otherwise it'll be the disk that'll kill your performance
<youpi>so at some point you do have to copy from the cache to the application
<youpi>that's unavoidable
<youpi>or if it's large, you can vm_copy + copy-on-write
<youpi>but basically, the presence of the cache means you can have to do copies
<youpi>and that's far less costly than re-reading from the disk
<solid_black>why can't you return the cache page directly from io_read RPC?
<youpi>that's vm_copy, yes
<youpi>but then if the app modifies the piece, you have to copy-on-write
<youpi>anyway, really gotta go
<solid_black>that part is handled by Mach
<solid_black>right, so once you're back: my conclusion from looking at libfuse is that it should be rewritten, and should not be using netfs (nor diskfs), but be its own independent translator framework
<solid_black>and it just sounds like I'm going to be the one who is going to do it
<solid_black>and we could indeed use bcachefs as a testbed for the low level api, and darling-dmg for the high level api
<solid_black>I installed avfs from Debian (one of the few packages that depend on libfuse), and sure enough: avfs: symbol lookup error: /lib/i386-gnu/libfuse.so.1: undefined symbol: assert_perror
<solid_black>upstream fuse is built with Meson 🤩️
<solid_black>I'm wondering whether this would be better done as a port in the upstream libfuse, or as a Hurd-specific libfuse lookalike that borrows some code from the upstream one (as now)
<damo22>solid_black: what is your argument to rewrite a translator framework for fuse?
<damo22>i dont understand
<solid_black>hi
<damo22>hi
<solid_black>basically, 1. while the concepts of the libfuse *lowlevel* api seem to match those of hurd / netfs, they seem sufficiently different to not be easily implementable on top of netfs
<solid_black>particularly, the async-ness of it, while netfs expects you to do everything synchronously
<damo22>is that a bug in netfs?
<solid_black>this could be maybe made to work, by putting the netfs thread doing the request to sleep on a condition variable that would get signalled once the answer is provided via the fuse api... but I don't think that's going to be any nicer than designing for the asynchrony from the start
<solid_black>it's not a bug, it's just a design decision, most Hurd translators are structured that way
<damo22>maybe you can rewrite netfs to be asynchronous and replace it
<solid_black>i.e.: it's rare that translators use MIG_NO_REPLY + explicit reply, it's much more common to just block the thread
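A sketch of the condition-variable bridge mentioned above, with invented names: the netfs worker thread handling the RPC blocks until the fuse filesystem calls its reply function from wherever it likes:

    #include <errno.h>
    #include <pthread.h>

    struct fuse_pending
    {
      pthread_mutex_t lock;
      pthread_cond_t done;
      int replied;
      error_t err;
    };

    /* called from the fuse fs, e.g. via a reimplemented fuse_reply_err () */
    void
    pending_complete (struct fuse_pending *p, error_t err)
    {
      pthread_mutex_lock (&p->lock);
      p->err = err;
      p->replied = 1;
      pthread_cond_signal (&p->done);
      pthread_mutex_unlock (&p->lock);
    }

    /* called from the netfs thread, e.g. inside netfs_attempt_read () */
    error_t
    pending_wait (struct fuse_pending *p)
    {
      pthread_mutex_lock (&p->lock);
      while (!p->replied)
        pthread_cond_wait (&p->done, &p->lock);
      pthread_mutex_unlock (&p->lock);
      return p->err;
    }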
<solid_black>2. the current state is not "somewhat working", it's "clearly broken"
<damo22>why not start by trying to implement rumpdisk async
<damo22>and see what parts are missing
<solid_black>wdym rumpdisk async?
<damo22>rumpdisk has a todo to make it asynchronous
<damo22>let me find the stub
<damo22>* FIXME:
<damo22> * Long term strategy:
<damo22> *
<damo22> * Call rump_sys_aio_read/write and return MIG_NO_REPLY from
<damo22> * device_read/write, and send the mig reply once the aio request has
<damo22> * completed. That way, only the aio request will be kept in rumpdisk
<damo22> * memory instead of a whole thread structure.
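Very roughly, the pattern that FIXME describes (names and signatures here are approximate, not working code): keep only the reply port, return MIG_NO_REPLY from the server routine, and send the reply from the aio completion callback via the routine generated from device_reply.defs:

    kern_return_t
    rumpdisk_device_read (/* ... */ mach_port_t reply_port,
                          mach_msg_type_name_t reply_type /* ... */)
    {
      /* start_aio_read and struct aio_req are hypothetical */
      struct aio_req *r = start_aio_read (/* ... */);
      r->reply_port = reply_port;
      r->reply_type = reply_type;
      return MIG_NO_REPLY;      /* suppress the automatic MIG reply */
    }

    static void
    aio_done (struct aio_req *r)
    {
      /* explicit reply once the data is available */
      device_read_reply (r->reply_port, r->reply_type, r->err,
                         r->data, r->data_len);
    }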
<solid_black>ah right, that reminds me: we still don't have proper mig support for returning errors asynchronously
<damo22>if the disk driver is not asynchronous, what is the point of making the filesystem asynchronous?
<solid_black>the way this works, being asynchronous or not is an implementation detail of a server
<solid_black>it doesn't matter to others, the RPC format is the same
<solid_black>there's probably not much point in asynchrony for a real disk fs like bcachefs, which must be why they don't use it and reply immediately
<solid_black>but imagine you're implementing an over-the-network fs with fuse, then you'd want asynchrony
<damo22>what is your goal here? do you want to fix libfuse?
<solid_black>I don't know
<solid_black>I'm preparing for the call with Kent
<solid_black>but it looks like I'm going to have to rewrite libfuse, yes
<damo22>possibly the caching is important
<damo22>ie, where does it happen
<solid_black>maybe, yes
<solid_black>does fuse support mmap?
<damo22>idk
<damo22>good q for kent
<solid_black>one essential fs property is coherence between mmap and r/w
<solid_black>so if you change a byte in an mmaped file area, a read() of that byte after that should already return the new value
<solid_black>same for write() + read from memory
<solid_black>this is why libdiskfs insists on reading/writing files via the pager and not via callbacks
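The coherence property in question, as a tiny POSIX test that could be run against any filesystem (nothing Hurd-specific here):

    #include <assert.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int
    main (void)
    {
      int fd = open ("testfile", O_RDWR | O_CREAT, 0600);
      ftruncate (fd, 1);
      char *map = mmap (NULL, 1, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      map[0] = 'X';              /* store through the mapping */
      char c;
      pread (fd, &c, 1, 0);      /* the read() path must see it immediately */
      assert (c == 'X');
      return 0;
    }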
<solid_black>I wonder how fuse deals with this
<damo22>good point, no idea
<solid_black>does fuse really make the kernel handle O_CREAT / O_EXCL? I can't imagine how that would work without racing
<solid_black>guess it could be done by trying to open/create in a loop, if creation itself is atomic, but this is not nice
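The loop hinted at above, in plain POSIX terms; it relies only on O_CREAT|O_EXCL creation being atomic, and as said, it is not pretty:

    #include <errno.h>
    #include <fcntl.h>

    int
    open_or_create (const char *path, int flags, mode_t mode)
    {
      for (;;)
        {
          int fd = open (path, flags | O_CREAT | O_EXCL, mode);
          if (fd >= 0 || errno != EEXIST)
            return fd;              /* created it, or a real error */
          fd = open (path, flags);  /* someone else created it first */
          if (fd >= 0 || errno != ENOENT)
            return fd;              /* opened it, or a real error */
          /* it was deleted between the two opens; retry */
        }
    }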
<damo22>something is still slowing down smp
<damo22>it cant possibly be executing as fast as possible on all cores
<damo22>if more cores are available to run threads, it should boot faster not slower
<azert>Hi damo22, your reasoning would hold if the kernel weren't "wasting" most of its time running kernel-mode tasks
<azert>If replacing CPU_NUMBER by a better implementation gave you a two-digit improvement, that kind of implies that the kernel is indeed taking most of the cpu
<damo22>yes i mean, something in the kernel is slowing down smp
<azert>What about vm_map and all thread tasks synchronization
<azert>?
<damo22>i dont understand how the scheduler can halt the APs in machine_idle() and not end up wasting time
<damo22>how does anything ever run after HLT
<damo22>in that code path
<damo22>if the idle thread halts the processor the only way it can wake up is with an interrupt
<damo22>but then, does MARK_CPU_ACTIVE() ever run?
<damo22>hmm it does
<azert>I think that normally the cpu would be running scheduler code and get a thread by itself.
<damo22>thats not how it works
<damo22>most of the cpus are in idle_continue
<damo22>then on a clock interrupt or ast interrupt, they are woken to run a thread i think
<azert>If they are in cpu_idle then that’s what happens, yea
<azert>But normally they wouldn't be in cpu idle but running the scheduler and just picking a thread on their own
<azert>Cpu_idle basically turns off the cpu
<azert>To save power
<damo22>every time i interrupt the kernel debugger, it's in cpu_idle
<damo22>i dont know if it waits until it is in that state so maybe thats why
<azert>That means that there is nothing to schedule
<azert>Or yea that’s another explanation
<damo22>yes, exactly i think it is seemingly running out of threads to schedule
<azert>A bug in the debugger
<damo22>i need to print the number of threads in the queue
<youpi>adding a show subcommand for the scheduler state would probably be useful
<youpi>solid_black: btw, about copies, there's a todo in rumpdisk's rumpdisk_device_read : /* directly write at *data when it is aligned */
<solid_black>youpi: indeed, that looks relevant, and wouldn't be hard to do
<solid_black>ideally, it should all be zero-copy (or: minimal number of copies), from the device buffer (DMA? idk how this works, can dma pages be then used as regular vm pages?) all the way to the data a unix process receives from read() or something like that
<solid_black>without "slow" memcpies, and ideally with few vm_copies too, though transferring pages in Mach messages is ok
<solid_black>read() requires one copy purely because it writes into the provided buffer (rather than returning a new one), and we don't have mach_msg_overwrite
<solid_black>though again one would hope vm_copy would help there
<solid_black>...I do think that it'd be easier to port bcachefs over to netfs than to rewrite libfuse though
<solid_black>but then nothing is going to motivate me to work on libfuse
<azert>solid_black: I never work on things that don’t motivate me somehow
<azert>Btw, if you want zerocopy for IO, I think you need to do asynchronous io
<azert>At least that’s the only way for me to make sense of zerocopy
<solid_black>I don't think sync vs async has much to do with zero-copy-ness? why?
<solid_black>let me bring something else up, by the way
<solid_black>many of us here are excited about Rust :)
<azert>Let's say that you ask the kernel to dma from your buffer to a hard disk
<azert>That will be slow, unless it’s asynchronous then you don’t care
<solid_black>what do you think the future looks like regarding the Hurd and Rust?
<solid_black>should we just have Rust programs running? should we make it possible to write Hurd translators in Rust?
<solid_black>would there be bindings for existing Hurd libraries, or would there be whole separate libraries for Rust?
<azert>I am very skeptical of rust, I am waiting for a hardened version of c++
<solid_black>'cause in Rust, we could leverage language support for async-await
<azert>But sure, if someone goes through all the work of supporting rust, then I’d be happy to see that
<solid_black>azert: but write() never blocks on waiting for the data to reach disk, only fsync does
<solid_black>guess this is more of a question for youpi then
<azert>Write lets you overwrite the buffer
<azert>So does copying by definition
<solid_black>no, because CoW
<azert>Then you need to be aware since cow is basically a  surprise
<azert>Not the cleanest design
<solid_black>and CoW is already in Mach, a lot of lines of code and complexity in the kernel (that could be implemented as a userspace server... but don't get me started), so that we don't have to think about this in userland
<solid_black>once you send off a page via Mach IPC, you can mutate it if you need to, yet unless you do, there won't be a copy
<solid_black>it's not a surprise, it's the expectation
<solid_black>memory transferred over Mach IPC always works that way
<azert>If you write you pay the price of copying
<azert>So basically you shouldn’t write
<azert>It would be better to agree not to write and get a notification when the dma is over
<solid_black>ok, let me ask a somewhat different question
<solid_black>if I make libfuse-as-an-independent-translator-framework happen, would anybody use it? will anybody be excited about it?
<solid_black>is there some fuse fs everyone always wanted to use under the Hurd, but it was never possible?
<solid_black>sshfs?
<solid_black>does GNOME's gio / gvfs thing work via fuse? how does it work under GNU/Hurd currently, if at all?
<youpi>solid_black: there's indeed more in libfuse than just bcachefs, there are a lot more FS that can be used through libfuse
<solid_black>that's what I'm asking, would that unblock something that people actually wanted to use in practice?
<gnucode>howdy all
<solid_black>hi gnucode!!
<gnucode>looks like you're doing a deep dive on filesystem stuff! Seems like a lot of fun learning!
<solid_black>"deep dive" meaning I read through fuse.h, but sure :)
<solid_black>I got -- what seems to be -- the invitation letter, but unlike the usual Google Calendar invitations, there's no button to confirm my participation, or to join, or to view the event in any way
<solid_black>gmail originally classified it as spam, even
<solid_black>I checked source html, and there doesn't seem to be a hidden link to the event anywhere either
<gnucode>great.
<gnucode>I've never actually used gmail calendar before. :(
<gnucode>worst case scenario, I will be on #hurd on the day of. Just message me.
<solid_black>(also isn't it ironic how you were planning to use an fsf-approved free software-based conferencing app, and we ended up with google's proprietary services instead...)
<gnucode>solid_black: it is pretty funny.
<solid_black>I'm sure me and you will figure it out one way or another; but what if everybody else also got the same thing? what if Kent did?
<gnucode>I did an interview with Luke from libre-soc.org before...
<gnucode>I thought I had all of the interview software stuff figured out before the call began...
<gnucode>I had to spend the first 10 minutes of the call figuring out how to have our conversation...
<gnucode>it'll all work out bro. :)
<gnucode>but yes. I will try to resend you the gmail calendar stuff.
<solid_black>should I send you screenshots of what this looks like for me vs what the usual calendar invitation looks like?
<gnucode>yes please.
<solid_black>while I'm at it: gnucode: would you use libfuse on the Hurd if it worked much better?
<solid_black>and for what?
<gnucode>I might use sshfs.
<gnucode>but to be fair, I think you are the more technical of us. :)
<gnucode>I am very excited to hear your thoughts too.
<gnucode>I will do my best to make sure everyone knows how to connect to the chat.
<gnucode>solid_black: I am really glad that you are going to be there on the discussion with me.
<gnucode>It'll be nice when I say things like "The Hurd is based on the Windows operating system design and is written in C++"....
<gnucode>you'll be able to correct me. :)
<solid_black>:D :D
<fury999io>let's complete hurd development by gnu 50
<solid_black>that makes me actually wonder how much Kent knows about the Hurd
<solid_black>and how much he *wants* to know
<gnucode>me too. it'll be fun to find out. maybe he's somewhat excited about other OSes using the bcachefs.
<solid_black>could be!
<solid_black>this reminds me of how ReactOS uses btrfs
<gnucode>what!? I didn't know that!
<solid_black>like you'd think, it's a windows 2003 server clone, right, it wouldn't support or promote something like btrfs
<solid_black>we could kind of make the same leap, from ext2 to bcachefs
<gnucode>it would certainly be pretty interesting! We'd get lots of more people interested in using the Hurd.
<gnucode>I gotta get going. Should be back on in a few.
<jab>hello again
<gnucode>xelxebar: send me your email and I will send you an invite to the discussion
<solid_black>I did get the new invitation!
<solid_black>though it keeps insisting you're an "unknown sender" and we've never been in contact before
<gnucode>hmmm.
<gnucode>weird.
<gnucode>does it look better now?
<solid_black>it does look better, and I was able to add the event to my calendar
<solid_black>18:00 - 19:00 msk time, cool
<gnucode>nice!
<gnucode>I will resend the invites then.
<gnucode>solid_black: by the way...
<gnucode>where is your code for 9pfs ?
<solid_black>unpublished :D
<solid_black>in ~/9pfs :D
<solid_black>same as epoll
<solid_black>and x11fs
<gnucode>may I encourage you to publish it? I know someone worked on porting wayland to the hurd too.
<solid_black>and who remembers how many more Hurd-related projects
<solid_black>that someone might have been me ;)
<gnucode>I bet those projects are already really useful.
<gnucode>hahaha!
<solid_black>wayland is actually published, if unannounced
<solid_black>but it needs epoll, which is not
<gnucode>I've already had a lot of use out of the terrible-mdns-responder. janneke didn't actually know that avahi definitely does not work on the hurd.
<solid_black>I would not terribly (hah!) mind if someone packages it for Guix ;)
<gnucode>I encourage janneke to do so, if he so chooses.
<solid_black>the state of 9pfs is the netfs branch worked ok, but rather slowly because of (what to me seem to be) inherent netfs limitations
<solid_black>the no-netfs rewrite started to gain feature parity somewhat, and was much faster
<gnucode>solid_black: have you heard of x15 ?
<solid_black>but then I context-switched to other projects
<solid_black>richard's kernel? of course I have
<gnucode>did you know that christine lemmer webber didn't know about it...until I mentioned it. :)
<gnucode>It's a great project that few people know about. :)
<solid_black>(but why would you expect her to? is she into microkernels?)
<gnucode>that's fair.
<gnucode>I am sure that whatever projects you work on, they will be super awesome.
<gnucode>but I hope they don't get lost. :)
<solid_black>I wouldn't make such a blanket statement, and don't put me on a pedestal either, I'm just a random hacker guy
<solid_black>but thanks :)
<gnucode>and I would like to hear your thoughts on why libnetfs has some limitations, especially in filesystems that require some type of networking.
<gnucode>Your explanation that you should think of libnetfs as "libdirfs" really helped me understand it more conceptually.
<solid_black>the project I'm currently hacking on most is the GTK 4 version of the Ladybird browser
<solid_black>which, btw, it'd be great if you wanted to try out
<gnucode>I would definitely want to try it out!
<gnucode>I use openBSD, the hurd, and guix system.
<gnucode>which one works well with ladybird?
<gnucode>I am currently using netsurf on the Hurd.
<solid_black>I have done some Hurd porting of Lagom, so Ladybird should be buildable on the Hurd :)
<gnucode>My T43 can compile netsurf in like 5-10 minutes!
<gnucode>that's pretty cool!
<solid_black>(I haven't tried that myself yet though)
<gnucode>are you running serenity OS ?
<solid_black>my irc client I'm connecting through right now is running on GNU/Linux, if that's what you're asking, but I'm one of the SerenityOS developers and I run it regularly
<gnucode>oh. I think I knew that you messed with SerenityOS too...but that's pretty cool.
<Gooberpatrol66>solid_black: regarding your question about the usage of fuse: https://github.com/koding/awesome-fuse-fs
<Gooberpatrol66>generally, fuse is used for weird experimental filesystems. like there are a bajillion distributed filesystems and most of them aren't mainstream enough to mainline a client into the linux kernel so they write fuse clients
<Gooberpatrol66>there was some cool experimental fs based on CRDTs i saw once that used fuse
<solid_black>see, I'm not just asking what cool things could be done with fuse, I'm asking if someone actively wants to use them on the Hurd, and would be excited for that becoming possible
<Gooberpatrol66>there's fuse versions of some (most?) linux filesystems, so that would give you compatibility with linux
<Gooberpatrol66>i mainly just want bcachefs, but for other users that could be important
<Gooberpatrol66>also, some translators could be re-based on fuse maybe (such as isofs) to outsource the work of maintaining them
<Gooberpatrol66>i'd really like nfs support in hurd, and there's solid fuse drivers for it, and i'm not sure how good the hurd translators for it are
<solid_black>hmm, so the point here being I guess that while on Linux fuse-based filesystems are always second-class citizens, for the Hurd we could try to make fuse (almost?) as good and as native as the other translator frameworks?
<Gooberpatrol66>if a thing's written for fuse, it works on hurd and linux, so you get the developer base of both OSes
<solid_black>and through that, gain (almost) first-class support for many popular (on-disk, Linux) filesystems through their fuse implementations
<Gooberpatrol66>yes
<solid_black>right?
<solid_black>that does sound attractive and worthwhile indeed
<solid_black>thanks!
<Gooberpatrol66>*thumbs up emoji*
<solid_black>this kind of sounds like the rump of filesystems now
<gnucode>I have to leave to go to work soon. but I'll be on later.