IRC channel logs



***Server sets mode: +nt
<heth>Gooberpatrol66, thanks!
<heth>still did anyone compare anything, like sorting on 1core
***raghavgururajan is now known as RaghavGururajan
***RaghavGururajan is now known as raghavgururajan
<damo22>AlmuHS: hey how are you
<AlmuHS>I'm working now
<damo22>i finished my work for today
<AlmuHS>10:05 PM there, isn't it?
<AlmuHS>12:05 midday here
<damo22>what kind of work are you doing
<AlmuHS>free software migration
<damo22>i found a possible bug in ext2fs
<AlmuHS>youpi said some days ago that ext2fs is not smp-safe
<AlmuHS>what is the bug?
<damo22>linux creates symlinks that are broken in hurd
<damo22>its reproducible here
<AlmuHS>ls -s ?
<damo22>ln -s
<AlmuHS>ln -s ?
<AlmuHS>yes, a typo
<AlmuHS>ln -s doesn't work in hurd?
<damo22>yeah if you do that on linux ext2fs and mount it in hurd all symlinks will be broken
<AlmuHS>It's sad
<damo22>but hurd creates valid symlinks in hurd
<damo22>and hurd created symlinks are fine on linux
<damo22>i can probably fix it but i dont know the spec
<AlmuHS>ext2 spec?
<AlmuHS>a quick search
<AlmuHS>and this Stack Overflow post explains more:
<damo22>i_blocks 32-bit value representing the total number of 512-bytes blocks reserved to contain the data of this inode, regardless if these blocks are used or not.
<damo22>so if the inode contains a symlink, it should not matter if the i_blocks contains a value >0
<AlmuHS>yes, each inode contains a fixed number of direct pointers to blocks
<damo22>but currently if the i_blocks > 0 hurd's ext2fs tries to read the blocks
<damo22>which breaks a symlink which has i_blocks set to non-zero even though it has no blocks
<AlmuHS>soft links have their own inodes
<AlmuHS>a soft link, as I remember, is just a file which contains the path of the real file
<AlmuHS>the problem could be in the hard link
<AlmuHS>isn't it?
<damo22>no its in ln -s
<damo22>i checked
<AlmuHS>File ACL: 0 Translator: 0 ?
<AlmuHS>Links: 1 Blockcount: 0
<AlmuHS>is this the problem?
<damo22>the top one is a working symlink created in hurd
<damo22>the bottom one is created in linux on the same fs and is broken in hurd
<AlmuHS>yes, I'm seeing it
<damo22>its the blockcount
<damo22>blockcount should be 0
<damo22>or it should not matter to hurd that blockcount !=0
<AlmuHS>in linux it's not 0
<AlmuHS>Links: 1 Blockcount: 8
<damo22>but when hurd's ext2fs sees blockcount != 0 i assume it thinks there are blocks to read
<AlmuHS>really, the symlink is a file, so it might have a block
<AlmuHS>the symlink itself is a file, so it might fill at least a block, isn't it?
<damo22>no the inode itself holds the symlink data
<damo22>since its less than 60 chars
<AlmuHS>oh, ok
<AlmuHS>yes, I'm remembering now
<AlmuHS>the inode contains some data blocks
<damo22>maybe linux should not be setting blockcount = 8 but we cant change that
<damo22>we could just make hurd not care about blockcount != 0 for a small symlink
<AlmuHS>but how does hurd know that this is a symlink? is it specified in the inode data?
<AlmuHS>oh, ok
<AlmuHS>then yes, this solution could work
<damo22>youpi: do you think that is the right solution?
<damo22>the ext2 spec says that i_blocks is a 32-bit value representing the total number of 512-bytes blocks reserved to contain the data of this inode, **regardless if these blocks are used or not**.
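The fix being discussed here can be sketched roughly as follows. This is a simplified illustrative model, not the actual Hurd ext2fs code: the struct layout and names are hypothetical, though the 60 bytes of inline `i_block[]` storage match the ext2 "fast symlink" convention. The point is to decide symlink-ness from `i_mode` and the target length, ignoring `i_blocks` entirely:

```c
#include <stdint.h>

/* Hypothetical, simplified view of an ext2 on-disk inode: the
   i_block[] array (15 x 32-bit entries = 60 bytes) doubles as
   inline storage for short symlink targets ("fast symlinks"). */
struct ext2_inode_sketch {
    uint16_t i_mode;        /* file type and permissions */
    uint32_t i_size;        /* for a symlink: length of the target path */
    uint32_t i_blocks;      /* count of 512-byte blocks reserved */
    uint32_t i_block[15];   /* block pointers, or the inline target */
};

#define EXT2_S_IFLNK          0xA000
#define EXT2_MAX_FAST_SYMLINK 60      /* sizeof i_block */

/* Treat any symlink whose target fits in i_block[] as inline,
   regardless of what i_blocks says (Linux may leave it non-zero,
   e.g. when an ACL block is attached to the inode). */
static int
is_fast_symlink(const struct ext2_inode_sketch *ino)
{
    return (ino->i_mode & 0xF000) == EXT2_S_IFLNK
        && ino->i_size < EXT2_MAX_FAST_SYMLINK;
}
```

With a check like this, a Linux-created symlink with `i_blocks == 8` but an inline target would still be read from the inode instead of from nonexistent data blocks.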
<damo22>how do i dump the blocks in use by the data of the inode?
<AlmuHS>I suppose that you have to check all the pointers, to find those which are not NULL
<AlmuHS>although, I remember the inode stores the number of blocks used by the file. Maybe you can use this data to limit the search
<damo22>debugfs: block_dump -f linuxsymlink 0
<damo22>block_dump: Invalid argument while reading block 1685222760
<damo22>it has no blocks but reports an invalid i_blocks count
<damo22>maybe its a linux bug
<AlmuHS>It's possible
<youpi>AlmuHS: I didn't say that ext2fs is not smp-safe. I said it should be bound to cpu0. That's because the mach drivers for the disk are not SMP-safe, and thus they should be used only from cpu0
<AlmuHS>youpi: oh, ok. Then I misunderstood it
<youpi>damo22: I'm surprised that linux would not set the number of blocks to 0
<youpi>but yes for small symlinks it's inlined in the inode
<damo22>youpi: it seems 4k is the minimum size
<AlmuHS>but how can I ensure that these drivers only run on cpu0? By checking the name of the thread before assigning it to the queue?
<youpi>it'd be good to check that the ext2fs format asserts that
<youpi>and then hurd's ext2fs should indeed ignore the block count if the file length fits in the inode
<damo22>so it allocates 8 blocks even for a small inlineable symlink
<youpi>AlmuHS: by just binding the ext2fs process to cpu0
<youpi>and we want to do that for everybody for a start anyway
<AlmuHS>but, how can I bind a process?
<youpi>and only stop doing it for a few processes first
<youpi>AlmuHS: remember, the bound_processor field
<AlmuHS>and how can I find the names of these threads?
<AlmuHS>or any data to find the correct structure
<AlmuHS>oh, sorry. You only said ext2fs, not the drivers
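youpi's suggestion can be pictured with a toy model. These structures are purely illustrative (the real gnumach `thread` and `processor` types are far richer): the idea is just that binding a task such as ext2fs to cpu0 means setting the `bound_processor` field on each of its threads, so the scheduler never dispatches them elsewhere:

```c
#include <stddef.h>

/* Illustrative stand-ins for gnumach's processor and thread types. */
struct processor {
    int slot_num;                       /* which cpu this is */
};

struct thread {
    struct processor *bound_processor;  /* NULL = may run on any cpu */
    struct thread *next;                /* next thread in the same task */
};

struct task {
    struct thread *threads;             /* head of the task's thread list */
};

/* Bind every thread of a task to one processor, e.g. cpu0. */
static void
task_bind_to_processor(struct task *t, struct processor *p)
{
    for (struct thread *th = t->threads; th != NULL; th = th->next)
        th->bound_processor = p;
}
```

In the real kernel the scheduler would consult `bound_processor` when choosing a run queue; the sketch only shows the binding step itself.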
<damo22>youpi: possibly the extended attributes cost an extra 4k block?
<youpi>I guess it shouldn't if there is no xattr
<youpi>I don't know the details
<youpi>but that's probably worth checking that things are being done properly
<damo22>but if there is xattr in linux, we still dont want it to break in hurd
<damo22>ive investigated this issue to determine the problem but i dont know how to follow up and check the proper way of doing it
<AlmuHS>youpi: something like this?
<AlmuHS>oops, this patch has some garbage
<youpi>that could be something like this yes
<youpi>that'll be temporary only anyway
<AlmuHS>when might it be disabled?
<youpi>when we get rid of linux drivers from gnumach
<AlmuHS>when will that be?
<youpi>damo22: I guess the "proper way" is what Linux does + possibly some conservatism
<AlmuHS>mmm... I've just tested my gnumach smp with ext2fs bound to cpu0 (but without disabling the bind afterwards). But it still freezes
<youpi>strictly speaking, what you added for ext2fs doesn't change anything, since it's already bound to cpu0 for all threads by the first change
<youpi>which is what we want for a start anyway for now
<youpi>so it's no wonder that you still get a hang, that's to be solved first
<youpi>about ext2fs I just meant that as long as gnumach has linux drivers, ext2fs should be kept bound, even if we let other processes go around
<AlmuHS>then, why does it hang?
<youpi>I don't know, see where we were at the other day when we were trying to debug it
<youpi>and the mail I sent a bit later
<gnu_srs1>cd ..
<gnu_srs1>youpi: You built unbound 1.9.4-2+hurd.4 on -ports, closing bugs #905961, #905963, but they are not closed according to
<scovit>Hi, NetBSD has a half-baked MACH_COMPAT layer with partial support for Mach ports. Could the HURD be implemented under NetBSD one day?
<scovit>apparently in 2008 people worked on this, but found it too difficult
<scovit>wow, people still work on GNUmach
<gnu_srs1>(17:36:53) srs: youpi: You built unbound 1.9.4-2+hurd.4 on -ports, closing bugs #905961, #905963, but they are not closed according to
<scovit>Hi AlmuHS, may I ask you what do you think about GNUmach?
<AlmuHS>scovit: what do you mean?
<scovit>Is it well written, organized
<AlmuHS>the code of the original Mach is a bit dirty, but the files modified by GNU are a bit cleaner
<AlmuHS>the organization is a bit chaotic. Theoretically, gnumach should be architecture-independent, with the only architecture-dependent code stored under the i386/ directory
<AlmuHS>but, really, a lot of the code stored under kern/ is i386-dependent
<AlmuHS>who was scovit?
***Emulatorman__ is now known as Emulatorman
<damo22>if the inode has an ACL, it takes up s_blocksize number of blocks even for a fast symlink
<damo22>so linux tests if this is the case and subtracts i_blocks - s_blocksize / 512
<damo22>to see if that is 0
<damo22>(s_blocksize / 512) number of blocks*
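The Linux-side check damo22 describes might look like this. This is an illustrative sketch, not the actual Linux source; the function and parameter names are hypothetical. The idea: an attached ACL (xattr) block accounts for `s_blocksize / 512` units in `i_blocks`, so subtract those before testing whether the symlink really has data blocks:

```c
#include <stdint.h>

/* Does this symlink inode have zero real data blocks once any
   ACL/xattr block is discounted?  i_blocks counts 512-byte units;
   a block-sized ACL contributes s_blocksize / 512 of them. */
static int
symlink_has_no_data_blocks(uint32_t i_blocks, uint32_t i_file_acl,
                           uint32_t s_blocksize)
{
    uint32_t acl_units = i_file_acl ? s_blocksize / 512 : 0;
    return i_blocks - acl_units == 0;
}
```

On a 4k-block filesystem, a fast symlink with an ACL would show `i_blocks == 8`, and the check above would still classify it as having no data blocks.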
<damo22>i get hang in qemu for network :(
<damo22>it was working yesterday
<damo22>i installed libparted-dev and it broke somehow
<damo22>ls -la /dev/eth0 hangs
<damo22>in particular dir_lookup ("dev/eth0") hangs
<AlmuHS>did you try to restart pfinet?
<damo22>hmm there was a bogus port in ext2fs
<damo22>tracing back to pci-arbiter