IRC channel logs
2024-02-27.log
<biblio>hi damo22, solid_black: I am trying to run hurd 32-bit using the cross-hurd script. I was able to build an image, but after booting I get "piixide0:0:0: lost interrupt" https://paste.debian.net/1308740/ Do you have any other custom build script, or how are you building hurd from sources?
<biblio>solid_black: yes, it has rumpdisk.
<solid_black>on i386 when building from source, I'm either building with in-Mach disk drivers, or ramdisk, or using Debian's rumpdisk
<biblio>solid_black: oh ok, I will keep that in mind. Thanks for the tips.
<biblio>solid_black: not yet. I read a few architecture guides and I am trying to learn more about the hurd 32-bit startup process. Will continue with risc-v.
<damo22>i think you can change the disk controllers to be ahci by default with -M q35
<solid_black>biblio: do you mean *Hurd* (i.e. userland) startup, or Mach startup?
<solid_black>as for the latter (Mach startup), I can tell you about hardware-independent things (or aarch64 specifics ;), damo22 and Samuel would be the ones to ask about x86 specifics
<biblio>solid_black: I think it is a Hurd startup issue. Sure, I will ask. I built with create-initrd.sh (3 modules loaded but not starting) - trying to figure it out (for learning only.)
<damo22>biblio: maybe you are missing a task-resume
<solid_black>can you post the code? (create-initrd.sh?) also if this is related to pci / rump, then that too would be damo22's territory
<biblio>solid_black: risc-v devs actually submitted code into the linux kernel without discussing many details in a document, unlike x86.
<biblio>damo22: I did task-resume. But I will check, I think I missed something.
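[Editor's note: the "lost interrupt" messages come from the emulated PIIX IDE controller. damo22's -M q35 suggestion switches QEMU to the Q35 chipset, whose default disk bus is SATA/AHCI, which rumpdisk copes with better. A minimal sketch of such an invocation; the image name, memory size, and serial options are placeholders, not from the log:

```shell
# Boot the cross-hurd image on the Q35 chipset so the disk shows up
# behind an AHCI controller instead of the legacy PIIX IDE one.
# "hurd.img" and "-m 2G" are placeholders; adjust for your build.
qemu-system-i386 -M q35 -m 2G \
    -drive file=hurd.img,format=raw \
    -serial mon:stdio
```
]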
<solid_black>this is an ok example, yes, but note that it's using ramdisk, not rumpdisk, and also ramdisk support is not in gnumach upstream, it's a Debian downstream patch
<solid_black>I've got a WIP "vm-object-create" patchset that achieves the same in a less hacky / much better way
<solid_black>biblio: but the log you posted definitely indicates you're running rumpdisk, not ramdisk?
<biblio>solid_black: there are two. one - create-image.sh, which uses rumpdisk. It worked after I added -M q35.
<biblio>solid_black: the other one, create-initrd.sh - I am just running it to learn about the internals, without rumpdisk.
<biblio>solid_black: rumpdisk issue solved. Now just trying to figure out why the task is not starting after I boot with the img created with initrd. let me reproduce now.
<biblio>damo22: that is used for create-image.sh with rump, it worked 100%
<biblio>damo22: create-initrd.sh uses grub.initrd*
<solid_black>biblio: unfortunately, the current boot process has many issues, one of them being that if things go wrong, there's not a lot of (read: none) info on *what* went wrong
<damo22>that should probably be -T device:rd0
<solid_black>"-T device" means "type is device", "rd0" is the actual spec of what to open (device named rd0, in this case)
<biblio>solid_black: damo22: but in general cross-hurd helped me a lot to understand the internals.
<solid_black>cross-hurd helped me a lot too, to figure out how to bootstrap and cross-compile things
<damo22>biblio: it has prompt-task-resume
<biblio>solid_black: damo22: I also tried task-resume but same output. tasks are not loaded.
<biblio>damo22: I did but nothing is happening.
<damo22>it's truncated from a com port console
<biblio>damo22: solid_black: I am using a remote machine to run it, so I am running with console=com0
<solid_black>biblio: does it hang just there at "3 multiboot modules"?
<solid_black>yeah from that vm_page output, looks like you only have very little RAM (31M?)
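[Editor's note: the kind of GRUB entry under discussion (a Debian-style ramdisk boot, with ext2fs opening the rd0 device) looks roughly like the following. This is a sketch only: the file paths are assumptions, the $(ramdisk-create) module line works only with Debian's downstream gnumach ramdisk patch, and the "-T device rd0" store spec follows solid_black's reading of damo22's "-T device:rd0" above:

```
menuentry "GNU/Hurd (ramdisk, sketch)" {
    multiboot /boot/gnumach.gz root=device:rd0 console=com0
    # Requires Debian's downstream ramdisk patch:
    module /boot/initrd.gz $(ramdisk-create)
    module /hurd/ext2fs.static ext2fs \
        --multiboot-command-line=${kernel-command-line} \
        --host-priv-port=${host-port} \
        --device-master-port=${device-port} \
        --exec-server-task=${exec-task} \
        -T device rd0 $(task-create) $(task-resume)
    module /lib/ld.so.1 exec /hurd/exec $(exec-task=task-create)
}
```
]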
<solid_black>that being said, we should try to handle this better; for one thing Mach could try to map the GRUB-loaded ELFs into the bootstrap tasks directly, without doing a copy
<solid_black>this would both be faster (no need to copy) and handle the low-mem situation better
<damo22>solid_black: how is your vm patch
<biblio>solid_black: task loaded: ext2fs --multiboot-command-line=console=com0 worked :D
<biblio>solid_black: damo22: thanks to both of you for your help.
<damo22>solid_black: what was wrong with the commit? just the commit message?
<solid_black>the commit message should have given a long-ish context / explanation of what's going on, but also I kind of changed my mind about only supporting a single entry
<solid_black>because it should be possible to support multiple entries without using alloca
<solid_black>by only copying out the first and the last entries; no "well-behaved" code should touch the entries in the middle
<damo22>you could just submit it as is with a simple explanation in the message, right?
<solid_black>no point in submitting it while Samuel is afk anyway
<damo22>then you can follow up with more if you wish
<damo22>isn't that exciting for you anymore?
<solid_black>it's possibly the most practical improvement happening in Hurd-land in a long time
<solid_black>x86_64 didn't change that much, no matter how exciting that was; yes, we now use a different instruction set, so what
<damo22>the sooner we get this to boot with smp, the sooner we can make it the default
<solid_black>SMP will magically make everything like 8 times as fast
<solid_black>yes, sure -- so what's the current blocker about SMP boot?
<damo22>i'm not sure, something in netdde and possibly another vm bug
<damo22>the nic doesn't get assigned an IP and it hangs
<damo22>if i ctrl-z it kills that and it mostly boots to a shell
<damo22>but if i revert pset pinning and apply my old scheduler patch, i get a vm fault
<solid_black>wait, what you just said about it mostly booting to the shell -- was that not without pset pinning?
<damo22>that was without pset pinning too, yes
<damo22>i meant, without pset pinning AND applying my old patch ....
<damo22>it removes dispatching directly to idle processors
<damo22>forces things to go back on a runq
<damo22>because i was seeing runqs getting starved of threads
<damo22>so i thought: don't worry about scheduling to idle processors, just put more threads on runqs
<biblio>damo22: recently i learned about the x86 architecture. Now I have theoretical knowledge of x86, but it is hard to understand where a fix is needed for x86.
<damo22>biblio: x86 has support in gnumach, we have done a lot of work in the last 12 months to make more modern features just work
<biblio>damo22: that's nice. Just asking, is there anything pending, like writing test code, etc.?
<solid_black>are you looking for gnumach tasks specifically, or anywhere in the Hurd?
<damo22>biblio: currently we have issues with smp because the boot process hangs if you run a full SMP system
<biblio>solid_black: mainly gnumach, but if needed hurd will also be fine.
<damo22>but we have been able to isolate the master cpu to a separate processor set, and put all the remaining cpus into a different set
<biblio>damo22: can I use your SMP branch to reproduce it? or do you have any document to test with?
<damo22>biblio: yes, i can send you my branch link
<damo22>it's basically master plus a few patches, and the last patch reverts the processor set pinning
<damo22>so you can test without that patch
<biblio>damo22: should I test by building and replacing gnumach in an existing i386 install, or should I use cross-hurd and build everything?
<biblio>damo22: how are you testing locally?
<damo22>and try to boot with and without pset pinning (top patch)
<biblio>damo22: the -kernel option in qemu?
<damo22>because we are trying to debug what is wrong with boot
<damo22>i have a debian install on a separate disk and use -hda /dev/sdd for example
<biblio>damo22: "Installing Debian/Hurd with QEMU using the Debian installer"
<damo22>biblio: you can download eg the latest disk image from the topic
<damo22>then install gnumach.gz into /boot and update-grub
<damo22>i usually develop everything inside my hurd install
<damo22>so i'm not leaving it much, except to reboot and test
<biblio>damo22: ok. I want to be clear: I could not use a cross-compiler (in Debian Linux) due to an old gcc (gcc without the hurd patches). So I use cross-hurd to build everything.
<biblio>damo22: are you compiling inside hurd or using a cross-compiler?
<solid_black>to cross-compile from my host (x86_64 GNU/Linux) to {i386,x86_64,aarch64}-gnu
<damo22>biblio: you need to make a build dir and then enter it and ../configure --enable-ncpus=8 --enable-apic --enable-kdb --disable-linux-groups
<damo22>then when you reboot into your smp system you need to add "-smp X" where X is the number of cores you want to enable
<damo22>if you use my fix-smp branch as is, it will hang at network
<damo22>you can ctrl-z to skip networking if it's hung
<biblio>damo22: noted. It would be great if you made a wiki page so others can also test :)
<damo22>but it would be better to unify the docs
<biblio>damo22: or a README in your branch, which you can skip while submitting as a patch.
<damo22>etno: web$ cat user/zhengda/howto.mdwn   this explains how to start a subhurd and route net packets to the host hurd
<etno>damo22: I shall have a look !
<biblio>solid_black: just one question about "cross-toolchain": are you applying the hurd-specific gcc patches manually?
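[Editor's note: damo22's build-and-test instructions above, spelled out end to end. The clone URL, directory names, and the installed kernel file name are assumptions; the configure flags, the /boot + update-grub step, and the -smp option are from the conversation:

```shell
# Out-of-tree gnumach build with SMP enabled (on the Hurd box itself,
# or with a working i686-gnu cross toolchain).
git clone https://git.savannah.gnu.org/git/hurd/gnumach.git
cd gnumach && autoreconf -fi
mkdir build && cd build
../configure --enable-ncpus=8 --enable-apic --enable-kdb --disable-linux-groups
make gnumach.gz

# On the Hurd install: drop the kernel into /boot and refresh GRUB.
cp gnumach.gz /boot/gnumach-smp.gz
update-grub

# Then boot the disk with the desired core count, e.g.:
#   qemu-system-i386 -M q35 -smp 4 -hda /dev/sdd ...
```
]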
<solid_black>I don't have any hurd-specific gcc patches, other than for aarch64
<biblio>solid_black: no, for cross-hurd I saw that it applied several patches before compiling. So I was just wondering if you also apply these patches manually.
<solid_black>I think at least some of those patches are already upstream
<solid_black>and I am using some Hurd-specific patches for GCC in the Alpine-based distro
<azert>I'm probably going to say something inappropriate.. damo22: sometimes with Debian Hurd it hangs at networking just due to ext2 corruption
<azert>I fix that by rebooting into the installer and running e2fsck
<azert>Probably not what you are experiencing, but worth mentioning, since looking at the logs it's not obvious at all that it's an easy fix when that happens
<biblio>damo22: I pasted the -smp 4 error. -smp 2 is showing "start acpi acpi pci rumpdisk..."
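[Editor's note: azert's workaround amounts to an offline filesystem check from the installer's rescue shell. Roughly; the partition name is an assumption, and the root filesystem must not be mounted while you check it:

```shell
# From the Debian/Hurd installer rescue shell, with the root fs unmounted:
e2fsck -f -y /dev/hd0s1   # adjust to your actual root partition
```
]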