IRC channel logs



***Server sets mode: +nt
<damo22>yay guix hurd vm:)
<damo22>so does this mean there is a guix recipe now for cross building a hurd vm?
<civodul>hi damo22!
***rekado_ is now known as rekado
<damo22>civodul: cool! i saw that
<damo22>i have never used guix but im wondering if it would be easier to develop hurd using guix
<damo22>instead of using a real disk with a hurd image
<damo22>i have 4 cores but i can only use 1 core when i boot hurd
<damo22>if i change one line of code, how much time would it take to rebuild a working image?
<damo22>if it was cached
<civodul>depends on which line of code
<civodul>if you're working on a translator, for instance, you'd rather have a regular dev env on native Hurd
<civodul>but the Guix machinery is great to do "integration testing" or "system testing"
<damo22>ok thanks for info
<damo22>do you know any CI that can run guix
<damo22>for example, do you think it would be possible to install guix + hurd cross image on something like "travis" attached to a github repo
<damo22>do you have an estimate of the total build time for -j2 on an x86_64 machine for the hurd hello world image?
<damo22>i wonder how difficult it would be to host a guix CI
<Gooberpatrol66>any guix-hurd image freezes for me at "start ext2fs: Hurd server bootstrap: ext2fs[device:hd0s2] exec"
<Gooberpatrol66>the provided guix-hurd-20200401.img and one i built using janneke's instructions
<Gooberpatrol66>tried booting in qemu on a guix machine and a gentoo machine
<rekado>Gooberpatrol66: the guix-hurd-20200401.img is really just the latest Debian GNU/Hurd with an extra shell script and GNU hello installed.
<rekado>Gooberpatrol66: is KVM enabled?
<Gooberpatrol66>i used this command qemu-system-i386 -enable-kvm -drive file=guix-hurd-20200401.img,cache=writeback -m 1G
<civodul>damo22: when pre-built binaries are available, Guix would simply download them instead of compiling
<civodul>otherwise, building libc, cross-GCC, etc. can take a lot of time with -j2
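(Editor's note: a hedged sketch of how one could check substitute availability before committing to a long local build; the package names and server URL are illustrative, not something stated in the discussion.)

```shell
# `guix weather' reports what fraction of the requested packages have
# pre-built substitutes available, so you can estimate how much will
# have to be compiled locally.
guix weather gcc-toolchain glibc

# Optionally restrict the query to one substitute server:
guix weather --substitute-urls=https://ci.guix.gnu.org hello
```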
<Gooberpatrol66>derp, apparently they're aware of the issue:
<civodul>of which issue?
<civodul>"they" are right here :-)
<wleslie>the image provided is cross-compiled to x86 correctly
<Gooberpatrol66>janneke, in that article
<wleslie>the problem described there was fixed before the image was released
<Gooberpatrol66>oh? then it must be something else
<wleslie>how old is your qemu?
*janneke is confused; we did not release a qemu image yesterday, only the build recipe
<Gooberpatrol66>the one on april 1st
<janneke>Gooberpatrol66: that's just a slightly modified version of the image in $topic
<Gooberpatrol66>how do I check that? I did a guix package -u on the 1st. I thought it was before booting the image but maybe i'm misremembering
<Gooberpatrol66>either way my install isn't very old at all
<rekado>Gooberpatrol66: is your user account a member of the kvm group? What is the ownership of /dev/kvm?
<rekado>(just to be sure that KVM actually *does* work and you’re not just seeing the effects of very slow emulation.)
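(Editor's note: the checks rekado suggests, spelled out as commands; the config.scm snippet in the comment is an illustrative assumption, not a quote from the discussion.)

```shell
# Verify KVM is actually usable; otherwise qemu silently falls back to
# slow software emulation, which can look like a boot hang.
ls -l /dev/kvm          # typically owned root:kvm, mode crw-rw----
groups | grep -w kvm    # is the current user in the kvm group?

# On Guix System, group membership is set in config.scm, roughly:
#   (user-account ... (supplementary-groups '("wheel" "kvm" ...)))
# followed by `sudo guix system reconfigure config.scm' and a re-login.
```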
<Gooberpatrol66>ok, i remember now, i had an error and had to edit config.scm and reconfigure the system before qemu would work
<Gooberpatrol66>*edit to add myself to kvm group
<Gooberpatrol66>so yes, i am a member of kvm
<Gooberpatrol66>./pre-inst-env guix build -f gnu/system/hurd.scm fails with "Unrecognized keyword: #:file-system-options"
<damo22>assuming the cross toolchain does not need to change and you can get substitutes for the cross-toolchain, it would be nice to know an estimate of how long the rest of the build for the qemu image would take on 2 cores... ie building gnumach, hurd and libs etc because we should probably work out a way to auto-build hurd in a CI and pass/fail upstream commits :D
<damo22>ie a hello world boot test
<civodul>Gooberpatrol66: are you on wip-hurd-vm? #:file-system-options appeared on that branch
<civodul>perhaps you need a rebuild or something
<Gooberpatrol66>git branch shows i'm on wip-hurd-vm
<rekado>damo22: we’ll soon build this once the branch is merged into either core-updates or master.
<rekado>“this” = all the components of the image, but likely not the image itself.
<damo22>rekado: thats interesting, does that mean it will substitute all components that are identical to previous time it was built locally?
<damo22>has anyone thought of a way to store a guix "cache" in a docker hub image or something like that so you can do a guix build, push the cache up inside a gratis CI and next time you build the same image to test it, you restore the old cache so you can get most of the components back... there should be demand for such a thing for CI tests
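(Editor's note: one way to approximate such a store cache with plain Guix commands; the file names and the CI upload/download steps are assumptions for illustration.)

```shell
# After a successful build, export the closure of the result:
guix archive --export -r "$(guix build -f gnu/system/hurd.scm)" > hurd-cache.nar
# ... upload hurd-cache.nar through the CI's cache/artifact mechanism ...

# On the next CI run, import it before building.  The importing store
# must first authorize the exporting store's signing key:
guix archive --authorize < signing-key.pub
guix archive --import < hurd-cache.nar
```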
<Gooberpatrol66>i did git clone, git checkout wip-hurd-vm, guix environment guix --pure --ad-hoc qemu./bootstrap, ./configure --localstatedir=/var, qemu-system-i386 -enable-kvm -m 512 -snapshot -hda $(./pre-inst-env guix build -f gnu/system/hurd.scm)
<Gooberpatrol66>*qemu, ./bootstrap
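(Editor's note: the steps above, consolidated with the correction applied; the clone URL and the `make` step are assumptions not stated in the discussion.)

```shell
git clone https://git.savannah.gnu.org/git/guix.git
cd guix
git checkout wip-hurd-vm
guix environment guix --pure --ad-hoc qemu   # enter a build environment
./bootstrap
./configure --localstatedir=/var
make                                         # assumed: build before pre-inst-env
qemu-system-i386 -enable-kvm -m 512 -snapshot \
  -hda "$(./pre-inst-env guix build -f gnu/system/hurd.scm)"
```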
<damo22>Gooberpatrol66: check your git repo doesnt have a wip-hurd-vm local branch that differs from upstream... it is possible your branch is not on the same commit even if your branch name is the same
<rekado>damo22: yes, you would get substitutes for all Hurd components then.
<damo22>eg if someone force pushed to that branch yours could be out of date
<damo22>Gooberpatrol66: try git log --graph --decorate --pretty=oneline --abbrev-commit --all
<Gooberpatrol66>git pull says up to date
<rekado>Gooberpatrol66: just to be sure: can you tell us what commit “git show” says you’re on?
<Gooberpatrol66>commit 1a98789ce7059be42f07a18ba0b369d500b5fe6a (HEAD -> wip-hurd-vm, origin/wip-hurd-vm)
<rekado>ok, I have the same
<Gooberpatrol66>rekado: does qemu-system-i386 -enable-kvm -m 512 -snapshot -hda \ $(./pre-inst-env guix build -f gnu/system/hurd.scm) run differently for you?
<damo22>i wonder if gitlab could add a feature to gitlab-runner that caches guix
<damo22>a gitlab-guix-runner
<damo22>do .scm recipes have built-in tests?
<Pellescours>When looking at configure options I saw something strange: --enable-kmsg -- disable use of kmsg device
<Pellescours>does it enable or disable kmsg ?
<jrtc27>probably someone changed an AC_ARG_DISABLE to an AC_ARG_ENABLE and didn't update the description
<Pellescours>And also when trying to build x86_64, using the --enable-pae make a config error (configure: error: can only enable the `PAE' feature on ix86.)
<Pellescours>pae is automaticaly enabled for x86_64 build but i wanted to try
<janneke>hmm, i can ssh out of the guix vm, but not ssh into it
<janneke>in fact i tried netcat and that sees nothing, i wonder what i'm missing
<janneke>i am starting with: -device rtl8139,netdev=net0 -netdev user,id=net0,hostfwd=tcp:
<janneke>so, it's not a problem with our ssh daemon (yet); it now makes sense that it was completely silent
<civodul>did you try to just "telnet localhost 2228" from the outside?
<janneke>yeah, qemu always accepts
<janneke>nothing to see from the inside
<civodul>so you don't get the greeting from sshd?
<janneke>silence, even `sshd -ddd'
<civodul>could sshd be listening on instead of
<janneke>civodul: no, i tried -o listenaddress= too; and also netcat ...
<gnu_srs1>janneke: -net nic,model=e1000<whatever> -net user,hostfwd=tcp:<whatever>:5577<whatever>-:22
<gnu_srs1>and you have sshd running on the Hurd image?
<janneke>gnu_srs1: well, i'm starting it (or netcat even) by hand to see no incoming
<janneke>what you paste, that works for you? i'm not sure if our gnumach supports e1000; i looked for that configure option
*janneke tries gnu_srs1's command
<gnu_srs1>e1000 is from netdde. I don't remember which NICs gnumach supports
<janneke>gnu_srs1: right, we don't use/have netdde right now
<janneke>so yeah, e1000 is qemu's default aiui and that gives us "Translator died"; gnumach cannot find it and does not create eth0
<youpi>gnumach doesn't have drivers for such recent cards, ne2k_pci is what you can get to work
<janneke>i have been using rtl8139, which gives me an outgoing connection
<janneke>i'll try ne2k_pci
<youpi>that one works too yes
<janneke>ah, okay
<janneke>just wondering why/where the connection into the VM gets lost and how to get progress there
<janneke>youpi: check, ne2k_pci gives the same result for me
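(Editor's note: a sketch of the user-mode networking setup under discussion, using a NIC gnumach has a driver for; the host port, image name, and login user are illustrative assumptions.)

```shell
# Forward host port 2222 to the guest's sshd on port 22, using the
# ne2k_pci NIC model that youpi confirms gnumach can drive:
qemu-system-i386 -enable-kvm -m 512 \
  -device ne2k_pci,netdev=net0 \
  -netdev user,id=net0,hostfwd=tcp::2222-:22 \
  -hda hurd.img

# Then, once sshd is listening inside the VM:
ssh -p 2222 root@localhost
```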
<civodul>linux/ says --enable-device-drivers=qemu gives you "ne" (ne2k, etc.)
<civodul>janneke: at some point we'll have to package netdde anyway :-)
<youpi>or rump when it gets available :)
<youpi>but that's not a priority since we have netdde working fine enough for now
<janneke>yeah, i heard the networking failure was only temporary :-)
<janneke>sometimes when you change something that "shouldn't make a difference", it fixes or breaks stuff
<youpi>and then it's useful to figure out why it was actually making a difference
<gnu_srs1>youpi: Booting a broken image still does not enter the mode that enables e2fsck /dev/hd0s1. It continues to check other partitions and reboots with a broken /.
<youpi>I haven't had a look at this yet
<gnu_srs1>That bug is somewhere in sysv* packages.
<youpi>(and have no idea whether I'd have any time to)
<youpi>probably, yes, investigation needed
<gnu_srs1>Anyway, how to enable check of / outside linux until that bug is fixed?
<gnu_srs1>enter rescue mode; umount /; fsysopts / --readonly; e2fsck -y /dev/hd0s1; complains that / is busy
<youpi>you can't unmount/ for sure
<youpi>but making it readonly with fsysopts should be enough
<youpi>for e2fsck to be fine with it
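(Editor's note: the procedure being attempted, spelled out; the device name is taken from the discussion, and note gnu_srs1 reports this sequence does not yet work reliably, so it is the intended flow rather than a verified recipe.)

```shell
# On the Hurd, / cannot be unmounted, so remount it read-only via the
# translator's runtime options instead, then check it:
fsysopts / --readonly        # make the root filesystem read-only
e2fsck -y /dev/hd0s1         # check the root filesystem
fsysopts / --writable        # remount read-write afterwards
```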
<gnu_srs1>Does not work, I've tried several times.
<youpi>we should make it work
<civodul>BTW, i thought it would be time to at least remove "-x" from the xattr option of ext2fs
<civodul>and perhaps even make it the default
<youpi>I don't know the status of this
<youpi>whether it was actually tested etc.
<youpi>if it has been tested extensively, then sure
<youpi>otherwise it needs to be tested
<youpi>blindly enabling it may introduce subtle bugs that we'd have a hard time relating to this
<civodul>that's something we can test with the VM
<civodul>and based on how it works, we can decide what to do
***DNS is now known as DNS777
***Emulatorman____ is now known as Emulatorman