IRC channel logs
2024-01-25.log
<wbartczak>I'm new to the Hurd, but I'd like to play a bit with RISC-V. Is there any port, or any materials worth looking at? I have a lot of experience with bare metal and Linux, so system-level work should be no problem. Currently, I'm going over all the published papers (architecture) and the source code.
<damo22>i think gcc is not ported to riscv for the i386-gnu target yet ?
<gnucode>wbartczak: the Hurd works well with x86 and x86-64. There is a started ARM port (GNU Mach still needs to be ported).
<gnucode>If you want to work on porting the Hurd to RISC-V, then you should probably talk to solid_black
<gnucode>he did a lot of the x86_64 port and most of the ARM port
<wbartczak>gnucode: Unfortunately, I left the x86 world a long time ago, mostly due to the prohibitive amount of work needed to get started with it. That said, I have lots of experience with armv7 and armv8. Starting there could also be a nice option.
<damo22>wbartczak: it would be nice to have you contribute !
<damo22>i think you need a hurd-specific toolchain
<damo22>and once the toolchain works, you can attempt to port gnumach
<damo22>solid_black is good to talk to as he is upstreaming aarch64-gnu support for gcc currently
<damo22>wbartczak: you can start with the qemu-riscv target i guess
<wbartczak>damo22: Thanks! This is a lot of help, since it's usually hard to find a reasonable foothold at the beginning. Let me google the aarch64 port and see the progress.
<wbartczak>damo22: Yes, this is my intention. However, I recently purchased a BeagleV-Fire :) so a fully working hardware implementation is also feasible.
<wbartczak>For ARM I have some older Raspberry Pi boards and some Rockchip 3568 and Rockchip 3288 boards. They are also nice for prototyping.
<wbartczak>So, in case I want to test on hardware, there's a lot to pick from. I believe I have some older NXP boards too, but they can be trickier to work with. That said, it's a long road ahead of me. I have installed Debian GNU/Hurd using qemu.
So, I need to get better acquainted with userspace and see the differences in kernel/servers/userspace interaction.
<damo22>wbartczak: if you want to have contributions merged, you need to assign copyright to the FSF for hurd-related projects
<damo22>this is so that the FSF can fight on your behalf if a 3rd party breaks the license, and can more easily manage the ownership of the project
<wbartczak>damo22: :D That's not prohibitive, luckily. I'm in the EU, so I am not restricted in what I do in my free time. I'm more than happy with that.
<wbartczak>signing the document shouldn't be a problem. But I will look into it.
<damo22>feel free to join the mailing list and you can ask on there
<gnucode>wbartczak: you might send an email to bug-hurd@gnu.org asking how to assign copyright
<gnucode>and you can ask me if you ever want to contribute to the Hurd wiki
<damo22>wbartczak: i am working on getting more x86 ISA support
<damo22>gnumach is the kernel, it should not have any drivers in it, as hurd follows a microkernel model. We have drivers in userspace
<damo22>the kernel only supports memory management, IPC and timers/interrupts
<wbartczak>damo22: Yes, my vocabulary needs an update. I worked too long with monolithic kernels. But I spent some time looking into MINIX 3, which is probably the closest thing to GNU Hurd + GNU Mach. If I'm correct, there was MIG as a generator for messages between the kernel and the rest of the system. I have to look into it more, but this reminds me of QNX/Minix a lot.
<wbartczak>gnucode: About the FSF copyright agreement, I found these two pages:
<gnucode>wbartczak: I just recently signed my copyright assignment. It took me a while to complete it...
<gnucode>Samuel sent me the forms to complete it.
<wbartczak>gnucode: Yes, I see why. I have nothing but bad experience with legal teams in all the companies I worked for. They're usually a bunch of nitpickers. The worst part is, everything not clearly stated is forbidden for them.
<damo22>i dont think its that, its more that the process takes time
<damo22>so if youre thinking of contributing, it may be in your interest to get that ball rolling
<wbartczak>damo22: Yup. I see a few points to plan around.
<wbartczak>damo22: It's done. I have no idea if you can look it up, but if you can, I go by the same nickname and a gmail domain.
<damo22>please send an email to the list to introduce yourself and you can ask about the process for assigning copyright
<damo22>our fantastic maintainer will help you
<wbartczak>are there any specific rules for posting? text vs. html etc.?
<damo22>plain text is preferable, and patches sent via git-send-email
<damo22>[ 0.028266] IOAPIC[0]: apic_id 4, version 33, address 0xfec00000, GSI 0-23
<damo22>[ 0.028274] IOAPIC[1]: apic_id 5, version 33, address 0xfec20000, GSI 24-55
<damo22>why is the IOAPIC reporting 24 + 8 = 32 interrupts?
<damo22>i thought IOAPICs only have 24 pins
<anatoly>damo22: just curious, where did you get the "8" from in the second line?
<damo22>well, counting 0 as another interrupt its an extra 8
<anatoly>by looking at the numbers I didn't realise 24 + 24 < 55 :-)
<damo22>i have no idea where these extra 8 interrupts are connected
<damo22>but if there are 56 interrupts, that could explain why its crashing my board
<damo22>as i only defined 48 vectors for the apics
<anatoly>damo22: from wikipedia: "On APIC with IOAPIC systems, typically there are 24 IRQs available, and the extra 8 IRQs are used to route PCI interrupts, avoiding conflict between dynamically configured PCI interrupts and statically configured ISA interrupts." Could it be the reason?
<damo22>no, the extra 8 are in addition to the ISA interrupts, of which there are 16
<damo22>but this IOAPIC seems to be reported to have GSIs between 24-55 = 32 interrupts (?)
<damo22>i guess my question is, do IOAPICs have a variable number of GSIs? or can you safely assume they always have 24
<saravia``>so, on a T43 <-- is it possible to run debian hurd on physical hardware, right?
<saravia``>gnucode: T60, T43 <-- are these the laptops which debian hurd runs on?
<anatoly>damo22: from some article: "It is worthwhile to note that it is possible to have several I/O APIC controllers in the system. For example, one for 24 interrupts in a southbridge and the other one for 32 interrupts in a northbridge. In the context of I/O APIC, interrupts are usually called GSI (Global System Interrupt). So, the aforementioned system has GSIs 0-55."
<damo22>yeah, ok but im still not sure how to configure the IOAPIC with 32
<damo22>or if the extra 8 are connected elsewhere in the chipset
<damo22>no, the MADT table has the starting GSI per IOAPIC but not the number of GSIs per ioapic
<damo22>thats why i thought they were hardcoded to 24
<damo22>the register where the version is stored also has the number of entries
<anatoly>I tried to decipher "This type represents a I/O APIC. The global system interrupt base is the first interrupt number that this I/O APIC handles. You can see how many interrupts it handles using the register by getting the number of redirection entries from register 0x01, as described in IO APIC Registers." from https://wiki.osdev.org/MADT
<anatoly>... "using the register by getting from register XXXX"
<anatoly>I'm not a native English speaker but it doesn't feel clear :-)
<etno>It seems to imply that the number of redirection entries may not equal the number of interrupts
<anatoly>^^^ that I don't see there at all :-)
<anatoly>"0x01: Get the version in bits 0-7. Get the maximum amount of redirection entries in bits 16-23. All other bits are reserved. Read only."
<etno>(not a native English speaker either :-) )
<damo22>i already coded that while we were talking about it, and now i support 56 GSIs
<damo22>but its still crashing at calibrate timer
<anatoly>damo22: I was not implying or assuming the opposite, given that my message was posted about 40 mins after yours. I just shared what was confusing me when I read it while talking to you.
The last message about the 0x01 register was for etno :-)
<anatoly>Thanks for bringing it up for discussion, it pushed me to search and read and now I know a little bit more :-)
<sneek>Welcome back solid_black, you have 1 message!
<sneek>solid_black, gnucode says: I have a very very simple website set up with emacs' org-mode
<etno>Now that I have context switching working on this Inspiron 1750, gnumach may be usable. The next problem seems to be with ACPI, which hangs for about 1 min. I need to set up an env to fetch the servers from a build env. damo22: you are using bootp to transfer files to grub, is that right?
<damo22>etno: no, i have two disks attached; one boots linux and the other boots hurd
<damo22>but i can run qemu from inside linux on the second disk if i want to
<damo22>so i boot linux, then boot qemu and develop inside hurd, then reboot the vm; if i want to test real hw i reboot into linux as well
<etno>damo22: ok thanks, I'll see if I can do a similar setup, build everything and run 👍
<damo22>but i have a second laptop for doing the development
<damo22>so i can control the bios over serial
<etno>Argh, no physical serial port available here :-/
<solid_black>(this pulls in not only glibc, but also openssl, zlib, apk, busybox, etc)
<damo22>if i dont call set_timeout but just loop forever in calibrate lapic timer, and print in hardclock, i get clock interrupts forever
<damo22>the cpu seems to crash when the timeout expires
<anatoly>solid_black: does it boot into something? :-)
<anatoly>solid_black: I'm planning to fork abuild and add your commit, and then prepare a script and a Containerfile to build it locally and let the builder container be used for bootstrapping
<anatoly>are you ok with that or am I off the plan?
<solid_black>anatoly: haven't tried booting yet, gonna do that now
<solid_black>though I already see that /libexec/runsystem wants bash, so I'll install that too for now
<solid_black>can't say I understand exactly what you're planning to do with the Containerfile / script, but sure, please go ahead
<anatoly>/libexec/runsystem is from hurd, I guess?
<solid_black>it of course makes sense for upstream Hurd to require Bash, since both are GNU projects
<anatoly>My idea is to have a fork of abuild for now to patch it "the proper way". Then the forked package needs to be built and be installable into the build system (where we do the bootstrapping)
<solid_black>so you want to have an OCI image that is basically alpine:latest, with the abuild package replaced with our patched version?
<anatoly>Could be this way as well, with a pre-built cross-compiler. So anyone can quickly jump into system building and package building.
<anatoly>Then all of that can be moved to CI as well
<solid_black>make sure to *not* include the aports/main/gcc/src & aports/main/gcc/pkg directories in the image
<solid_black>also we'll want to change the target name (and rebuild everything again) once we pick a name
<anatoly>Well, it won't be an issue to repeat the steps, because all of that will be "scripted"
<anatoly>So my first step is to be able to build abuild locally (within containers), then produce a repo so it will be installable at the next stage. Or install the package from the file, but that seems like a "hacky" way
<solid_black>have you looked into how upstream Alpine have their CI set up?
<anatoly>Another question: so far you have patched a lot of alpine package build scripts. Have you got an idea how to do it better?
<solid_black>some of the patches (the non-Hurd/glibc-specific ones) to the build scripts might be upstreamable, if upstream alpine would take them
<solid_black>we could also try to upstream patches to the other projects, but I wouldn't have much hope for that
<anatoly>I understand that debian keeps similar patches for package sources for ages for that reason
<solid_black>yes, I imagine Samuel / the Debian people have already upstreamed the patches that the upstreams would take
<anatoly>But it seems that patching the "build" code of the whole repository is not the way
<solid_black>some of it is just making things cross-compile that upstream Alpine doesn't cross-compile
<youpi>solid_black: concerning hurd patches, people have not always done the work
<damo22>stupid console not flushing to the screen made me confused about what was broken
<solid_black>i.e. splitting makedepends into makedepends_host vs makedepends_build
<anatoly>solid_black: do you think it's possible to come up with some sort of generalised solution that would be good for alpine?
<solid_black>we could ask them to upstream the cross-compilation patches, they mostly make sense and are not Hurd- or glibc-specific
<solid_black>but overall, no, I don't think we can do much, unless they're ok with having hurd/glibc support upstream
<solid_black>why do you think that maintaining a set of patches (as in, Git commits, not patch files) on top of their tree is problematic?
<anatoly>solid_black: regarding Alpine's CI: so far what I've seen in their gitlab-related files is about testing packages for changes in MRs. The stuff for building packages is in a separate project and I saw only a little piece of code from there, so I don't have a good picture of how they do it, but it's on the list.
<solid_black>why do you think the packaging level is the wrong level to do it?
<anatoly>How many build scripts do you think need patching?
<solid_black>the most complex/tricky patches were in the binutils/gcc/glibc bootstrap
<solid_black>for instance, I had to add cross-directory poisoning to binutils (based on a patch from the buildroot project), otherwise some package much later would fail to build due to what is apparently a libtool bug
<solid_black>most patches to individual APKBUILDs are just tweaking dependencies -- splitting makedepends_host / makedepends_build, adjusting dependencies for hurd vs linux & musl vs glibc
<solid_black>are you saying it would be unmaintainable to keep merging / rebasing these changes?
<anatoly>Technically nothing is impossible, it's just some code. Time and man-power are the limiting factors. And it doesn't help if the process of keeping another distro up to date is cumbersome and boring. Think about those new developers whom you (as we all) want to attract to the project
<solid_black>unless they change their APKBUILDs every day, it should mostly be a matter of doing a 'git rebase' every now and then
<anatoly>I love rebasing, rebasing is my workflow :-) All I'm trying to understand is your vision of the project as a team effort.
<solid_black>well, I don't particularly have a vision, I hoped we'd figure it out as we go; but my current idea is we'd have the repo where we'd contribute changes -- such as patching various packages to make them work better on the Hurd, or updating the gnumach/hurd packages regularly (rolling-release-like for these two; my understanding is Samuel does something like that in Debian), or applying glibc patches, or sometimes merging / rebasing on to
<solid_black>sometimes you just want to experiment, without it having to end up in the repo / packages; for example, you make a change to libdiskfs and want to test it out
<solid_black>you'd apply the patch locally, '(cd main/hurd && abuild)', install the new package, boot, test it out
<solid_black>then eventually when the change gets committed, the distro will pick it up the normal way
<solid_black>again, maybe we'd test out a patch for some software, and then once we're sure it works nicely we'd suggest it upstream
<solid_black>gnumach loads the multiboot modules, but doesn't start them, despite task-prompt-resume; ring any bells, anyone?
<solid_black>for one thing, I misspelled it, it's prompt-task-resume
<solid_black>also, of course, my build of gnumach contains no symbols, argh
<anatoly>You started the project and called for interested parties, so you should have a vision :-) Saying that debian development processes are complicated for someone inexperienced, and then repeating the same towards other people, is probably not the reason why you're spending time on it.
<anatoly>Please don't take it as an offence or personally.
<anatoly>It's the beginning and it's important to have some POC, etc. But a project also involves people with various abilities to participate, so it should be clear for them.
<anatoly>And while I do bla-bla, nothing is forking itself :-D
<damo22>i kind of like the way debian handles patches that are not upstreamed
<damo22>its stored in debian/patches/ as a series that applies cleanly to a base tarball, but the tarball can be exactly a git hash
<damo22>ideally that series of patches is a noop
<damo22>but where it is difficult to upstream a patch, you can maintain what is needed to make it compile
<damo22>if you try to do the same thing with git only, you end up with a series of branches that need to be rebased every time you update the base
<anatoly>It's not possible to avoid rebasing, no matter its form; that's clear and understandable to me
<anatoly>Mostly APKBUILDs are changed in two places: a new version of the package or a build, and a hash
<anatoly>I would not expect conflicts in this case. It should be part of CI as a separate step
<anatoly>I wonder how alpine solves build interdependencies when upgrading packages. solid_black, is that a valid question?
<etno>Rebasing may not be the only possibility: a slight variation could consist in duplicating the patches on top of a newer upstream. Almost the same, but it keeps history, and combined with a good branch naming convention, it would be perfect to me.
<etno>"duplicate the patches" -> reapply commits (to be clear)
<anatoly>so basically rebase our branch but keep the previous HEAD and tag it?
<solid_black>I wouldn't call it a vision, and I've said all this already, but: I was thinking that this would be both a distro that people (who are not necessarily Hurd developers) could just install, the third Hurd distro after Debian and Guix,
<solid_black>and a playground for us, where we can experiment with Hurd-related things, whether changes to the Hurd/Mach/MIG/glibc, or changes to other projects to make them work better on the Hurd
<solid_black>again, not all of this has to end up in aports master and in the apk repos; some of this you're just going to want to play with locally
<solid_black>the important thing is it's supposed to be really easy to try out a change and rebuild a package (or the whole system) with it
<anatoly>so this particular step is not easy within debian?
<solid_black>I don't want to shit on Debian, but from what I've seen, everything is super overcomplicated with Debian :(
<anatoly>If you can't explain yourself in a constructive way then people will tend to think you're shitting on it
<solid_black>and that's only the package-building part; there is also a social process (that I'm entirely unfamiliar with) to get the changes into Debian after you've made and verified them locally
<solid_black>and there must be a good reason for that, it makes sense that Debian only wants trusted developers to upload packages
<anatoly>Have you tried to familiarise yourself with them?
<youpi>of course I'm biased since I've been hacking on debian for two decades, but I don't see what is complex
<youpi>apt source mypackage ; apt build-dep mypackage ; cd mypackage-* ; dpkg-buildpackage
<youpi>and reportbug to send a patch (or submit an MR on salsa)
<solid_black>I've been trying to get some code built as deb packages at $dayjob, so I roughly know what I'm talking about
<youpi>creating a new deb package is however quite involved, yes
<youpi>because debian has standards in terms of copyright & technical policies
<youpi>but for contributing code, I really don't see it
<gnu_srs1>Don't try to get changes to Hurd into Debian. It is not a supported architecture, so patches will be ignored/played down :(
<youpi>gnu_srs1: don't turn individual cases into a generality
<youpi>the sudo issue got fixed by itself without having to do anything about it
<solid_black>in any case, having an alternative that is supposedly easier to hack on can only be a good thing, no?
<youpi>60% of packages do build, and another 20% are just waiting for rustc
<gnu_srs1>Debian is a no-go. And soon Devuan is too :(
<youpi>I agree with the "how to get newer versions in" part
<anatoly>solid_black: I can't say about the social process, but you could fork debian :-)
<anatoly>By doing work and spending time, I guess
<solid_black>(not that I like alpine btw, I much prefer Debian in fact)
<youpi>(I know that you actually do, but really it's not a good way to spend time)
<gnu_srs1>Problem is you need all the infrastructure, and that is not trivial.
<youpi>yes, that's why in practice you usually don't want to
<youpi>and in practice we do have the unreleased part of the debian-ports archive, which allows us to do some partial forking
<youpi>while keeping all the rest of the debian infrastructure available
<gnu_srs1>youpi: OK, can I have an unreleased archive to continue reverting merged /usr, thanks :)
<solid_black>gnu_srs1: if it's any consolation, on the new distro we're going to have split /usr, and we want to ensure that you can boot without /usr mounted
<solid_black>I do personally think the /usr merge is a good idea, but upstream Alpine doesn't have it, so we're not going to either
<solid_black>damo22: yes, Alpine / abuild handles patches on top of upstream tarballs the same way (and so does rpmbuild, and I imagine other package build systems)
<solid_black>anatoly: re rebuilding dependencies when upgrading: I don't think abuild can do that automatically (unlike Chimera's cbuild), so we're going to have to figure that out in our wrapper script somehow
<solid_black>this should not be an issue if you always update a single package per PR, and the rebuilt package is immediately available to the next CI run
<solid_black>but I imagine we might want to update several things in a single PR/commit, so we'd better support that too
<anatoly>^^^ so this is how I thought they possibly do it, one by one
<solid_black>they surely do have lots of MRs / commits which just bump pkgrel, with the commit message indicating that this is enough to get the package rebuilt against the (previously) updated dependencies
<anatoly>I see abump has a -R argument, which is "Run abuild with -R for recursive building"
<anatoly>Is it funny that alpine has build-time patches for their abuild utility?
:-)
<anatoly>From what I can see, integrating a major version of the package manager into a distribution is not trivial
<anatoly>Oh, damn, your change for abuild is a patch, so a separate fork is useless :-) I just need to rebuild it from our repo and install it, instead of manually patching as in the README.hurd
<gnu_srs1>solid_black: Nice that Alpine doesn't have merged /usr. Maybe I'll convert to Alpine Linux on my boxes in due time (giving up on Devuan).
<gnu_srs1>I've already given up on Debian Linux for now :(
<anatoly>solid_black: haha, now you can't build abuild without specifying $CKERNEL :-)
<anatoly>i guess it needs to derive CKERNEL if it's not executed as part of the bootstrapping process
<anatoly>falling back to linux won't work when rebuilding stuff on hurd, I'd guess
<solid_black>we'll want to ensure that all of our builds of abuild have the functions.sh patch applied
<solid_black>does --add-gnu-debuglink record the basename or the full path in the binary?
<solid_black>it must be the former, despite what the docs indicate
<anatoly>As expected, it builds and installs the patched abuild; need to reorganise the Containerfile before I call this step done
<solid_black>hm, so /servers/exec (& I imagine others, like /servers/startup, /servers/socket/1 etc) should be there at boot time; ext2fs cannot create them itself since it's starting up read-only
<solid_black>issue is, we can't run scripts from Alpine Linux, so we have to make sure there is enough of the Hurd to boot in bare packages installable with --no-scripts
<solid_black>also things get built with PT_INTERP = /lib/ld.so for some reason, while we want /lib/ld.so.1
<solid_black>and I see that's also the case on Debian, but there's the ld.so -> ld.so.1 symlink that makes it work
<solid_black>sneek: later tell youpi: do you happen to know whether i386-gnu executables having ld.so and not ld.so.1 as PT_INTERP is intended?
<sneek>youpi, solid_black says: do you happen to know whether i386-gnu executables having ld.so and not ld.so.1 as PT_INTERP is intended?
<youpi>I haven't found the time to sort this out yet
<youpi>I'd tend to think that we'd rather want ld.so.1
<solid_black>from a quick look, i386-linux-gnu uses a versioned soname
<solid_black>well, no binary compatibility is implied in my case, so I can set it to whatever, actually
<solid_black>also, executables built as part of glibc do have /lib/ld.so.1, because glibc sets its interpreter explicitly
<solid_black>so there should be nothing that would break if we change it upstream in gcc
<youpi>solid_black: the glibc binaries are a different thing; it's like the GLIBC_2_38 symbol which would be required by glibc binaries. That's fine for them since they're supposed to be installed all together
<janneke>ACTION just built a fully functional hurd vm (or so it seems for now) from guix core-updates, that's with glibc-2.38
<gnu_srs1>Hi, bpf seems to be expecting a translator to run.
<gnu_srs1>However, dhcpcd needs support for BPF. Can parts of eth-filter/{filter.c,impl}/libbpf be used,
<gnu_srs1>or do we need to use the bpf translator??
<youpi>I don't understand what you are saying
<youpi>what do you mean by "dhcpcd needs support for bpf" ?
<youpi>(our drivers already support bpf, see libmachdevdde's call to net_set_filter)
<gnu_srs1>OK, with that attitude you can port dhcpcd to Hurd yourself. Good luck :)
<youpi>gnu_srs1: what attitude? If I don't understand what you say, I can only say so, not more
<youpi>I have not looked at the dhcpcd porting, so I have no idea what you are talking about
<youpi>you shouldn't be surprised that I ask questions
<youpi>more precisely: what makes you think that dhcpcd needs support for bpf: does it call bpf-related functions, does it make calls to inject a bpf program? Put another way: what *actual* error message are you getting?