IRC channel logs

2024-02-18.log


<rickmasters>Meanwhile, I ran into a checksum error this morning building libtool-2.2.4_0.tar.bz2.
<rickmasters>Unfortunately, I had to run 15 more builds before I could reproduce it.
<Googulator>rickmasters: "AUTOMAKE=automake-1.10 ACLOCAL=aclocal-1.10 AUTOM4TE=autom4te-2.61 AUTOCONF=autoconf-2.61 AUTOHEADER=autoheader-2.61 AUTORECONF=autoreconf-2.61 ./bootstrap"
<Googulator>sounds like that's the same issue I've seen before
<Googulator>autoreconf must *always* be called with "-f"
<Googulator>otherwise it will do weird non-reproducible things
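A minimal sketch of the forced regeneration being discussed (assuming a standard autotools tree; not taken from the log):

    autoreconf -fi    # -f/--force regenerates configure and Makefile.in even if they look up to date
    # without -f, autoreconf may decide nothing needs regenerating, so a patched
    # configure.ac never makes it into the shipped configure script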
<Googulator>btw, that particular curl failure happened in bwrap mode, but I do remember curl failing on bare metal too, dropping to the trap shell
<Googulator>as for the savannah tarballs issue - IMO we should download a set of tarballs once, and then rehost those on archive.org
<Googulator>fossy: what do you think about that plan?
<Googulator>this way, we can use xz tarballs, which are notoriously unreliable to fetch from Savannah itself
<rickmasters>Googulator: Can you elaborate on the autoreconf problem? The line you quoted is running bootstrap, which runs autoreconf --force
<Googulator>hmm
<Googulator>that was https://github.com/fosslinux/live-bootstrap/issues/365
<Googulator>where -f was not used
<Googulator>and because we patched configure.ac in that package, that patch would not get applied to the real configure script when autoreconf decided there was no need to regenerate it
<Googulator>but if it's already called with --force, that's different
<Googulator>although there could well be something else in the bootstrap script that needs a --force option to make sure it doesn't just decide "it's up to date, I'm not gonna touch it"
<rickmasters>Got it. It seems to have --force so I'm suspecting a timestamp / make related issue. Not sure yet.
<rickmasters>This happened on kernel bootstrap and Fiwix uses ext2 which has coarse timestamps. We've seen that with other packages before.
<rickmasters>But it's too early to tell. I'm still trying to reproduce it better.
<rickmasters>With regards to the bash trap, I don't seem to be getting a prompt. I was testing bwrap, chroot, qemu/linux, qemu/kernel bootstrap.
<rickmast_>They all had a curl fail at the same time, no prompt though.
<pabs3>Exaga: btw, there is libre boot firmware for the RPi devices in progress https://github.com/librerpi/
<Exaga>thanks for the heads up pabs3
<Exaga>i will certainly look into that
<pabs3>I think their channel is on OFTC, but I forget the name
<stikonas>mid-kid: thanks, I've bookmarked it now. Nice work!
<fossy>mid-kid: hm, what if you built up a no-multilib system and then used that to build a multilib system? (i have very little knowledge of gentoo multilib, so i might be spouting stupidity :D)
<fossy>either way, this is very impressive!!
<fossy>rickmasters: i will give a go at the bare metal issue now :)
<fossy>Googulator: regarding the archive.org idea, yes i am OK with that, on the condition that the archive.org tarballs consistently produce the exact same directory structure as upstream
<fossy>Googulator: I've gone through the kernel config and made a few notes on the PR. I don't feel particularly strongly about any of them, and the removals are just to save space, but let me know what you think for each
<fossy>rickmasters: unfortunately that commit didn't fix it
<stikonas>argh, another stupid error on my openjdk gentoo overlay that I'm now retesting...
<stikonas>/var/tmp/portage/dev-java/icedtea-7.2.6.9/work/icedtea-2.6.9/bootstrap/jdk1.6.0/bin/java -XX:-PrintVMOptions -XX:+UnlockDiagnosticVMOptions -XX:-LogVMOutput -Xmx512m -Xms512m -XX:PermSize=32m -XX:MaxPermSize=160m -jar /var/tmp/portage/dev-java/icedtea-7.2.6.9/work/icedtea-2.6.9/openjdk.build-boot/btjars/generatecurrencydata.jar -o /var/tmp/portage/dev-java/icedtea-7.2.6.9/work/icedtea-2.6.9/openjdk.build-boot/lib/currency.data.temp < ./openjdk/jdk/src/share/classes/java/util/CurrencyData.properties
<stikonas>Error: time is more than 10 years from present: 1388527200000
<stikonas>why would you introduce such an error...
<sam_>lol
<sam_>building java is so.. challenging
<sam_>mostly because of stupid stuff like this, not even anything reasonable
<sam_>did you see the recent salt thing btw?
<stikonas>no, I don't think I've seen it...
<sam_>let me try find it
<sam_> https://social.treehouse.systems/@mgorny/111790131342376030
<sam_>it stopped working *outside* of tests, too
<stikonas>anyway, that 10 year thing is patchable of course
<stikonas>but why???
<stikonas>hmm, I've never even heard of salt before
<stikonas>I guess it's similar to ansible?
<sam_>yeah
<sam_>same deal
<oriansj>well in September 2020, VMware acquired SaltStack; so level of surprise == 0;
<oriansj>I've used it in a couple of environments, and once Nix came into existence, that whole category of tools lost its purpose.
<Googulator>fossy: nice catch for BIGSMP - I was wondering why I can't set the max core count higher than 8, this is probably it
<Googulator>as for the others: "Pentium Pro" is the trade name of 686
<Googulator>initramfs compression: the xz, lzo, etc. libraries still get included in the kernel for other purposes, so the space saving is minimal; on the flip side, keeping support here means we can in the future easily save RAM in Fiwix by decreasing the kexec buffer size and using higher compression for the initramfs
<Googulator>CONFIG_DRM_I915 (and other DRM drivers) isn't just for X; DRM drivers are the recommended way of getting a high-resolution framebuffer on modern kernels, the plain-FB drivers are deprecated-ish and can even cause problems. In fact, I plan on switching to nouveau/radeon/amdgpu and disabling their FB counterparts, because nvidiafb turned out to be the cause of the green screen freeze bug on newer GeForce cards. (And yes, this functionality works without any firmware loaded; firmware is only required for 3D acceleration.)
<Googulator>This will of course be a followup PR.
<Googulator>KVM, unfortunately, doesn't compile with our current toolchain; I tried to patch it out, with no luck. It causes "undefined reference to `__compiletime_assert___COUNTER__'" errors at the final link, and unlike drm_edid, this doesn't seem to be patchable just by removing BUILD_BUG_ONs.
<Googulator>As for mouse support, I don't think we need the savings from that right now, if it's even significant. I'd leave that for a future size reduction opportunity, if ever needed, especially since we already had mice enabled in 4.9.10.
<Googulator>fossy: sent the followup PR: https://github.com/fosslinux/live-bootstrap/pull/444
<Googulator>archive.org mirror of our "volatile" sources: https://archive.org/download/live-bootstrap-sources
<Googulator>these are the files we currently pull from Savannah or GitHub
<Googulator>with xz versions included in case of Savannah
<stikonas>and what do we need to do as we switch to different versions?
<stikonas>i.e. some package update
<stikonas>are you able to add new files?
<Googulator>Yes, it's editable
<Googulator>Download once from the forge site, checksum, upload to archive.org, and link it from there in sources - checksums are then guaranteed not to change, unlike when downloading directly from GH or Savannah
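A rough sketch of that workflow (the Savannah URL and tarball name here are hypothetical; the archive.org item is the one linked above):

    curl -LO https://download.savannah.gnu.org/releases/example/example-1.0.tar.xz   # fetch once from the forge
    sha256sum example-1.0.tar.xz                                                     # record the checksum in sources
    # upload the file to the archive.org item, then have sources point at
    # https://archive.org/download/live-bootstrap-sources/example-1.0.tar.xz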
<stikonas>well, let's see what fossy says
<stikonas>I guess it's fine
<stikonas>but we need consensus for such change
<matrix_bridge><cosinusoidally> Googulator: I find archive.org performance pretty variable. Some downloads were ok, but others were < 500KB/s. Maybe it depends on where you are in the world (I'm in the UK)
<matrix_bridge><cosinusoidally> Github should also be able to host static binaries, but I'm not sure if there are limitations in terms of numbers. Eg on https://github.com/cosinusoidally/tcc_bootstrap_alt/releases/tag/0.2 I uploaded tcc_bootstrap_alt-0.2.tar.gz to make sure the hash was stable.
<Googulator>cosinusoidally: Savannah is far slower than that, if it even works
<Googulator>GitHub is somewhat controversial as a host, because it's owned by MS
<stikonas>Googulator: strange, I've got some build error on the latest commit
<stikonas>Googulator: https://paste.debian.net/1307738/
<stikonas>this was with ./rootfs.py --build-kernels --bwrap
<Googulator>stikonas: you will need to update submodules
<stikonas>oh right...
<stikonas>this often catches me out :(
<Googulator>new version of unxz in stage0-posix/mescc-tools-extra
<stikonas>git should just do it automatically
<Googulator>it does, but not recursively
<Googulator>submodules of submodules are left out
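For reference, the standard git invocation that also picks up nested submodules (plain git, nothing live-bootstrap specific):

    git submodule update --init --recursive
    # or, when cloning fresh: git clone --recurse-submodules <repo-url>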
<stikonas>anyway, thanks
<stikonas>rerunning now...
<fossy>stikonas, Googulator: i said earlier regarding archive.org
<fossy>regarding the archive.org idea, yes i am OK with that, on the condition that the archive.org tarballs consistently produce the exact same directory structure as upstream
<stikonas>well, archive structure is a one-off copy of upstream
<Googulator>more precisely, the archive.org tarballs are binary identical to what upstream was offering for download at some specific time
<Googulator>if upstream chooses to alter how it makes tarballs, the archive.org ones don't change unless we explicitly update them
<fossy>well, yes, however if for example a new file or something was added to the git-generated tarballs, say with some metadata (would be very silly, but nevertheless), then i'd be saying we should update the archive.org tarballs
<fossy>it just adds an extra layer of ability for verification
<Googulator>IMO that should be a new set of uploaded files on archive.org, with new URLs
<Googulator>and an explicit commit to LB to point to those URLs
<fossy>sorry yes, that's what i meant
<fossy>i think we are on the same page here
<Googulator>if we overwrite the originals on archive.org with changed ones, breaking checksums, then we're only doing marginally better than GH/Savannah
<Googulator>(marginally better, because at least it's a consistent checksum fail, not a random one with some HTTP 5xx errors and random timeouts interspersed)
<fossy>agreed
<Googulator>but it's better to go all out and actually keep the original files at the original URLs, with new versions getting new URLs
<Googulator>it also better aligns with archive.org's goals of being, well, an archive