IRC channel logs

2024-01-29.log

<oriansj>fossy: M2libc ;-p
<matrix_bridge><Christoph> Will the bare metal features be tested going forward?
<lrvick>Live-bootstrap (stagex stage1) now published: docker run -it stagex/stage1
<lrvick>Working on verifying stage2-3
<muurkha>congratulations!
<[exa]><3
<lrvick>stage2-3 now published as well. If you just want a stage0/live-bootstrap bootstrapped, containerized, and reproducible x86_64 toolchain to build your own project with, then stage3 is probably what you want.
<lrvick>Will add arm64 eventually
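
A minimal sketch of what using the published stage3 toolchain image to build a project might look like. Only the stagex/stage1 invocation above comes from the log; the stage3 image name, the presence of a shell and cc on PATH, and the mount layout are assumptions.

    # Hypothetical usage, by analogy with the stage1 command above; the image
    # name, shell availability, and compiler driver are assumptions.
    docker run -it --rm -v "$PWD:/src" -w /src stagex/stage3 \
        sh -c 'cc -static -o hello hello.c'
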
<oriansj>lrvick: thank you
<muurkha>lrvick: fantastic!
<lrvick>Now to wait 3 days for rust to compile twice
<muurkha>ugh
<lrvick>It is wild that building rust takes like 10x longer than the entire bootstrapping process, including several builds of gcc along the way.
<lrvick>and it will never get shorter, since you can only build rust with the previous version of rust.
<Foxboron>gcc-rust is hopefully making that situation better in the future
<lrvick>In general there always ends up being some major lag in the GCC implementations, as with go
<lrvick>which means you end up not being able to apply patches to your dependencies, which rely on more recent language features
<lrvick>I don't foresee being able to abandon rustc any time soon, as much as I hate maintaining builds for it
<lrvick>Though if gcc-rust can let me skip to building more recent versions of mrustc, that would be amazing
<lrvick>err, rustc
<muurkha>also it's in itself a helpful check on Karger–Thompson attacks on older versions of Rust
<Googulator>lrvick: do you have the Dockerfiles uploaded for each stage too?
<Googulator>Christoph: I'm doing occasional tests on bare metal, but I don't see how we could do a proper CI for the bare metal path
<matrix_bridge><Lance R. Vick> Googulator: Yep, they are all here. You can run "make bootstrap" which will build with all the needed flags to produce the exact same digests as the published images locally. https://git.distrust.co/public/stagex/src/branch/main/src/bootstrap
<matrix_bridge><Lance R. Vick> The repo covers producing deterministic OCI tgz images of stage0 all the way to x86_64 rust 1.74
<matrix_bridge><Lance R. Vick> and golang. Shortly adding zig and nodejs
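
A hedged sketch of the local reproduction workflow described above: clone the repo, run "make bootstrap", and compare the digests of the resulting OCI tarballs against the published images. Only the repo URL and "make bootstrap" come from the log; the output layout and digest step are assumptions.

    # Illustrative; the out/ layout is a guess, not the documented one.
    git clone https://git.distrust.co/public/stagex
    cd stagex
    make bootstrap
    sha256sum out/*.tar*   # compare against the published image digests
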
<matrix_bridge><Christoph> Googulator: I see. Hopefully, the manual testing is enough to keep the bare metal bootstrap working.
<matrix_bridge><Lance R. Vick> re bare-metal CI: I have gotten away with running qemu inside containers for doing all sorts of "baremetal" sim testing. Is that not an option in this case?
<matrix_bridge><Lance R. Vick> need to be able to forward a serial terminal though, to easily automate a headless VM and capture output
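
A rough sketch of the kind of headless QEMU run being described: the guest's serial console goes to stdio so a harness inside the container can drive the VM and capture its output. The disk image name and memory size are illustrative.

    # Illustrative: -nographic routes the guest serial console to stdio,
    # so the boot transcript can be captured and scripted against.
    qemu-system-x86_64 \
        -m 4G \
        -nographic \
        -drive file=bootstrap.img,format=raw \
        | tee boot.log
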
<Foxboron>muurkha: what is "Karger-Thompson attack"?
<muurkha>Foxboron: https://en.wikipedia.org/wiki/Backdoor_(computing)#Compiler_backdoors sorry
<Foxboron>muurkha: Ah, so just another word for "Trusting Trust"
<muurkha>yes
<Googulator>lrvick: that's no different from qemu mode, and won't catch issues specific to bare metal
<Googulator>Most bare metal issues come from using a graphical console (vs a serial one), a commercial BIOS with DOS/9x backwards compatibility hacks (vs SeaBIOS without such hacks), a real IDE/SATA/SCSI/NVMe disk subsystem that reports some fake CHS geometry (vs. virtio that's just a pure block device), a real network adapter spanning the whole OSI stack (vs.
<Googulator>usermode-emulated networking with "nothing" below the IP layer), a real USB topology where not everything is directly connected to the root hub, or memory map differences.
<Googulator>Some of these may be testable by using different QEMU options (e.g. not disabling the graphical console, or using explicit IDE or SCSI disk emulation), but most of them won't be caught when emulating via qemu.
<Googulator>And of course, no way to capture textual output from an emulated graphical terminal.
<muurkha>That's a fantastic list, thanks, Googulator!
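
For a couple of items on that list, a hedged example of QEMU flags that nudge the emulated machine closer to the bare-metal case: a graphical VGA console instead of serial-only, an explicit IDE disk instead of virtio, and a fully emulated e1000 NIC on top of user-mode networking. The flags are standard QEMU options, but whether they are the right ones for this bootstrap is an assumption.

    # Illustrative only: exercise a VGA console, an IDE disk path, and a
    # full emulated NIC rather than virtio plus bare usermode networking.
    qemu-system-x86_64 \
        -m 4G \
        -vga std \
        -drive file=bootstrap.img,format=raw,if=ide \
        -netdev user,id=n0 -device e1000,netdev=n0
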
<Googulator>there's actually one more key difference: qemu doesn't emulate real silicon, wires and electromagnetic fields between the various functional units, which are all present on bare metal
<Googulator>which means, if you clobber some MMIO registers in qemu, it will likely render the relevant functional unit unusable until you reinitialize it properly
<Googulator>but not affect the rest of the system
<Googulator>do it on bare metal, and you lock up, instantly reboot, get weird malfunctions in seemingly unrelated places, or even damage hardware
<Googulator>because clobbering MMIO often causes components to malfunction in _analog_ ways
<muurkha>that's a good point
<muurkha>you may end up with something crowbarring your power supply or something, you mean?
<Googulator>Classic example is writing garbage to the PCIE root complex registers
<Googulator>in qemu, you lose PCIE until you reprogram the root complex correctly
<muurkha>I don't have a good grasp on how current hardware tends to misbehave in circumstances like these
<Googulator>on bare metal, the misconfigured PCIE block sends a 3 GHz sawtooth wave through pins that aren't supposed to have high frequencies present, inducing eddy currents in nearby PCB traces that interfere with the CPU's attempts to communicate with DRAM
<Googulator>the only emulator that gets even remotely close to emulating such behavior is MAME
<Googulator>which is way too slow to bootstrap on, if it even supports x86-32
<muurkha>that's an awesome failure mode :)
<Googulator>even simpler - mux the same UART to 2 sets of pins on a real Raspberry Pi, vs one emulated in qemu
<Googulator>in qemu, IIRC it just works
<Googulator>on a real Pi, best case scenario is you get a weak signal on both pins that works with short cable runs, but not longer ones
<muurkha>hey, at least you aren't burning out your GPIOs with shoot-through current
<Googulator>and of course, qemu's emulated Intel SATA controllers aren't made up of transistors that wear out with use
<Googulator>unlike real ones in certain chipsets
<muurkha>ooh! really?
<muurkha>Flash, or are they just driving regular transistors way too hard?
<Googulator>No, it was an actual silicon bug that forced a recall of several chipsets back in the day
<Googulator>lrvick: phase2 is the pivot from 32-bit to 64-bit, right?
<Googulator>& then phase3 is where native 64-bit binaries are first executed
<Googulator>I mean stage2 and stage3
<Googulator>musl-1.2.4 uses "--host=${TARGET}" - won't that mean it expects the host system to be able to execute 64-bit binaries at this point, since ${TARGET} is a 64-bit one?
<matrix_bridge><Andrius Štikonas> muurkha: as transistors get smaller, wear is becoming more important
<matrix_bridge><Christoph> Hm, so there are substantial differences. Is there a suitable place to preserve your knowledge? Maybe the Wiki? Would you like to jot down some pointers for future bare metallers?
<matrix_bridge><Lance R. Vick> @irc_libera_googulator:stikonas.eu: If you mean in stage2, yes, but that is the headers and musl used for the produced binaries. It is a bit odd, but musl-cross-make by Rich Felker does the same thing and both work, so long as you still have your i386 musl .so file around for that stage too.
<matrix_bridge><Lance R. Vick> I won't pretend anything there is optimal. I can only attest that it all works, and does in fact get me a full native x86_64 env in the end.
<matrix_bridge><Lance R. Vick> cross compiling breaks my head a bit
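
A hedged sketch of the stage2-style musl configure step being discussed: --host names the 64-bit target and a cross gcc does the compiling, while the tools running the build are still i386 binaries (hence the i386 musl .so has to stay in place). The target triple, prefix, and destination path are illustrative, not the literal stagex recipe.

    # Illustrative; the triple, prefix, and DESTDIR are guesses.
    TARGET=x86_64-linux-musl
    cd musl-1.2.4
    ./configure --host=${TARGET} --prefix=/usr CC=${TARGET}-gcc
    make -j"$(nproc)"
    make install DESTDIR=/sysroot-${TARGET}
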
<matrix_bridge><Lance R. Vick> Foxboron: Ran into my first package with the dumb uname nonsense, which I didn't notice until I reproduced across different systems: Perl. https://dpaste.org/CjKFT
<matrix_bridge><Lance R. Vick> but others have solved this.
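
A small hedged check for the class of problem being described, i.e. a build that embeds the host's uname output so artifacts differ between machines; the Perl install path is a guess.

    # Illustrative: look for the host kernel version string inside the
    # installed Perl configuration (path is a guess, adjust as needed).
    uname -srvm
    grep -r "$(uname -r)" /usr/lib/perl5/ \
        && echo "host uname leaked into the build -> not reproducible"
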
<matrix_bridge> * Lance R. Vick finds references