IRC channel logs

2021-01-11.log


<OriansJ>yt_: you are right I missed some functions from unistd.h; pity i'll just have to duplicate some C functions to comply with the C standard
<yt_>OriansJ: it is what it is :-)
<yt_>OriansJ: and finally (for today) https://github.com/oriansj/M2-Planet/pull/16 which adds support for global arrays (chars only though)
<OriansJ>thank you yt_ for all of your help today
<OriansJ>and got all of the missing test0021 primitives in M2libc (pity on the duplication but oh well)
<OriansJ>deesix: I think once test0021 is updated; that the changes for the tests will be done and give you the chance to continue your tweaks. To make your work easier I just need to know exactly one thing from yt
<OriansJ>yt_: when you run get_machine does it return AArch64 or aarch64?
<deesix>OriansJ, nice. I'm keeping it up-to-date as you all are progressing.
<OriansJ>because my next change in M2libc is to rename x86/ELF-i386* to x86/ELF-x86*
<fossy>stikonas: wtf?
<deesix>OriansJ, has get_machine changed?
<OriansJ>deesix: no just was hoping to harmonize paths too
<stikonas>fossy: yeah...exactly the same error...
<fossy>stikonas: did you delete the 0.9.27 change?
<stikonas>I commented it out with #
<stikonas>just in tcc.kaem file
<OriansJ>so M2-Planet --architecture $arch -f $arch/Linux/unistd.h -f stdlib.c -f $arch/Linux/fcntl.h -f stdio.c -f foo.c -o foo.M1 --debug && blood-elf --64 -f foo.M1 -o foo-footer.M1 && M1 --architecture $arch --little-endian -f $arch/$arch_defs.M1 -f $arch/libc-full.M1 -f foo.M1 -f foo-footer.M1 -o foo.hex2 && hex2 --architecture $arch --little-endian --base-address 0x10000 -f $arch/ELF-$arch-debug.hex2 -f foo.hex2 -o foo
<deesix>OriansJ, tests are using aarch64 since the initial port, that was the value I got back then. And today the draft for parallel testing run fine on yt_'s machine.
<fossy>um
<OriansJ>only the flagging of --64 in blood-elf and the endianness and base address could be different
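For readability, the same pipeline split into its four stages (just a restatement of the command above; $arch and the --64/endianness/base-address flags vary per architecture, as noted):

    # M2-Planet: compile the C sources to M1 macro assembly
    M2-Planet --architecture $arch -f $arch/Linux/unistd.h -f stdlib.c \
        -f $arch/Linux/fcntl.h -f stdio.c -f foo.c -o foo.M1 --debug
    # blood-elf: generate the debug footer (--64 only on 64-bit targets)
    blood-elf --64 -f foo.M1 -o foo-footer.M1
    # M1: assemble the M1 files to hex2
    M1 --architecture $arch --little-endian -f $arch/$arch_defs.M1 \
        -f $arch/libc-full.M1 -f foo.M1 -f foo-footer.M1 -o foo.hex2
    # hex2: link the hex2 files into an ELF executable
    hex2 --architecture $arch --little-endian --base-address 0x10000 \
        -f $arch/ELF-$arch-debug.hex2 -f foo.hex2 -o foo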
<fossy>stikonas: have you got some kind of debugging environment open?
<fossy>how are you debugging this
<fossy>stikonas, i have found the issue
<stikonas>fossy: well, I'm rebuilding my debugging environment
<fossy>well, 0.9.26 creates a working binary
<fossy>0.9.26 is not buggy
<stikonas>strange...
<fossy>i've just confirmed that
<fossy>i missed doing this:
<stikonas>ok, let me try again
<fossy> https://gitlab.com/janneke/guix/-/blob/master/gnu/packages/commencement.scm#L858
<fossy>so we need sed before tcc 0.9.27
<fossy>so we must build sed using 0.9.26
<deesix>OriansJ, for Knight there're some shared files, but some of them have a suffix. Not the easiest for unification.
<stikonas>ok, let me try again
<deesix>OriansJ, we'll see after landing the parallelism patches.
<fossy>:thumbsup:
<stikonas>last time I ran 0.9.26 was in the live environment
<fossy>yes, same
<fossy>i shall revert 0.9.27 for now
<stikonas>sure...
<stikonas>fossy: so I have sed, gzip and diffutils in a branch
<stikonas>(tar is missing for now)
<stikonas>I guess we need it either before gzip, or maybe before sed (depending on whether tar needs sed)
<fossy>yeah
<fossy>please could you use git-lfs instead of wget?
<stikonas>fossy: oh I probably didn't get your HEAD^ commit yet. Maybe that's why it failed
<stikonas>yeah, we can setup git-lfs
<stikonas>I just haven't tried setting it up yet
<fossy>wait i am so dumb
<stikonas>first of all I need to install it :(
<fossy>well no
<fossy>i just looked over something
<fossy>in commencement.scm the patch is to unconditionally enable static
<fossy>so we can use tcc 0.9.27 if we use static
<fossy>(just specify -static when linking)
<fossy>let me check that and if that's the case rebase and we're all good
<stikonas>ok, so I don't have https://github.com/fosslinux/live-bootstrap/commit/ce24c8cf3cca620ef47aa525a19871ddf075551b yet
<stikonas>in my environment
<stikonas>let me pull...
<fossy>1 second
<stikonas>ok
<fossy>i'll just revert the revert
<fossy>ok, rebase now
<stikonas>ok
<fossy>and add -static to the linking command
<stikonas>ok, rebased (no git-lfs yet)
<fossy>that's fine
<stikonas>and now need to find some tutorial for git-lfs :)
<fossy>it is very easy, https://git-lfs.github.com/
<stikonas>fossy: so where are the files stored? On some GH server?
<fossy>stikonas: they are just included using some dark magic along with the git repo
<deesix>Hmm, I'd keep it simple and dark-magic-less. Beware of vendor lock-in (my 2c)
<fossy>git lfs is not centralized
<fossy>that's what i like about it
<stikonas>I guess you can host your own server?
<fossy>git lfs has nothing to do with "hosting your own server", afaict
<stikonas>anyway, it's just some public tarballs
<fossy>it works the same way as git
<fossy>you commit to a repo, you push the repo somewhere, you pull the repo
<fossy>it's just an extra step to pull and commit
<fossy>as long as the push target supports git lfs
<stikonas>yes, but when you pull repo, how would git know where to download from...
<stikonas>anyway, I'm now converting my branch to lfs
<fossy>git lfs is an extension that integrates with git
<fossy>so on the server and client, it has a communication channel over the normal git system to pull/push git lfs files
<fossy>stikonas: not an amazing source but completely correct and explained much better than i can
<fossy> https://www.quora.com/How-does-Git-LFS-work?share=1
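For reference, the git-lfs workflow being described is roughly the following (a sketch; the *.tar.gz pattern and the tarball name are only examples of what a sources repo might track):

    git lfs install                  # one-time setup; note that this edits git config
    git lfs track "*.tar.gz"         # records the pattern in .gitattributes
    git add .gitattributes
    git add sed-4.0.7.tar.gz         # stored as an LFS pointer; content uploaded separately
    git commit -m "add sed tarball via LFS"
    git push                         # only works if the remote supports LFS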
<stikonas>ok, I'll take a look once I'm done...
<OriansJ>deesix: you don't have to parallelize all architectures at the same time
<OriansJ>getting just AMD64 and AArch64 would be a great win
<deesix>OriansJ, you mean the unification? Yeah, I guess so. (parallel is already fine for all of them, I think).
<OriansJ>deesix: well when you feel the parallel work is good enough for merge let me know and I'll give it a nice round of testing on a big fat server (say 64 cores) check for collisions and merge to master if no problems
<stikonas>fossy: hmm, now tcc crashed
<stikonas> +> tcc -o /after/bin/sed alloca.o getopt1.o getopt.o regex.o sed.o utils.o
<stikonas>[ 1661.524405] tcc[694]: segfault at 4 ip 0000000008064f8e sp 00000000ffd61d7c error 4 in tcc[8048000+3b000]
<OriansJ>if a collision occurs, I'll let you know; we will fix it and try again.
<fossy>stikonas: yes, ik, you need to use -static
<stikonas>oh, I thought I had it...
<stikonas>hmm
<fossy>static linking is the only type of linking that works with this tcc, but we cannot patch tcc until we have sed, so we use -static for sed then we can recompile tcc
<stikonas>tcc 0.9.27 was built with +> tcc -v -static -o tcc -D TCC_TARGET_I386=1 -D CONFIG_TCCDIR="/after/lib/tcc" -D CONFIG_TCC_CRTPREFIX="/after/lib" -D CONFIG_TCC_ELFINTERP="/mes/loader" -D CONFIG_TCC_LIBPATHS="/after/lib:/after/lib/tcc" -
<stikonas>oh
<stikonas>you mean static for sed
<stikonas>ok, got it
<fossy>yeah
<OriansJ>deesix: as the unification is more about making future porting work and test updates less tedious
<OriansJ>but the parallel work is about really speeding up development cycles
<fossy>tcc 0.9.26 is already patched by janneke's branch for static linking, but tcc 0.9.27 isn't
<fossy>i think that the Mes C Library is not made for dynamic linking
<stikonas>ok, that makes everything clearer
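Concretely, the fix is just adding -static to the sed link step that segfaulted above (a sketch based on the command in the log):

    # link sed statically with tcc 0.9.27; the default dynamic link is what crashed
    tcc -static -o /after/bin/sed alloca.o getopt1.o getopt.o regex.o sed.o utils.o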
<fossy>^-^
<stikonas>ok, running a new test
<stikonas>in the meantime I need to figure out how to push to github
<stikonas>with that LFS stuff...
<fossy>just git add it
<stikonas>I'm already trying to push it
<fossy>oh and git add .gitattributes
<stikonas>batch response: @stikonas can not upload new objects to public fork stikonas/live-bootstrap
<fossy>so if you just go
<stikonas>yes, I added attributes
<fossy>hm.
<OriansJ>git-lfs is crap
<fossy>oh
<fossy>why
<stikonas>can't push :D
<fossy>> On GitHub.com, you can't push LFS assets to a public fork unless the original repo already has LFS objects, or you have push access to the original repo.
<fossy>grr
<deesix>OriansJ, I'm pretty confident the current changes are fine. Just need a final look tomorrow for minor typos and such. Indeed the cycle would be better. I only hoped that the current round of test updates didn't happen this weekend (I'd have done it before the parallelism if I'd known all this). Today was a crazy day for you all amazing people :)
<OriansJ>it allows one to lose state in git history and makes changes to your ~/.gitconfig without asking
<fossy>"makes changes to your ~/.gitconfig without asking" yeah that is quite annoying
<fossy>could you elaborate on the lose-state thing
<fossy>stikonas: i'm thinking, if github doesn't allow for lfs in forks, maybe we just commit the tarballs to the repo
<stikonas>hmm, that will blow up git repo, I don't think that's a good idea...
<OriansJ>fossy: it under certain conditions will truncate your ~/.gitconfig which by the way can result in the disabling of security checks
<stikonas>we'll have a lot of tarballs soon
<fossy>OriansJ: :|
<fossy>i was not aware of that..
<OriansJ>fossy: the large objects in lfs can be garbage collected
<stikonas>well, in principle they are still in mirrors...
<OriansJ>and all that remains in git is checksums with no objects to satisfy a checkout
<fossy>i want to make sure they are permanently existing, linked to the repository
<OriansJ>stikonas: yeah not if the mirrors are on github
<stikonas>fossy: well, you can try to temporarily add me as collaborator
<stikonas>OriansJ: I mean mirrors on ftp.gnu.org
<OriansJ>(they all can lose the same object at the same time)
<fossy>stikonas: i'm not sure lfs is a good idea if it's going to inhibit contributions
<fossy>and because of the issues OriansJ outlined
<fossy>(not just for you but for others)
<fossy>maybe just stick with wget for now until there is a better solution?
<stikonas>yeah, maybe wget it...
<deesix>:)
<fossy>i have an idea regarding how we can ensure distribution of the tarballs, let's go with wget
<fossy>i'll need time to think about distribution tho
<stikonas>maybe I should also checksum them
<fossy>no, we will do that inside the chroot/qemu
<fossy>useless outside
<stikonas>well, sed, tar, gzip are too early
<fossy>along with cp, chmod, i will write a sha256sum thing for M2-Planet at some point
<fossy>because i also want to check binaries
<fossy>for now if you feel strongly checksum them outside, but it's not a long-term solution
<stikonas>maybe let's not bother then
<fossy>eventually all i want rootfs.sh to do is download things, copy things into a directory, and run it
<stikonas>well, you need those chmod, mkdir tools...
<fossy>yeah
<fossy>which i am slowly writing for M2-planet
<OriansJ>fossy: just a thought but git submodules with each submodule being a git repo just holding a single tarball.
<OriansJ>So that updating of the submodules has minimal impact on the repo size
<xentrac>happy Aaron Swartz Day
<stikonas>fossy: ok, sed worked, gzip failed for unrelated reasons, I'll investigate tomorrow...
<stikonas>sed -i 165,174d util_patched.c
<stikonas>sed: illegal option -- i
<OriansJ>xentrac: not yet over here but yeah.
<xentrac>yeah
<stikonas>ok, that old sed doesn't have in place editing
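Without -i the same edit has to go through a temporary file, e.g. (a sketch assuming a shell with output redirection, which kaem lacks, as discussed below; util_patched.tmp is just an illustrative name):

    sed '165,174d' util_patched.c > util_patched.tmp
    cp util_patched.tmp util_patched.c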
<stikonas>I made a smaller PR for now (just sed)
<stikonas>going to bed now
<xentrac>awesome! goodnight!
<fossy>stikonas[m]: thanks!!
<OriansJ>fossy: I looked at the pull request for kaem and I think I have a better solution
<OriansJ>far fewer loops, but it also updates environment vars if possible
<OriansJ>and allocates even less memory while doing it
<OriansJ>and catches an error case too
<OriansJ>hopefully fossy you like it
<OriansJ>as getting rid of old versions of a variable seems like a waste, just like creating duplicate variables. So I made add_envar update the variable if it exists, and otherwise create it and then update it as if it already existed.
<OriansJ>and it is 24 lines shorter and more straightforward
<malina>holy, did the classic find "$unset_var"/usr/include -type f -delete
<malina>4 yrs I had this system. bb system :D well, off to remount ro, and start grinding the recovery back.
<xentrac>bootstrap-synthesizing a CPU design: https://pbs.twimg.com/media/D18oX8TX0AEEiU2.png
<xentrac>from https://mobile.twitter.com/fpga_dave/status/1107648430757871618
<xentrac>it's a RISC-V core running on an ECP5 FPGA synthesizing the FPGA configuration for itself from Verilog source!
<xentrac>wait, maybe that's not what it's synthesizing. but it's running the same Verilog synthesis program that is used to build it, even if on a different design
<xentrac>one that's a couple orders of magnitude smaller
<fossy>OriansJ: oops
<fossy>I already merged it before I saw ur comment
<fossy>Feel free to change it tho
<gforce_de1977>stikonas: fossy: i read about the 'sed' issue tcc 0.9.26 -> 0.9.27 - can you link to the relevant lines in the 0.9.27 sources? "git grep sed" or "git grep SED" gives too much
<gforce_de1977>(good morning to everyone)
<fossy>gforce_de1977: ? the issue is that tcc 0.9.27 should be patched to not support dynamic linking but that could not occur due to a lack of sed.
<fossy> https://git.savannah.gnu.org/cgit/guix.git/tree/gnu/packages/commencement.scm#n858
<gforce_de1977>fossy: understand, but can not find the 'sed' call in live-bootstrap.git, can you help?
<fossy>gforce_de1977: sed is not currently used
<fossy>gforce_de1977: it has been built but not used
<gforce_de1977>is the only invocation something like: sed -i 165,174d util_patched.c
<gforce_de1977>?
<gforce_de1977>is the only invocation something like: 'sed -i 165,174d util_patched.c' ???
<stikonas[m]>gforce_de1977: -i is not supported, need to fix it to not edit in place...
<gforce_de1977>stikonas: can you say what 'sed' call is exactly needed? maybe it's better to extend 'catm' for this job ("e.g. delete line 33,34,35")
<gforce_de1977>stikonas: or even simpler: a small c-program which does exactly this (instead of sucking in a full sed)
<fossy>stikonas[m]: removed need for tar and gzip, please see my submodule change
<fossy>making a custom submodule for bash rn
<fossy>stikonas[m]: also idk how we can redirect the output without adding pipe support to kaem or using bash...
<yt_>OriansJ: get_machine gives me aarch64
<yt_>so seems good to standardise on that
<stikonas>fossy: ok... well no redirect is a problem for old sed...
<stikonas>it can't do in-place editing
<stikonas>although, maybe newer sed can be built too
<stikonas>fossy: so for now I'm avoiding redirect
<fossy>stikonas: how are you avoiding redirect?
<fossy>are you using sed at all
<stikonas>I'll look at newer versions later today
<stikonas>maybe can build something with -i option
<fossy>it's only 4.0 and up that have -i option, i checked
<stikonas>I see...
<fossy>no idea whether we can build them
<stikonas>in any case I think version 2 will build with very minimal changes to kaem
<stikonas>no idea about v4
<fossy>janneke do you remember if you had issues building new versions of sed with tcc
<stikonas>I don't think he tried, guix has version 1 but my tests indicate that version 2 should work... I'll later run more tests
<fossy>cool
<rain1>good morning
<fossy>stikonas: here's the bash submodule FWIW, https://github.com/software-history/bash, i'll finish it tomorrow morning
<fossy>my morning
<fossy>which is in about 10hrs
<fossy>well, that's when i'll be available
<fossy>(expect force-pushes, btw)
<stikonas>ok
<stikonas>do we not need make first?
<stikonas>or are you going to build everything using kaem...
<fossy>stikonas: idk where to put bash in.
<fossy>but remember, wherever make is used autotools is almost always used
<stikonas>yeah, I know
<stikonas>do autotools depend on bash?
<fossy>certainly, the configure scripts generated by autotools need bash
<fossy>no other shell works
<fossy>well not true
<fossy>any POSIX shell works
<stikonas>well, yeah, but that's similar...
<fossy>but kaem does not work by a long shot
<stikonas>yeah, I think we need to kaem script bash build
<fossy>yes
<fossy>we can probably go bash, perl (that will be annoying as fk), automake, autoconf
<fossy>, make, then first glibc+gcc+binutils
<fossy>and any other deps of such
<stikonas>yeah....
<stikonas>well, in (my evening) I'll check newer seds too
<stikonas>maybe I'll be able to get gzip built too
<fossy>if we can't fix the patching problem, then i'll smash out some patch-like thing too
<fossy>👍
<stikonas>diffutils probably works but not sure how much we need them
<stikonas>(diff and cmp)
<fossy>stikonas: I think a configure script needs them
<stikonas>fossy: by the way, sed 2.05 does build, so there is no reason to use older
<stikonas>I'll try even newer versions later, those have shuffled some files
<OriansJ>xentrac: as twitter requires javascript, and so everyone else knows, you can replace twitter.com with nitter.net and get the content without the javascript: https://nitter.net/fpga_dave/status/1107648430757871618
<OriansJ>fossy: if you didn't notice, I spotted your merge and "fixed it" but reasonable call given the perspective.
<OriansJ>xentrac: I think he built a smaller FPGA image because otherwise the build time would probably exceed video limits and people's patience. Unless you think there is a RAM limitation which prevents the building of an FPGA image bigger than the one currently running. To which I say *MORE RAM* please.
<OriansJ>yt_: ok let us standardize on aarch64 in M2libc and I've fully standardized x86 too.
<xentrac>OriansJ: oh, thanks!
<xentrac>yeah, I suspect there's a RAM limitation
<stikonas>fossy: I've solved sed bootstrap problem
<stikonas>instead of starting with sed 1.18 we can just start with sed 4.0.7 (maybe even something newer would work but doesn't matter, 4.0.7 is much newer)
<xentrac>HOORAY
<stikonas>so later today I should be done with tar, gzip and diffutils
<stikonas>and if fossy wants, he can rebuild patched tcc
*xentrac high-fives stikonas
<stikonas>having tar and gz means guile's bootar is completely unnecessary
<xentrac>really? how do you get the sed sources without tar and gz?
<stikonas>unpack in advance...
<stikonas>just like M2-Planet
<stikonas>or tcc...
<stikonas>but we only have to do that for a limited number of packages
<stikonas>basically it goes mes->tcc->tcc 0.9.27->sed->tar->gz and from then we can just keep tar.gz (or .tar for gzip-1.2.4.tar)
<pder>I'd like to try out live-bootstrap. Is there a specific kernel you are using with qemu? in rootfs.sh it is just specified as -kernel ../../kernel
<stikonas>pder: anything should work, I just use my standard self-compiled Gentoo kernel
<stikonas>pder: well, for now only old sed is bootstrapped
<stikonas>in master
<stikonas>pder: just drop your current kernel into the same folder as rootfs.sh
<stikonas>../.. is only because it is called while in subfolder
<stikonas>I guess you can also run it on real hardware without qemu
<stikonas>just need to pass correct initramfs file to your bootloader
<pder>ok, thanks! looking forward to trying it out
<xentrac>I guess unpacked sources are easier to audit than gzipped tar files anyway
<xentrac>but harder to check against upstream tarball checksums
<rain1>there should never be a \0 in source code
<rain1>so you could pack them in a simple format that is (filename\0data\0)*
<rain1>rather than tar or anything compressed
<xentrac>yeah, if it's C source code in UTF-8, or anything similar
<bauen1>rain1: never underestimate users
<bauen1>at least you would have to process the source code to ensure that (e.g. remove any image files you find)
<xentrac>on CP/M (and consequently early versions of MS-DOS) it was common for source code to end with a bunch of ^Zs
<xentrac>(and other text files)
<bauen1>and i would still argue that it would be best if you get an editor as early as possible so you can more easily audit the remaining source code to be compiled
<xentrac>because CP/M file sizes were measured in multiples of 128 bytes, so if you wanted your text file to end somewhere that wasn't a 128-byte boundary, you needed some in-band signaling convention to signal the end of file
<xentrac>not sure if MS-DOS 1.x had the same limitation
<xentrac>it was a reasonable tradeoff to save 7 bits per directory entry at a time when typical disks were 90 KiB
<bauen1>so you could have some sort of stages hex0 -> basic file viewer -> pause and audit next stage -> kaem -> m2 -> better file viewer -> pause and audit next stage -> tcc and on
<xentrac>that seems like a reasonable thing to do
<bauen1>i suppose you could even do some fancy stuff with linux's integrity systems to ensure that only code you've audited (and therefore its derived binaries) can be accessed
<bauen1>since you're currently trusting the kernel anyway
<xentrac>no, eliminating the kernel is definitely an essential part of the plan
<bauen1>xentrac: true, but you could design with this in mind
<xentrac>yes
<bauen1>i'm not sure, if you're e.g. writing a minimal posix-ish kernel for bootstrapping, you could implement some forms of Mandatory Access Control and Integrity Measurements to do the above, it could be used to reduce possible user errors and make some bugs less exploitable ; but that might not be worth the effort and i might be thinking of everything as a nail
<bauen1>because no matter how good you audit the source code you're about to compile, eventually you will let bugs (or even malicious code) slip by
<bauen1>i think my point is that whatever is your root of trust (hardware, the kernel, magic) should enforce that the user has reviewed a bootstrap step before it is allowed to run, and that later stages can't modify earlier stages
<pder>in live-bootstrap, it appears there is a problem with the git submodules. In the mescc-tools-seed directory, there are several submodules such as M2-Planet and mes-m2. These do not appear to be submodules
<xentrac>that's an interesting and somewhat reasonable idea
<bauen1>preventing writes up the bootstrap chain is optional, but would make editing a later bootstrap chain safer
<xentrac>yeah
<bauen1>this is funny, with the (MAC) SELinux policy i've created i already have such a system in place, a process can read everything, but only edit files with a lower integrity level than itself
<xentrac>hmm, that seems bad
<xentrac>if it can read everything then it can get contaminated with low-integrity data
<bauen1>xentrac: very good point, i've had the idea to also implement that, but it turns out that isn't too practical for a general purpose system (unless you have a very well defined system e.g. for bootstrapping)
<bauen1>xentrac: so for now only executing files with a lower integrity is prevented
<bauen1>which isn't enough since plenty of programs (every interpreter) read files and "execute" them without telling the linux kernel their intentions
<xentrac>well, so the minimal way it works is by having everything at a single integrity level
<xentrac>then every process can read and write every file
<xentrac>you can downgrade the process's integrity when it reads data from a lower integrity level, which keeps it from editing data that had its previous integrity level (or anything in between)
<xentrac>David Mazieres did a microkernel like this where every entity (process or file) was tagged with an integrity level and a confidentiality level
<bauen1>xentrac: you still need a way to make an exception for the users editor, so it can change the integrity level of a lower file
<xentrac>you don't need to make an exception for that; that editor process just runs at the lower integrity level
<xentrac>in Mazieres' system, he enforced the classic policy that data could only flow from lower confidentiality levels to higher ones, or from higher integrity levels to lower ones
<bauen1>xentrac: do you have a link ?
<xentrac>no, he told me about it in about 2002, but I don't remember the name of the paper
<xentrac>I'm pretty sure he published at least one paper
<bauen1>i'll have a look around then
<xentrac>for bootstrapping integrity purposes you wouldn't need the whole confidentiality thing
<bauen1>xentrac: true, but at some point you might want to, when you start handling private keys or similar secrets; in any case it probably wouldn't be too hard to add on later
<xentrac>the interesting thing about it from my point of view (probably not novel in Mazieres' work in particular) was separating the two issues, because people tend to conflate them
<xentrac>but, working from first principles, the security requirements are opposites
<xentrac>I'm not sure how he handled the user interface; if you find out let me know
<bauen1>xentrac: i think https://people.csail.mit.edu/nickolai/papers/zeldovich-dstar.pdf might be what you were talking about
<rain1>using djb netstrings <length>:<data> you could do ([filename][data])* for packing multiple files including binaries
<xentrac>you mean <length>:<data>,
<xentrac>but yes
<xentrac>sounds difficult to audit by hand
<rain1>what's the comma for
<xentrac>redundancy I guess?
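As a concrete illustration of the (filename)(data) packing with netstrings, here is a sketch in POSIX shell for emitting one file's records (foo.c is a placeholder name):

    f=foo.c
    printf '%d:%s,' "${#f}" "$f"                  # filename record: <length>:<name>,
    size=$(( $(wc -c < "$f") ))
    printf '%d:' "$size"; cat "$f"; printf ','    # data record: <length>:<contents>,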
<xentrac>bauen1: yes, that looks right
<xentrac>I'd forgotten it was called D*
<bauen1>alright, i'll give that a read sometime
<deesix>pder, something missing --recursive, maybe?
<stikonas[m]>Yes, or run git submodule update
<stikonas[m]>You need to run it each time submodule is updated too
<deesix>You may even want --force for the update in some cases.
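For anyone following along, the usual commands are (standard git; the clone URL is the live-bootstrap repo mentioned above):

    git clone --recursive https://github.com/fosslinux/live-bootstrap
    # or, inside an existing checkout:
    git submodule update --init --recursive
    # and, if a submodule has been rewritten upstream:
    git submodule update --force --recursive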
<xentrac>hmm actually CP/M file sizes were 16 bits, which would seem to permit files up to a truly excessive 8 MiB, so they could have used a smaller block size to have less slack
<stikonas>ok, hopefully will have something to push in 40 minutes for live-bootstrap, testing now
<fossy>stikonas: ^.^ thanks!!
<stikonas>fossy: or maybe I can push and you can review while I'm testing
<fossy>sure
<stikonas>at least 3 commits out of 4 are tested
<stikonas>just last one needs testing
<stikonas>wth...
<stikonas> Pull request creation failed. Validation failed: You can't perform that action at this time.
<stikonas>oh, have to log out of my work account :D
<stikonas>fossy: pder and whoever else wants to look at: https://github.com/fosslinux/live-bootstrap/pull/6
<stikonas>fossy: one thing that didn't work for me is PATH in after.kaem.run...
<stikonas>probably related to that discussion yesterday
<stikonas>for cp we also use /after/bin/cp instead of cp...
<stikonas>anyway, it's nice that we could get sed 4 instead of sed version 1 (which guix did)
<xentrac>yeah!
<fossy>stikonas: i believe it's that bug that has since been fixed
<stikonas>yeah, once kaem is updated, we can try to remove /after/bin prefix
<stikonas>I guess we now need either bash or make...
<fossy>k reviewing now
<fossy>we might have a much simpler rust bootstrap soon - https://github.com/Rust-GCC/gccrs
<stikonas>yeah, somebody showed it here yesterday
<fossy>oh
<stikonas>but can it build rustc/cargo or not yet...
<stikonas>anyway, I think we are almost done with kaem scripts and soon can switch to proper shell
<stikonas>I just realized kaem and make are anagrams. OriansJ, I guess that is intentional?
<fossy>stikonas: review complete
<fossy>only a few things, most thing i care about is a tar submodule
<vagrantc>should have called it "mak"
<vagrantc>e.g. a partial make :)
<fossy>stikonas: lol i think it is intential
<malina>wow fossy. that would be great. I have to admit, when bootstrapping rust, I wasn't exactly quiet about it. And had it not been for mrustc, by an Australian gentleman I think, I would have probably blown the internet. the ear drums, I mean :D
<xentrac>vagrantc: three-letter names are too hard to google
<fossy>malina: so they are, i did not know they were australian
<vagrantc>xentrac: true enough, kaem i'm guessing is pretty unique :)
<xentrac>actually there's a graffiti artist near here who uses it as a tag
<xentrac>I keep meaning to take a photo and post it
<malina>ah, that was of course not so important technically, but I just seem to recall he was. if it wasn't for that, and I'd have had to do the entire ocaml, I would have 'hated' rust more than I do now. it HAS come a long way since 2017 or so, optimisation-wise.. but I still am more of a D person
<rain1>hi
<malina>ok
<malina>:D
<stikonas>fossy: hmm, I need to check last commit first, something failed
<stikonas>gunzip succeeded but then tar failed
<stikonas>or maybe it created .gz.gz instead...
<stikonas>hmm
***puckipedia is now known as puck
<fossy>stikonas: hm
<fossy>odd
<stikonas>yeah, I'll check once I get tar fixed...
<stikonas>or maybe I should do that first, since testing takes some time...
<OriansJ>bauen1: my belief is a bootstrap kernel should be as small as possible in terms of lines of source code and toolchain dependencies. Ideally buildable by M2-Planet on bare metal but powerful enough to run GCC/TCC to bootstrap Linux. The features are indeed important and honestly would be better off written in Rust with formal proofs of correctness as a competitor to Linux but that is just my minimal perspective on that.
<stikonas>tcc should be enough to bootstrap linux?
<xentrac>I keep wondering if formal proofs of correctness might be able to change the equation on things like Forth and APL
<xentrac>malina: were you able to recover your lamentably lost files?
<OriansJ>stikonas: well yes kaem is an anagram of make; as what I needed is the world's worst build tool possible.
<OriansJ>as people love to shit on make, it only seemed fitting to name kaem after it. ^_^
<malina>xentrac, had a backup from when I rsynced this box to a server. So i grabbed that from may, and even the bootstrap failed (had missing c++ headers) but a reinstall of gcc and I am back in business. Basically, when I rebuild every package (which I typically do anyway during a full rebuild), I guess they will 'come back'
<malina>and it's a blessing in disguise; as now I finally have no other option but to keep working on the bootstrapping updates I've been doing for a month (running away to cyberpunk sometimes).
<xentrac>that is good!
<stikonas>fossy: ok, I think I know why last commit failed
<stikonas>it's actually a problem in the gzip commit... cp eats the executable bit
<stikonas>I guess I'll just run chmod 755...
<stikonas>fossy: can you review that bash function in the last commit?
<stikonas>I'm rerunning the test now after adding chmod
<fossy>stikonas: yeah, i am aware that cp eats the executable bit
<fossy>nothing we can do about that because of no stat()
<stikonas>yeah, I saw it before, just got caught unaware...
<stikonas>forgot about it...
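The workaround amounts to restoring the mode by hand after each copy, e.g. (paths are illustrative):

    cp gzip /after/bin/gzip
    chmod 755 /after/bin/gzip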
<stikonas>in principle we only need gunzip, but I'm creating all 3 files there
<fossy>stikonas: you can get the bname much more simply, just use '${url%%.*}'
<stikonas>oh ok
<fossy>other than that looks great
<stikonas>fossy: that fails for me...
<stikonas>it prints https://ftp
<stikonas>oh, I need basename first
<fossy>yah
<fossy>sorry
<stikonas>hmm, still fails
<stikonas>too many dots everywhere...
<stikonas>in url, in version number...
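The underlying problem: %% strips the longest suffix starting at the first dot, so on a full URL (or anything with a dotted version number) it removes far too much. Splitting it into two steps avoids that (a sketch; the sed 4.0.7 URL is a stand-in, and a plain .tar file like gzip-1.2.4.tar would need its own suffix case):

    url=https://ftp.gnu.org/gnu/sed/sed-4.0.7.tar.gz
    tarball=${url##*/}          # sed-4.0.7.tar.gz  (strip everything up to the last /)
    bname=${tarball%.tar.gz}    # sed-4.0.7         (strip only the known extension)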