IRC channel logs

2021-06-05.log

<oriansj>Melg8: it is what janneke usually uses as a base test for mescc if I remember correctly
<Melg8>okay)
<Melg8>thanks)
<oriansj>The exact details of MesCC's behavior aren't something I am deeply familiar with, but the output of MesCC is just standard M1 files, like M2-Planet produces; M1 converts them to hex2 files and hex2 glues everything together into the final binary.
<oriansj>Which is why I probably should harmonize MesCC and M2-Planet's M1 defines to make a common M2libc an easier sell.
<Melg8[m]>Does order matter when doing this: "catm ${libdir}/x86-mes/libc.a eputs.o oputs.o globals.o exit.o _exit.o _write.o puts.o"?
<Melg8[m]>and btw, is there no sha256sum check for libc-mini.a/.s?
<oriansj>Melg8: well hex2 supports both backwards and forward references and M1 doesn't care. So the order of the M1 files doesn't actually matter in terms of functionality, but it does affect the final SHA256SUM
<Melg8>oriansj: okay, then I will keep it)
<oriansj>but generally always put the DEFINEs before using the defined names, as M0 will not support such cases.
<oriansj>(M0 requires you to do DEFINE foo 123 before you can use foo)
<oriansj>M1 uses a great big hash table to speed up the application of the defines to an O(1) operation (or O(n) for all of the applications being performed, with n being the number of unique DEFINEs, numbers and raw strings)
<oriansj>combined with the M2libc FILE* enhancements, it provides a 20x speed improvement for mescc-tools builds
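
(A rough C sketch of the hash-table technique oriansj describes: DEFINE names are hashed once, so each token lookup during macro application is O(1) on average. The bucket count, hash function and entry layout below are illustrative guesses, not M1's actual code.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BUCKETS 4096            /* power of two, so we can mask instead of mod */

    struct entry {
        char *name;                 /* DEFINE name, e.g. "foo" */
        char *expansion;            /* replacement text, e.g. "7B" */
        struct entry *next;         /* collision chain */
    };

    static struct entry *table[BUCKETS];

    /* FNV-1a style string hash */
    static unsigned hash(const char *s)
    {
        unsigned h = 2166136261u;
        while (*s) { h ^= (unsigned char)*s++; h *= 16777619u; }
        return h & (BUCKETS - 1);
    }

    static void add_define(const char *name, const char *expansion)
    {
        struct entry *e = malloc(sizeof *e);
        e->name = strdup(name);
        e->expansion = strdup(expansion);
        e->next = table[hash(name)];
        table[hash(name)] = e;
    }

    /* O(1) on average: one hash plus a short collision chain */
    static const char *lookup(const char *token)
    {
        for (struct entry *e = table[hash(token)]; e; e = e->next)
            if (!strcmp(e->name, token))
                return e->expansion;
        return NULL;                /* not a DEFINE: emit the token unchanged */
    }

    int main(void)
    {
        add_define("foo", "7B");    /* DEFINE foo 7B */
        const char *r = lookup("foo");
        printf("%s\n", r ? r : "foo");   /* prints 7B */
        return 0;
    }
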
<Melg8>this is very nice!
<Melg8>i found that mes builds are ... not so fast
<oriansj>Melg8: well it is an optimizing C compiler being interpreted on a very primitive scheme interpreter (M2-Planet's fault there)
<oriansj>It goes a bit faster on guile
<Melg8[m]>stikonas: is the sha256sum of lib/x86-mes/libc.a checked anywhere? (I'm not talking about lib/libc.a) from here (https://github.com/fosslinux/live-bootstrap/blob/8210cc9e24b5495957a074f59a353ca68a1de0a0/sysa/mes/mes.kaem#L189)
<Melg8[m]>oriansj: btw, question about catm - if I have files x y z and I do "catm result x y z", would result be the same as if I do "catm result_1 x y; catm result result_1 z"?
<amirouche>(forth is brilliant!)
<oriansj>Melg8: catm reads its inputs from left to right and writes them, in that order, to the file given as the first argument. So catm temp1 x y; catm result temp1 z; would mean the exact same thing as catm result x y z, with the addition of creating the temp1 file containing the contents of x and y
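
(A minimal C model of the catm behaviour described above, not the real mescc-tools implementation: the first argument is the output file and the remaining files are appended to it left to right. Because plain concatenation is associative, "catm temp1 x y; catm result temp1 z" produces the same result file as "catm result x y z", plus the intermediate temp1.)

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: catm output [input...]\n");
            return 1;
        }

        FILE *out = fopen(argv[1], "wb");
        if (!out) { perror(argv[1]); return 1; }

        /* append each input file to the output, left to right */
        for (int i = 2; i < argc; i++) {
            FILE *in = fopen(argv[i], "rb");
            if (!in) { perror(argv[i]); return 1; }
            int c;
            while ((c = fgetc(in)) != EOF)
                fputc(c, out);
            fclose(in);
        }
        fclose(out);
        return 0;
    }
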
<oriansj>amirouche: we have multiple bootstrapped FORTHs available if you wish to hack on them
<Melg8[m]>oriansj: big thanks!)
<xentrac>yay bootstrapped forths :)
<Melg8[m]>xentrac: what can forth give us?)
<xentrac>Melg8[m]: I'm not sure I understand the question?
<oriansj>Melg8: every programming language is just an option for someone to pick up and make something useful.
<oriansj>So if someone really likes FORTH there is nothing stopping them from creating some useful thing in it, except for their ability or desire to do so.
<oriansj>So having multiple FORTHs is just an option for people who wish to use a FORTH, not something that actually provides any real value to actual bootstrapping work.
<oriansj>Thus far, C and assembly seem to be the most productive bootstrapping languages.
<xentrac>it might provide real value, but so far that's an unproven hypothesis
<siraben>blynn has pushed more commits to upstream blynn-compiler, but I don't currently have time to rebase my changes on top
<siraben>in particular it looks like the module system is taking shape
<crabbedhaloablut>The front page of https://bootstrappable.org/ still points at freenode for the IRC channel
<oriansj>rekado: please take a minute to update that
<oriansj>siraben: looks like it isn't quite done, but once it is, that's when we should probably incorporate those improvements.
<stikonas>Melg8[m]: probably not, looks like we only calculate fletcher16 checksum
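
(For reference, a textbook Fletcher-16 in C; the checksum code actually used by live-bootstrap may differ in details such as how files are read, so treat this only as the general algorithm.)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint16_t fletcher16(const uint8_t *data, size_t len)
    {
        uint16_t sum1 = 0, sum2 = 0;
        for (size_t i = 0; i < len; i++) {
            sum1 = (sum1 + data[i]) % 255;   /* simple running sum */
            sum2 = (sum2 + sum1) % 255;      /* sum of the sums: position-sensitive */
        }
        return (uint16_t)((sum2 << 8) | sum1);
    }

    int main(void)
    {
        const char *msg = "abcde";
        printf("%04x\n", fletcher16((const uint8_t *)msg, strlen(msg)));  /* c8f0 */
        return 0;
    }
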
<pabs3>rekado_: ^^
<NieDzejkob>IMHO a FORTH is the best solution if you're starting from baremetal with a bootsector-sized seed
<NieDzejkob>I'm actually pursuing this strategy right now: https://github.com/NieDzejkob/miniforth
<Hagfish>"At this moment, not counting the 55 AA signature at the end, 493 bytes are used, leaving 17 bytes for any potential improvements."
<Hagfish>fantastic
<NieDzejkob>I'm hoping I can fit in at least some parts of a block editor, so that the initial bootstrap on top of that won't need to be typed in twice :D
<Hagfish>i like the fact it can load extra 1K blocks
<Hagfish>i guess it could rewrite its own bootloader at some stage, and reboot into the new system?
<NieDzejkob>I don't think I'm going to do that, loading the source code seems less error-prone
<NieDzejkob>but actually writing the source code into these blocks is what I'm concerned with right now
<NieDzejkob>eh, first draft: 532 bytes used...
<oriansj>NieDzejkob: I look forward to seeing that being proven true (best way to learn ^_^) but worst case we still have: https://github.com/nanochess/bootOS to fall back on.
<xentrac>NieDzejkob: awesome! that's great work!
<xentrac>NieDzejkob: how do you get by without any conditionals though?
<xentrac>oh, with self-modifying code?
<xentrac>if I understand correctly you could probably get rid of primitive + and define it as something like : + 0 swap - - ;
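
(The stack effect behind that suggested definition, traced in comments, with the underlying identity checked in C:)

    #include <assert.h>

    /* : + 0 swap - - ;   with a b on the stack:
     *   0     -> a b 0
     *   swap  -> a 0 b
     *   -     -> a (0-b)          i.e. a, -b
     *   -     -> a - (-b) = a + b
     */
    int main(void)
    {
        int a = 7, b = 5;
        assert(a - (0 - b) == a + b);
        return 0;
    }
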
<mihi>NieDzejkob, unless you find a first-generation 8088 CPU (which had a hardware bug), POP SS with interrupts enabled won't create a race condition, as it delays interrupts by one instruction, and the next instruction will fix the stack pointer.
<mihi>(there is another caveat: there was one processor which did not delay interrupts if POP SS comes directly after an STI instruction, but that is not the case in your code either)
<mihi>more likely you will first run into buggy BIOSes which believe they should "fix" the BIOS Parameter Block in memory after having loaded the boot sector...
<mihi>rekado_, thank you for fixing the logs :)
<xentrac>mihi: ooh, I didn't know that about POP SS
<NieDzejkob>mihi: TIL! The x86 manual corroborates your story :D
<NieDzejkob>xentrac: in general I'm not hoping to eliminate adding primitives afterwards entirely, just minimize it and make it easy
<NieDzejkob>in 2klinux I managed to implement branches in pure forth — : BRANCH R> @ >R ; : 0BRANCH 0= DUP R@ @ AND SWAP INVERT R> CELL+ AND OR >R ;
<NieDzejkob>iirc it affected performance a lot, though
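
(A C sketch of the branch-free selection that the pure-Forth 0BRANCH above relies on: Forth's 0= yields an all-ones mask (-1) when the flag is zero and 0 otherwise, so ANDing and ORing picks the new instruction pointer without a conditional jump. The names below are illustrative, not taken from 2klinux.)

    #include <stdint.h>
    #include <stdio.h>

    static intptr_t next_ip(intptr_t flag, intptr_t branch_target, intptr_t fall_through)
    {
        intptr_t mask = (flag == 0) ? ~(intptr_t)0 : 0;     /* Forth: 0= */
        /* (mask AND target) OR (INVERT mask AND fall-through) */
        return (mask & branch_target) | (~mask & fall_through);
    }

    int main(void)
    {
        printf("%ld\n", (long)next_ip(0, 100, 200));   /* flag is zero: take the branch, 100 */
        printf("%ld\n", (long)next_ip(1, 100, 200));   /* flag non-zero: fall through, 200 */
        return 0;
    }
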
<xentrac>oh, interesting! I hadn't thought of that at all!
<xentrac>the performance loss is probably fine if the objective is, like, to be able to write your next compiler in a high-level language instead of machine code or assembly language
<xentrac>in SKF instead of implementing the stack manipulations as primitives I implemented them with VARIABLE, @, and !
<xentrac>(stoneknifeforth)
<NieDzejkob>oh, you're the author of stoneknifeforth?
<xentrac>yeah
<xentrac>didn't know you knew of it!
<xentrac>is R@ "get the return stack pointer" rather than "get the top item of the return stack"?
<NieDzejkob>R@ is R> DUP >R
<xentrac>huh, why R@ @ then?
<xentrac>oh!
<NieDzejkob>for the same reason BRANCH does R> @ >R
<xentrac>that's loading the branch address, sorry
*xentrac ← dumb
<NieDzejkob>re: SKF, funny thing. I recently went back to an old reddit post that kinda inspired me to work on bootstrapping. Turns out the most comprehensive response there was yours :D
<xentrac>did you know that Turing's original design for conditional branches used multiplication?
<NieDzejkob>(went back to reference it in a blog post I'm writing)
<NieDzejkob>no, tell me more!
<xentrac>you would branch to c*x + (1 - c)*y where x and y were the possible branch destinations and c was the condition
<xentrac>I think he changed that before the first actual machine though :)
<xentrac>very similar to your 0BRANCH but in a different algebra
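
(The same idea as a tiny C example: with c restricted to 0 or 1, the next address c*x + (1-c)*y selects between the two destinations by arithmetic alone.)

    #include <stdio.h>

    /* Turing-style conditional branch: multiplication instead of a jump */
    static unsigned branch_target(unsigned c, unsigned x, unsigned y)
    {
        return c * x + (1 - c) * y;
    }

    int main(void)
    {
        printf("%u\n", branch_target(1, 100, 200));   /* condition true:  100 */
        printf("%u\n", branch_target(0, 100, 200));   /* condition false: 200 */
        return 0;
    }
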
<xentrac>I'm glad that old reddit post keeps on giving ♥
<xentrac>so I think the problem that normally afflicts bootstrapping through Forth is, sort of ironically, the agricultural-programmer problem
<xentrac>Chuck Moore complained that a lot of programmers find a problem and decide to set up shop on that problem, creating a progressively more elaborate solution, while he (the nomadic programmer) sees problems as things to be solved in order to move on to the next problem
<xentrac>and I think sort of the problem with Forth, and maybe minimality in general, is that it's always obvious that the program we have can be improved (though typically by making it simpler rather than more elaborate)
<xentrac>or sometimes by writing lipogrammic programs that, for example, totally abstain from using variables
<xentrac>so it's very easy to get started writing a bootstrapping Forth to implement a small Lisp in so you can implement Scheme or Maru or whatever
<xentrac>and then, for example, never actually get to writing the Lisp, because you keep seeing ways your Forth can be reduced from 448 bytes to 443, etc.
<xentrac>in a sense you could say that's where I've been stuck since writing SKF :)
<NieDzejkob>too bad I don't have any paperback reference that could be self-sufficient. That way I could just put my FORTH on a thumbdrive and force myself to only use bootstrapped software for a while :)
<NieDzejkob>let's hope that the immersion I can otherwise get will be sufficient, I guess?
<xentrac>reference to what, the CPU instruction set?
<xentrac>UHCI? :)
<NieDzejkob>that's the thing, designing something sufficient in advance is a separate project
<xentrac>hmm
<NieDzejkob>I've got 13 bytes to fill with either EMIT, -, or some other small primitive. What would you choose?
<NieDzejkob>oh who am I kidding, EMIT is surely going to be useful very soon
<NieDzejkob>I found a machine in the attic that I can use for my experiments, BTW
<xentrac>oh cool!
<xentrac>I think the most interesting bootstrapping platform is the universe
<xentrac>we already have matter compilers and they have been very successful, but they are very difficult to program; genetic engineering is slow, error-prone, and still limited to cargo-culting working code. programming by copy and paste
<xentrac>we need matter compilers that we can program in practice
<NieDzejkob>yeah, but that's way out of my depth :D
<xentrac>oh, I think it's easier than we think. although I could be wrong; I won't know until someone succeeds
<vagrantc>i don't know how we're going to checksum matter for reproducibility
<xentrac>vagrantc: that's a very real and urgent problem
<Melg8>Oh guys (and girls), I returned here and started reading the last posts... and then got confused about what "matter" is - is it a new language? ... then I re-read the whole thing, and yeah - a checksum for matter would be nice) but is the universe reproducible anyway?)
<xentrac>haha, sorry
<xentrac>being able to compile new universes would be cool but we don't have any reason to believe it's possible. compiling new hardware from its design files, aka source code or genome, is something we do all the time
<Melg8>btw, is there a theory of self-assembly - for example, what does the best "strategy" look like? because you can assemble some utils that are needed, or you can assemble an assembler for a more sophisticated (higher-level?) language
<Melg8>i.e., how to spend less tape (or fewer jumps) in a Turing machine, and get a more capable program assembled for execution
<xentrac>self-assembly in what sense? like, writing an assembler in assembly language?
<Melg8>i mean, like in live-bootstrap you have some initial small program in "native" bytes - and it gets some "code" as input and spits out some new programs (or additions to itself)
<Melg8>from "human readable" TM to executable assembly
<xentrac>oh, I see
<Melg8>imagine 20 (10) years from now - we would face the problem of how to bootstrap general AI )
<xentrac>I don't think we have a well developed theory
<xentrac>a lot of the evaluation of "best strategy" seems to depend on human psychology, which is very poorly understood
<xentrac>so far it's been a matter of "try it and see"