IRC channel logs

2019-09-29.log


<xentrac>OriansJ: cool! I'm reluctant to delete things from wikis that I don't understand in many cases
<xentrac>because I worry that they might just be things I don't understand instead of things that are wrong
<OriansJ>xentrac: that is why wikis have histories. As for why that article said one needs lazy evaluation for macros in eager-evaluation lisps: consider `(if (not boom) (something important) (exit EXIT_FAILURE))`. If `if` is not a primitive but a macro, an eager evaluation strategy will evaluate the `exit` even when you are not going to exit, and the resulting side effect is the incorrect termination of the lisp program.
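The trap described above can be sketched in Python, with ordinary function arguments standing in for eagerly evaluated Lisp forms (the names `my_if`, `something_important`, and `exit_failure` are illustrative, not taken from any of the projects linked here):

```python
# If "if" is an ordinary function in an eagerly evaluated language,
# every argument is evaluated before the call is made, so the side
# effect in the untaken branch still fires.
log = []

def something_important():
    log.append("ran something important")
    return "ok"

def exit_failure():
    log.append("exited")  # the side effect we did NOT want
    return "EXIT_FAILURE"

def my_if(cond, then_val, else_val):
    # by the time control reaches here, both branches have already run
    return then_val if cond else else_val

boom = False
result = my_if(not boom, something_important(), exit_failure())
# result is "ok", but log now holds BOTH entries: the untaken
# exit_failure branch ran anyway, purely because evaluation was eager.
```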
<OriansJ>eval in an eager lisp is quite trivial: https://github.com/oriansj/Slow_Lisp/blob/master/lisp_eval.c#L168 ; even the ugly bits are simple: https://github.com/oriansj/Slow_Lisp/blob/master/lisp_eval.c#L217 and when you look at the apply https://github.com/oriansj/Slow_Lisp/blob/master/lisp_eval.c#L114 it is easy to convince oneself that the all-powerful lisp is easy to implement.
<OriansJ>Then one gets to macros, the side effects of eager evaluation start showing up, and you have to choose how to solve the macro problem. You can force your reader to deal with the macros, so that the eval will never know; or shoe-horn lazy evaluation into your eval process; or bite the bullet, implement lazy evaluation, and macros become just a simple lambda.
<OriansJ>here is the complexity of a lisp reader without support for macros: https://github.com/oriansj/Slow_Lisp/blob/master/lisp_read.c ; nothing too bad in C, but writing that in assembly while reading the source code was 10 hours of work: https://github.com/oriansj/stage0/blob/master/stage2/lisp.s#L124 and from that experience I now understand why, although the original kernel of lisp was done quickly, it took months of brilliant minds hacking on it before it hit a useful state.
<OriansJ>contrast that with https://github.com/oriansj/stage0/blob/master/stage2/cc_x86.s ; which was producing useful output within 4 hours of work, with each additional bit of functionality being a simple addition and test.
<OriansJ>I thus understand from direct experience how C was hacked together in a weekend and why Lisp took months. The weakness of C was that the compiled binaries were slow and inefficient, but adding optimization passes is rather trivial.
<OriansJ>and having written a FORTH https://github.com/oriansj/stage0/blob/master/stage2/forth.s ; I must say the easiest language on the planet to bootstrap (after macro assembly of course) is actually a C subset (I call mine M2-Planet). Even the advanced features like type tracking, structs, unions, inline assembly and gotos end up being rather trivial to add.
<OriansJ>However there is a possibility I missed something essential when I aborted M2-Moon. A compiled lisp might be easier than a compiled C to implement in assembly, especially since the reader would be far simpler, and since it would be lazy (runtime) evaluation by default but could use an eager (compile-time) evaluation implementation.
<OriansJ>If someone wanted to build such a thing (thus allowing us to skip the cc_* -> M2-Planet -> mes-m2 -> MesCC process and instead go thing -> MesCC), say in simple C, I'd be willing to write it in assembly and thus be forced to admit lisp is a better language to bootstrap than C. [If any lisp zealots want the fun of writing a lisp compiler]
<OriansJ>(what can I say, I like being proven wrong and learning a better way)
<rain2>jackhill: yes, would be great to build them from guile!
<xentrac>OriansJ: thanks for the reassurance :)
<xentrac>it's true that macro evaluation is not eager. the standard approach to implementing macros is to transform the code so that, as you say, the eval will never know
<xentrac>shoehorning lazy evaluation into your eval process is called adding fexprs; fexprs are distinct from macros but you can use them to do the same thing
<xentrac>and it's true that for macros like if, lazy evaluation replaces macros
<xentrac>but not macros like let or defun
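The distinction can be sketched in Python with thunks standing in for lazy evaluation (`lazy_if` is an illustrative name, not from any real Lisp):

```python
# With lazy evaluation (simulated here by wrapping branches in thunks),
# "if" needs no macro: only the branch actually taken is ever forced.
fired = []

def lazy_if(cond, then_thunk, else_thunk):
    return then_thunk() if cond else else_thunk()

result = lazy_if(True,
                 lambda: fired.append("then") or "then-value",
                 lambda: fired.append("else") or "else-value")
# Only the taken branch ran: fired == ["then"].
# But "let" or "defun" cannot be recovered this way: a thunk delays a
# *value*, while let/defun need the unevaluated *name* being bound,
# which only a macro (or fexpr) ever gets to see.
```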
<xentrac>as for bootstrapping C subsets, wouldn't it be even easier to bootstrap a semantically-C-subset language with Forth "syntax"?
<xentrac>or for that matter Lisp syntax
<xentrac>normally macro expansion happens in a separate stage after read and before compilation or evaluation
<xentrac>that way, (quote (if a b c)) evaluates to a list of four things whose car is if, rather than, say, cond
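That pipeline can be sketched in Python, with S-expressions modeled as nested lists (the `if`-to-`cond` rewrite is a toy example, not how any real Lisp defines `if`):

```python
def expand(form):
    """Macro-expansion pass, run after read and before eval/compile.
    Rewrites (if c t e) into (cond (c t) (else e)) as a toy macro,
    but never descends into quoted forms: quoted data is not code."""
    if not isinstance(form, list) or not form:
        return form            # symbols and atoms pass through
    if form[0] == "quote":
        return form            # leave quoted data untouched
    if form[0] == "if":
        _, c, t, e = form
        return ["cond", [expand(c), expand(t)], ["else", expand(e)]]
    return [expand(x) for x in form]

expanded = expand(["if", "a", "b", "c"])
# -> ["cond", ["a", "b"], ["else", "c"]]
quoted = expand(["quote", ["if", "a", "b", "c"]])
# -> ["quote", ["if", "a", "b", "c"]]: still a list whose car is "if"
```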
<xentrac>fexprs aren't just lazy evaluation either; like macros, they get the raw S-expression form of their arguments, so have to invoke eval if they want to treat them as code. this makes for an uneasy mismatch with lexical scoping, which is, e.g., resolved in Tcl with "uplevel"
<xentrac>but of course pre-Common-Lisp Lisps were mostly not lexically scoped, so eval would usually just do the right thing, except when it didn't
<xentrac>I mean that the fexpr could (usually) get the right behavior by invoking eval
<xentrac>the Wikipedia page on fexprs doesn't sufficiently call out the major difference between fexprs and macros, which is that fexprs run at runtime and macros run at compile time (which is the reason that the results of the macro are evaluated and the results of the fexpr aren't)
<xentrac>you *can* implement let, defun, and setq as fexprs, even though you can't implement them with just lazy evaluation
<xentrac>as ordinary functions
<xentrac>(it does call out the difference eventually, just not in the introductory text where maybe it should)
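The fexpr contract can be sketched with a minimal evaluator in Python over nested lists (all names here are illustrative; this is not drawn from any real Lisp implementation): the fexpr gets its arguments raw at run time, invokes eval itself, and its result is not re-evaluated.

```python
FEXPRS = {}

def s_eval(form, env):
    if isinstance(form, str):           # symbols are looked up
        return env[form]
    if not isinstance(form, list):      # numbers etc. are self-evaluating
        return form
    if form[0] in FEXPRS:
        # a fexpr receives its arguments raw and unevaluated, at run
        # time, and whatever it returns is NOT evaluated again
        return FEXPRS[form[0]](form[1:], env)
    fn = s_eval(form[0], env)
    args = [s_eval(a, env) for a in form[1:]]   # ordinary eager evaluation
    return fn(*args)

def fexpr_setq(args, env):
    # (setq name expr): "name" is used as data; only "expr" is
    # evaluated, and the fexpr itself invokes eval to do it
    name, expr = args
    env[name] = s_eval(expr, env)
    return env[name]

FEXPRS["setq"] = fexpr_setq

env = {"x": 1}
s_eval(["setq", "y", "x"], env)
# env["y"] is now 1; the symbol "y" was never evaluated, which is
# exactly what a plain function (or mere lazy evaluation) cannot do
```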
<OriansJ>xentrac: very nicely said
<xentrac>thanks
<OriansJ>I guess the most important part is that macro functionality must be part of the implementation of the lisp from day zero; it isn't something you can really add on top of a lisp that lacks it.
<xentrac>You mean, without modifying the interpreter?
<OriansJ>indeed at the implementation level
<xentrac>You can add macros (or fexprs, or lazy evaluation) after day zero, but you do have to modify the implementation to add them
<OriansJ>in serious ways; especially if the implementation is in assembly
<xentrac>Macros can be added in a pretty straightforward way
<xentrac>They just transform the S-expressions after read but before compilation or evaluation
<OriansJ>correct
<xentrac>Fexprs or lazy evaluation do require deeper surgery, and fexprs in particular can make a lot of optimizations impossible
<OriansJ>which is why scheme lacks them if I remember correctly
<xentrac>yeah. also common lisp
<xentrac>but you can add a macro system to a compiler for any language as an afterthought, as long as you have an AST
<OriansJ>to manipulate
<xentrac>right
<xentrac>a Crenshaw-style or TCC-style AST-less compiler makes it quite a bit more difficult
<OriansJ>or functionally impossible
<xentrac>I wouldn't go that far :D
<xentrac>I mean both of us have already programmed things that most programmers would think impossible
<OriansJ>true
<OriansJ>and some things are just in the class of problems where solving them is entirely the wrong path
<OriansJ>e.g. a flat-list AST could be given macro support, but it probably ends up being more work than simply throwing it out and doing it the right way.
<xentrac>heh
<xentrac>I think there's a reasonable argument for compiling your set of macros into a streaming form so you don't have to reify the entire AST at once in memory
<xentrac>but not for compiling them into that form by hand
<OriansJ>hmmm
<xentrac>especially if you can reuse the streaming pattern matching compilation tooling for other things, like backend optimization, database query optimization, or querying JSON documents
<xentrac>but it's clearly in the realm of optimization rather than "zero to one"
<OriansJ>it does have its place
<OriansJ>but to be honest; regex pattern matching has never been my friend when solving problems
<xentrac>yeah, regexes are not a good way to define Lisp macros :D
<xentrac>in Ur-Scheme I didn't implement user-defined macros, but I did implement some special forms as built-in macros; look in http://canonical.org/~kragen/sw/urscheme/compiler.scm.html for "(define macros"
<OriansJ>it usually results in me being too clever by half
<xentrac>the code there is maybe a little bit painful because the macros are written without quasiquote!
<xentrac>but for example Scheme's `define` is defined as a macro in terms of a primitive `%define` which doesn't understand the `(define (fn arg1 arg2) ...` syntax
<xentrac>so with five lines of code there, all the function definitions get a lot easier to read
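The shape of that rewrite can be sketched in Python over nested lists (the real five lines live in the linked compiler.scm; this is only an illustration of the transformation):

```python
def expand_define(form):
    # (define (fn a b) body...) -> (%define fn (lambda (a b) body...))
    # (define name value)       -> (%define name value)
    _, target, *body = form
    if isinstance(target, list):        # the (define (fn args...) ...) sugar
        fn_name, *params = target
        return ["%define", fn_name, ["lambda", params] + body]
    return ["%define", target] + body   # plain variable definition

expand_define(["define", ["add", "a", "b"], ["+", "a", "b"]])
# -> ["%define", "add", ["lambda", ["a", "b"], ["+", "a", "b"]]]
```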
<OriansJ>nicely done
<xentrac>adding macro processing to the compilation amounted to factoring out a `compile-toplevel-expanded` function from the body of `compile-toplevel` and defining the latter as `(compile-toplevel-expanded (totally-macroexpand expr))`
<xentrac>as I reconstruct it; I may not be remembering the actual development sequence correctly
<xentrac>with quasiquote that definition of `define` would be
<xentrac>(define-ur-macro 'cond
<xentrac> (lambda (args)
<xentrac> (cond ((null? args) #f)
<xentrac> ((eq? (caar args) 'else) `(begin ,(cdar args)))
<OriansJ>xentrac: the hardest lesson about lisps seems to be that it is all too temptingly easy to implement them the way we like, without realizing a lot of it isn't obvious to others. Which is why only a handful of lisps ever have more than 1 developer...
<xentrac> (else `(if ,(caar args) (begin . ,(cdar args))))
<xentrac>oops, it's missing the trailing cond bit, and also that's the definition of cond rather than define
<xentrac>but you get the idea
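For completeness, here is that cond expansion finished off, hedged into Python over nested lists (it adds the trailing recursive bit the paste was missing; it is not the actual Ur-Scheme code):

```python
def expand_cond(clauses):
    # (cond)                    -> #f
    # (cond (else body...) ...) -> (begin body...)
    # (cond (test body...) r...)-> (if test (begin body...) <expand r...>)
    if not clauses:
        return "#f"
    test, *body = clauses[0]
    if test == "else":
        return ["begin"] + body
    return ["if", test, ["begin"] + body, expand_cond(clauses[1:])]

expand_cond([["a", "x"], ["else", "y"]])
# -> ["if", "a", ["begin", "x"], ["begin", "y"]]
```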
<xentrac>OriansJ: that's an interesting proposition
<OriansJ>C compilers seem to be more like sand paintings
<OriansJ>I haven't found a C compiler that couldn't rather easily be picked up and worked on
<OriansJ>Even deep magic ones like C500 or C4 tend to only require you to learn about which register is the accumulator and how the stack is built
<OriansJ>the rest is plug in here to do X
<xentrac>interesting
<OriansJ>As much as I tried to avoid that with Slow_Lisp; even I can see that 6 months later
<xentrac>what's your experience with Forth compilers?
<OriansJ>Played with them written in FORTH but haven't tried writing one in any other language
<xentrac>I mean in terms of comprehensibility
<xentrac>I've only looked at ones written in assembly; for the first few years they were pretty mysterious
<xentrac>but probably because I'm just slow at things like that
<OriansJ>If you want incomprehensibility look at APL interpreters/compilers written in APL
<xentrac>I've never seen one; have you?
<xentrac>I haven't looked much at Lisp compilers' source code, except maybe SIOD
<OriansJ>I've seen 3, and even 2 hours and a sheet of paper don't let you get past the first line
<xentrac>link?
<xentrac>I wonder to what extent these things are familiarity effects
<OriansJ>here is what APL programmers think clear and simple C code looks like: https://github.com/kevinlawler/kona/blob/master/src/k.c
<xentrac>oh, yeah, I've seen that
<xentrac>but you said in APL
<OriansJ>(Trying to find it but it is hard to web-search for it)
<xentrac>true
<OriansJ>The one I remember the most was an optimizing APL compiler that was written in only 5 lines of APL
<xentrac>I too would like to spend hours trying to reverse-engineer that!
<OriansJ>So that first line was the reader, parser, and tokenizer all in one
<OriansJ>for example in GNU APL there is a simple APL function: https://svn.savannah.gnu.org/svn/apl/trunk/src/testcases/Lambda.tc
<OriansJ>found one: https://github.com/Co-dfns/Co-dfns
<xentrac>cool!
<OriansJ>I just look at this https://github.com/Co-dfns/Co-dfns/blob/master/codfns.dyalog
<OriansJ>and think; nope
<xentrac>haha
<xentrac>yeah, a lot of people have that reaction to calculus classes too
<OriansJ>never had that problem in math classes
<xentrac>I have to say this looks like a bit more than 5 lines
<OriansJ>yep; it was just 750 lines previously
<OriansJ> https://news.ycombinator.com/item?id=13797797
<OriansJ>I mean; honestly why write ⊖⍕⊃⊂|⌊-*+○⌈×÷!⌽⍉⌹~⍴⍋⍒,⍟?⍳0 when you just mean 42?
<OriansJ>The entire APL family is based on the belief that you need to learn the way we do things, otherwise there is no hope for you.
<OriansJ>and I am certain it results in extremely compressed understanding of things
<OriansJ>But that is the thing: is programming about making optimal use of the machine, or about making optimal use of the people?
<xentrac>APL is based on the plausible idea that compressed notation enables compressed thinking
<xentrac>that is, making optimal use of the people
<xentrac>this is plausible because in fact it is true in math, where we constantly do algebraic manipulation that would be very slow and error-prone to do in words
<xentrac>but I am not convinced that APL succeeds at it
<OriansJ>and I know enough K (APL relative) to solve the problem sets in the environments where it is used.
<OriansJ>but words have the great advantage of infinite number and the ability to leverage language to learn more on the fly
<xentrac>K is an APL for the purpose of this discussion
<xentrac>sure
<OriansJ>(APL for finance people honestly)
<xentrac>and I think you can reasonably do algebraic manipulation of expressions where the variables are multiple letters long :D
<xentrac>just not with a pencil
<xentrac>but the objective of APL was to formulate a language where the operators had a formal structure that makes them amenable to formal manipulation
<OriansJ>well there is rigidity in the minds of people who use APL too much
<xentrac>I think Scheme and Haskell have done a better job of this
<OriansJ>absolutely agree
<xentrac>still, there are things that are easier to do in APL (or, say, Numpy or Octave) than in Scheme or Haskell
<xentrac>I don't have a good enough understanding of why that is
<OriansJ>but I can't help but wonder how much the languages we use color the thoughts we are able to have
<OriansJ>for example; why did Nix come from the Haskell community and nowhere else?
<xentrac>for sure. there are definitely times where solving something first in OCaml or Python makes it easy to solve in C
<OriansJ>Why were all of the advances over Nix done in the Lisp community?
<OriansJ>Why didn't any of the bootstrappers come from the demo or similar communities?
<OriansJ>but everyone here is familiar with lisp
<OriansJ>Why did major GUI design ideas come from the smalltalk communties?
<OriansJ>it is almost as if learning a language deletes ideas you could otherwise have, while letting you reach ideas you previously couldn't
<OriansJ>but learning multiple languages expands that set again, to include the ideas that were previously removed.
<xentrac>yeah
<OriansJ>It makes me deeply wonder what other ideas, beliefs, fears or dreams we all have in common.
<xentrac>it's hard to explore the boundaries because we can only explore incrementally
<xentrac>did you look at Dercuano?
<OriansJ>a subset of it but not all
<xentrac>there are some things in there about that
<OriansJ>I've also cloned it
<xentrac>that's good
<xentrac>hopefully it will survive after I'm dead
<OriansJ>once something is useful for others, it tends to survive until it stops being so
<xentrac>not convinced Dercuano is useful for others yet
<xentrac>I mean so far I've only gotten like two corrections
<OriansJ>well if it makes you feel any better it will be part of my offline archive
<OriansJ>which is designed to last for the rest of my life and hopefully the life of my kid(s)
<xentrac>also I can perhaps improve it
<OriansJ>my perspective is that Wernher von Braun has nearly been forgotten
<OriansJ>and there is a strong possibility he might be entirely forgotten in the next 100 years
<OriansJ>and he put a man on the freaking moon
<OriansJ>The only people who might remember me in 100 years are family and friends.
<xentrac>well, hopefully I'll remember you
<xentrac>but I need a better vehicle than this body
<OriansJ>xentrac: are you more hoping for digital upload of your mind or cyborg body upgrades?
<xentrac>probably the second one first, later the first
<OriansJ>well then you have got good odds
<xentrac>I'd say around 20%; not so good
<OriansJ>20% is very good odds when looking at 100+ years