IRC channel logs

2013-12-12.log

<nalaginrut>mark_weaver: thanks for the reply, I'll use wingo's version. But it seems it's not a proper tail call, or could it be eliminated anyway?
<mark_weaver>do you expect very long property lists?
<mark_weaver>well, it could be made tail recursive easily enough, if needed.
<nalaginrut>no just asking
<nalaginrut>and I get nervous when I see a non-proper tail call
<nalaginrut>ah, wait, when did we get match-lambda? last week I used match in another case...hmm
<nalaginrut>oops, it's in (ice-9 match)
<nalaginrut>ok, seems it was added this June
<nalaginrut>no, June 2010...I missed something
<nalaginrut>ok, I'll rewrite it with a proper tail call
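
A hedged sketch, not the actual code from the log (the procedure names are made up), of the kind of rewrite being discussed: walking a property list with (ice-9 match), first with the recursive call under cons (not a tail call), then restructured with an accumulator so the self-call is in tail position.

  (use-modules (ice-9 match))

  ;; Non-tail version: the recursive call sits under 'cons',
  ;; so each key/value pair grows the stack.
  (define plist->alist
    (match-lambda
      (() '())
      ((key val . rest) (cons (cons key val) (plist->alist rest)))))

  ;; Tail version: the result is threaded through an accumulator,
  ;; so the self-call is a proper tail call and runs in constant stack.
  (define (plist->alist* plist)
    (let loop ((plist plist) (acc '()))
      (match plist
        (() (reverse acc))
        ((key val . rest) (loop rest (cons (cons key val) acc))))))
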
<l_a_m>hi
<l_a_m>I'm looking for a tutorial on making a RESTful web service using GNU Guile
<l_a_m>does someone have a link?
<nalaginrut>l_a_m: I'm working on it, but it's under construction now, web-artanis.com
<nalaginrut>and I'm writing the ORM stuff just now
<nalaginrut>anyway, there's no release yet, but the current master is workable
<nalaginrut>l_a_m: I'm sorry it lacks docs, since I've put all my hacking effort into coding
<l_a_m>interesting
<l_a_m>thanks
<nalaginrut>comments/patches are welcome ;-)
<b4283> http://paste.lisp.org/display/140451 ;; Conway's Sequence generator
<b4283>just realized that there's no tail recursion for some form like this: (let L (...) (begin ... (L ...)))
<nalaginrut>why do you need 'begin' here?
<b4283>nalaginrut: for displaying
*nalaginrut still doesn't understand
<b4283>because it's right under the (if
<nalaginrut>any example? ;-)
<b4283>nalaginrut: checkout my paste
<nalaginrut>well, I think it's tail-call safe in 'begin' or 'let'
<nalaginrut>sorry, proper tail call
<nalaginrut>(L here is a proper tail call within the (begin
<b4283>hmm
<nalaginrut>so it's safe
<nalaginrut>b4283: http://www.gnu.org/software/guile/manual/html_node/Tail-Calls.html
<b4283>nalaginrut: huh, yes it is, i don't know what happened there
<nalaginrut>so it overflowed?
<b4283>stack overflowed, yes
<nalaginrut>oh, I have to go now, see you tomorrow
<nalaginrut>8
<b4283>8
<taylanub>b4283: I think (begin ... <here>) is in fact a "tail-context".
<taylanub>Let me check the standards .. (though note that there are some contexts which aren't specified as being tail-contexts as per the standards, but can be tail-call eliminated anyhow) ..
<b4283>taylanub: yes, nalaginrut clarified that for me
<taylanub>Ah OK, didn't read enough. :)
<taylanub>(And yes, the R5RS explicitly lists (begin ... <this-thing>) to be in a tail-context.)
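
A minimal sketch of the shape b4283 asked about: in a named let, a call in the last position of a begin is still in tail position, so the loop below runs in constant stack no matter how large n is.

  (define (count-down n)
    (let loop ((i n))
      (if (> i 0)
          (begin
            (display i) (newline)
            (loop (- i 1))))))   ; tail call: last expression of the 'begin'
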
<unknown_lamer>oh yeah, now I remember why proxies are useless in a CLOS/GOOPS world: call-next-method lets you invoke the behavior of a superclass
<unknown_lamer>so there's no reason to wrap an instance in another type when you can specialize and then delegate
<unknown_lamer>blech, no way to subclass <generic> either?!
<unknown_lamer>urge to hack on goops internals... becoming harder to resist
<unknown_lamer>I really need to force myself to learn computational geometry first
***DerGuteM1ritz is now known as DerGuteMoritz
<civodul>Hello Guilers!
<davexunit>hello!
<add^_>'lo
<ArneBab>mark_weaver: do you mean the control flow before or after macro-expansion? (I’m mostly asking because GCC already contains lots of work and because I think it would be nice if the GNU extension language could leverage at least parts of GCC)
<mark_weaver>after macro-expansion. I don't really know what control-flow means before macro-expansion.
<mark_weaver>I can sympathize with wanting to reuse the many years of excellent work put into GCC's optimizers, believe me. I've thought about it a lot.
<mark_weaver>every optimization pass would have to be audited to make sure it didn't make assumptions about control flow that aren't valid in Scheme. many of them would have to be generalized. there would have to be an ongoing commitment on the part of the GCC community to keep in mind the needs of Scheme.
<mark_weaver>I just don't think it's feasible.
<mark_weaver>it would be a multi-year project at least, that's for sure.
<mark_weaver>and that's assuming that we got full cooperation from the GCC community.
<tromey>what are the things that aren't valid in scheme?
<mark_weaver>well, most notably, continuations can be invoked an arbitrary number of times.
<tromey>I thought guile already had to solve that generally for C code
<mark_weaver>non-local exits can happen, and then be reentered from above.
<mark_weaver>well, ideally we would solve it generally for C code, but we don't. there are additional constraints put on continuations in the presence of C stack frames.
<mark_weaver>to be honest, I don't remember all the details, since I generally avoid using C in my Guile programs, except at the lowest levels.
<mark_weaver>but as I recall, if you do a non-local exit through C stack frames, then you cannot reenter.
<mark_weaver>and a corollary to that is that continuations to C code can be invoked only once.
<tromey>I couldn't quickly find anything about this in the guile C docs
<tromey>how does the guile C code itself work?
<mark_weaver>thus we don't violate the assumptions made by GCC.
<mark_weaver>can you ask a more specific question?
<tromey>much of guile is written in C
<tromey>presumably parts of this C code can be captured in a continuation and re-entered
<mark_weaver>yes
<tromey>how does that work?
<tromey>the point being you can emit that same IR to the GCC JIT
<mark_weaver>(that "yes" was a response to "much of guile is written in C")
<tromey>ok
<mark_weaver>well, there's C code at the top-level of the program, of course, and there's also a lot of C code at the lowest levels of the call graph.
<mark_weaver>but we try to avoid it in the middle.
<mark_weaver>because if it's in the middle, then it interferes with being able to invoke continuations multiple times, etc.
<mark_weaver>for example, 'map' used to be implemented in C, which meant that you couldn't do a non-local exit from a 'map' and then reenter. now it's in Scheme, so it works properly w.r.t. continuations.
<mark_weaver>that's my understanding anyway. this is more wingo's area though.
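
A sketch of what mark_weaver describes (assuming a Guile where 'map' is written in Scheme, as he says above; the procedure name and values are made up): capture a continuation in the middle of a map, let the map finish, then re-enter it from outside.

  (define (map-reentry-demo)
    (let ((saved #f)
          (passes 0))
      (let ((result (map (lambda (x)
                           (when (= x 2)
                             (call/cc (lambda (k) (set! saved k))))
                           (* x 10))
                         '(1 2 3))))
        (set! passes (+ passes 1))
        (if (< passes 2)
            (saved #f)            ; jump back into the map's extent
            (list passes result)))))

  (map-reentry-demo)  ;; => (2 (10 20 30)): the tail of the map ran twice
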
<tromey>there's this stuff in the manual about it
<tromey>(guile) Dynamic Wind
<tromey>is that up-to-date?
*mark_weaver looks
<mark_weaver>ah, yes, the rewinding stuff.
<mark_weaver>I guess this appears to contradict some of what I believed here. I don't know how it works, to be honest.
<mark_weaver>wingo: can you help educate us?
<tromey>it's not a big deal, I'm just curious about it and wondering how far off the gcc jit thing really is
<tromey>it seems doable to me
<tromey>as a proof of concept anyhow
<kurohin>Hello all
<mark_weaver>hi kurohin
*wingo back
<mark_weaver>wingo: what constraints are placed on continuations in the presence of C code in the middle of the call stack?
<mark_weaver>is this documented somewhere?
<ArneBab>mark_weaver: ok - thank you again for the explanations!
<wingo>(guile)Dynamic Wind is up to date
<ArneBab>mark_weaver: by the way: I like it a lot, that guile is more and more written in scheme itself. That’s a sign of a healthy community, I think.
<wingo>mark_weaver: the c++-style RAII pattern isn't quite valid, because you need to allow for (or prohibit) rewinds
<wingo>fwiw i think rewinding through c code is less interesting now than it was in the past
<wingo>given that guile code is run by a vm, not a recursive C function
<wingo>and we are favoring delimited continuations over full continuations
<wingo>and you can't reinstate a delimited continuation that contains a trip through C
<wingo>so i guess we could imagine a future in which guile would not rewind through C -- it's a neat feature but most people forget about it and end up with incorrect code
<tromey>the context is really about lowering bytecode to the gcc jit and whether this is feasible without ongoing gcc changes
<mark_weaver>if it's not safe to rewind through C stack frames (I assume it's not), then why does 'scm_dynamic_wind' support rewinding?
<wingo>i guess for c++ you could have some raii-style type or mixin that would register some "modify this state in this dynamic extent" handler with the scm_dynwind interface
<wingo>mark_weaver: it is safe to do so -- that's how guile's continuations have always worked
<wingo>well.
<wingo>it has always been considered / assumed / treated as safe
<wingo>who knows what the language standard has to say about it ;)
<mark_weaver>hmm
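
At the Scheme level, the rewinding behavior under discussion looks like this (a sketch, not from the log): dynamic-wind's before-thunk runs again when a continuation jumps back into its extent, not only on the first entry.

  (define (wind-demo)
    (let ((trail '())
          (k-cell #f)
          (reentered? #f))
      (define (note x) (set! trail (cons x trail)))
      (dynamic-wind
        (lambda () (note 'in))
        (lambda ()
          (call/cc (lambda (k) (set! k-cell k)))
          (note 'body))
        (lambda () (note 'out)))
      ;; Jump back into the dynamic extent exactly once.
      (unless reentered?
        (set! reentered? #t)
        (k-cell #f))
      (reverse trail)))

  (wind-demo)  ;; => (in body out in body out): the guards ran twice
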
<tromey>I wonder how thread-locals are handled in continuations
<tromey>like if you capture a continuation and call it in a different thread
<wingo>tromey: it is an interesting question :)
<tromey>it seems the saved %gs will point to the wrong thread
<wingo>basically continuations can't be called in other threads -- not currently
<wingo>there are scheme-level thread-local dynamic bindings
<kurohin>Is there any one who is working on a dbus module?
<wingo>which bindings get captured where is a topic that there are a couple of papers on
<wingo>tromey: the reason a continuation can't be instated in another thread is that a continuation captures an entire C stack (in addition to the VM stack)
<wingo>(perhaps we could change that at some point, now that we are doing more in the VM)
<wingo>but reinstating a continuation has to splat back the stack, at the address it was captured -- because we can't relocate the c stack
<tromey>yeah
<tromey>even more obvious than what I was thinking about
<tromey>typical
<wingo>delimited continuations on the other hand are only VM stack slices, so they can be called in other threads
<wingo>and we know how to relocate them
<wingo>(reinstating a delimited continuation splats it on the stack *in addition* to what is already there)
<wingo>for that reason they are sometimes called "composable" continuations because they compose with the continuation in which they are invoked
<wingo>unlike call/cc, which is more like a big global goto :)
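
A small sketch of the difference, using Guile's call-with-prompt and abort-to-prompt (the tag and values here are made up): the captured slice is an ordinary procedure that composes with whatever context it is called in.

  (define saved #f)

  (call-with-prompt 'demo-tag
    (lambda () (+ 1 (abort-to-prompt 'demo-tag)))
    (lambda (k) (set! saved k)))

  (saved 10)        ;; => 11
  (* 2 (saved 10))  ;; => 22: the slice composed with the (* 2 _) context
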
<mark_weaver>I really don't see how it's possible to rewind through a C stack frame with the proper semantics.
<mark_weaver>I guess it's being done by copying the stack from what it was before unwinding, but that's not quite right.
<wingo>mark_weaver: the state of a stack frame is given by its registers, stack slots, and dynamic invariants
<wingo>the first two are captured by copying the C stack and a jmpbuf for the registers at the top
<wingo>the last is maintained through a dynamic stack, which can be unwound on nonlocal exit -- or rewound on nonlocal re-entry
<wingo>it's pretty simple actually -- the only tricky bit is making sure you have enough stack to instate the new continuation
<mark_weaver>the thing is, mutable local variables in the middle of a stack frame are not supposed to have their values reset when a continuation is invoked.
<wingo>so there's a nasty recursive function that just consumes stack until the sp is beyond the hot end of the new stack
<mark_weaver>at least not in Scheme
<wingo>mark_weaver: in scheme, mutable local variables are on the heap
<wingo>continuations don't capture the heap
<mark_weaver>*nod*
<wingo>though there are some optimizations... if you can prove that the mutable local is not aliased, of course you don't need to heap-allocate it
<wingo>we don't do much of that right now but hopefully soon :)
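
A tiny illustration of that point (a sketch, not from the log): re-invoking a continuation does not reset a set! variable, because the box lives on the heap rather than in the captured stack.

  (define (count-entries)
    (let ((n 0))                      ; mutable local, boxed on the heap
      (let ((k (call/cc (lambda (k) k))))
        (set! n (+ n 1))
        (if (< n 3)
            (k k)                     ; re-enter; n keeps its new value
            n))))

  (count-entries)  ;; => 3: the body was entered three times
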
<mark_weaver>I guess I worry about correctness more than most people, but rewinding C stack frames sounds very dicey to me.
<wingo>yeah it's definitely a knife-juggling thing
<wingo>tromey: were you thinking of a method jit or a trace jit?
<wingo>or something else
<tromey>just wondering about hooking it up to the gcc jit
<wingo>would gccjit do optimizations, or would it be a simple per-bytecode thing
<tromey>I guess method jit
<wingo>?
<tromey>it does optimizations
<wingo>neat
<tromey>right now it is a proof of concept that makes a .so that is loaded
<wingo>one simple and really useful thing would be inline caches for arithmetic operations
<tromey>I'm not going to hack on it, just wondering what the difficulties are
<wingo>like the things that v8 does, for example
<wingo>cool
*wingo like speculation also :)
<wingo>tromey: you have seen inline caches, yes?
<wingo>v8 is also doing an interesting thing now
<wingo>they used to be all hand-implemented
<tromey>yeah, I think I have
<wingo>but now they are generated by their optimizing jit, so they can be inlined if they are hot
<wingo>so they are "visible" to the jit on the IR level
<wingo>gives +/- free type feedback too
<tromey>nice
<sbp>wingo: hey did you know about http://www.complang.tuwien.ac.at/anton/euroforth/ef06/shannon-bailey06.pdf and other similar work for optimising stack based VM performance by performing smarter register allocation? davexunit showed me your Guile register VM post, and I said it was full of stack FUD. I admit it!
<wingo>hehe :)
<wingo>no i have not!
*wingo takes a look
<sbp>mark_weaver has the popcorn open, he wants to see this showdown
<wingo>lucky for me i need to leave the office now :) but i will read it!
<sbp>hehe, okay. see ya!
*sbp steals some of mark_weaver's uneaten popcorn
<taylanub>wingo: Might I ask for a citation on "[It] runs faster than other "scripting language" implementations like Python (CPython) or Ruby (MRI)." re. Guile stack VM ? Not because I'm skeptical but because I've rarely seen any Guile 2.0 benchmarks at all, so I must've missed something. :)
<mark_weaver>sbp: for one thing, that paper is talking about _compiling_ from a stack machine into a lower-level register-based language.
<mark_weaver>but wingo's post is talking about interpreting the VM code.
<mark_weaver>when interpreting a VM, there's a high cost for each instruction dispatch.
<mark_weaver>in a stack VM, you typically have to emit extra instructions to copy named values to the top of the stack, before doing the actual operations on those values.
<mark_weaver>a register VM avoids the cost of dispatching those extra instructions.
<mark_weaver>I don't really see how that paper refutes anything that wingo wrote in his post.
<mark_weaver>(admittedly I've only skimmed the paper so far)
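
A hedged illustration of the dispatch cost mark_weaver describes; the instruction names are hypothetical, not Guile's actual opcodes, and show roughly how (+ a b) is staged on each kind of VM.

  ;; Stack VM: four dispatches to stage the operands, do the add, and
  ;; store the result.
  ;;   (local-ref 0)    ; push a
  ;;   (local-ref 1)    ; push b
  ;;   (add)            ; pop two, push the sum
  ;;   (local-set 2)    ; pop the sum into the result slot
  ;;
  ;; Register VM: the operands are named in the instruction itself,
  ;; so the same work costs a single dispatch.
  ;;   (add 2 0 1)      ; reg[2] := reg[0] + reg[1]
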
<sbp>yeah, but you have a register based VM now, and you said you want to have compilation to native soon. so what I mean is, this paper basically says that switching from a stack to a register model in the VM might not give you as much of a gain when doing later compilation to register-native as you thought
<sbp>anyway, I don't know if it matters particularly
<sbp>I mean most people only seem to prefer stack models because of conceptual simplicity anyway
<sbp>but if you're working at the VM level... heh
<stis>sbp: native might mean x86-64, we still need the VM for other architectures for some time.
<sbp>I just figured that if you're moving from stack to register in 2.0 to 2.2, most of the reason you're doing that is because you're having "PERFORMANCE!" neon lights spark up in your eyes, because LLVM and LuaJIT and Parrot and basically all modern VMs are register
<sbp>stis: yep?
<mark_weaver>also, although I seem to be of a minority opinion here, I personally don't think it makes sense to compile everything to native. I think it's better to compile only the hot code to native.
<mark_weaver>because VM code can be much more compact.
<davexunit>mark_weaver: what is "hot code"? code that is used frequently?
*stis is dreaming of hardware accelerated VM ops
<sbp>"can be much more compact" — but I'll bet the new VM bytecode wasn't chosen for concision, was it?
<sbp>Kragen Sitaker did a great post about that here:
<sbp> http://lists.canonical.org/pipermail/kragen-tol/2007-September/000871.html
<mark_weaver>well, concision wasn't the only consideration. it was a tradeoff between concision and performance.
<mark_weaver>davexunit: yeah
<sbp>Squeak won by having tons of registers. you know, I'm not actually sure how many registers it has...
<stis>mark_weaver: I did some experimentation with native code generation for the rtl vm ops.
<stis>x86 is pretty compact and if you do it right the code can be smaller!
<mark_weaver>x86 is quite compact, yes. the problem is that primitive operations in Scheme do not translate to primitive x86 operations.
<mark_weaver>for example, Scheme numbers are not merely fixed-width integers or IEEE doubles.
<sbp>just fab some more SCHEME-79 chips
<sbp>at least one of them was made without errors. there might even be two floating about
<mark_weaver>unless you can prove the dynamic type of a scheme number, you have to first do dispatch on its type tags.
<mark_weaver>and that means that you end up with a bunch of x86 code for even simple things like incrementing a number.
<stis>mark_weaver: the trick is to just inline native code for small operations, the rest is basically VM ops as you say.
<mark_weaver>whereas in the VM it's a primitive operation, and reasonably compact code.
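
A rough Scheme-level sketch of the dispatch mark_weaver means (hypothetical names; this is not libguile code): the single VM "increment" operation turns into a type check with a fast fixnum path and a slow generic path when compiled to native code.

  ;; Stand-in for the runtime's generic addition; defined only so the sketch runs.
  (define (slow-add a b) (+ a b))

  (define (add1 x)
    (if (and (integer? x) (exact? x)
             (< x most-positive-fixnum))
        (+ x 1)          ; fast path a JIT could inline: untag, add, retag
        (slow-add x 1))) ; bignum, flonum, ratio, or overflow: call the runtime
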
<sbp>closures are usually a problem for performance compared to native
<stis>mark_weaver: incrementing numbers in a loop is smaller with x86.
<add^_>Bah hrm
<sbp>though LuaJIT, GHC, and SBCL all do closures pretty fast
<sbp>PEBTAP (problem exists between theory and practice)
<sbp>you know how you can do nanopass compilation where you have literally dozens of IRs? I thought it'd be interesting to do that to a lisp where every level is a recognisable lisp all the way down. one of the lowest levels would look something like Hylas-Lisp, perhaps
<sbp>Steele and Sussman sort of tried to do something like that, I feel, in SIMPLE (a predecessor of SCHEME-79)
<sbp>though they only did one transformation
<sbp>the difference to other compilation schemes, even those used in the early lisp machines, was that they tried to retain an idiomatic lisp-like quality to the compiled code, as much as possible. what I mean is that you'd try to retain that through each nanopass
<mark_weaver>sbp: have you seen the paper "compilation by transformation" ?
*sbp looks
<mark_weaver>by Richard Kelsey and Paul Hudak.
<mark_weaver>"Realistic Compilation by Program Transformation" is the full title
<sbp>there's one by Peyton Jones et al. of the same name
<sbp>search search search... this one: http://haskell.cs.yale.edu/wp-content/uploads/2011/03/CompByTrans-POPL89.pdf
<mark_weaver> http://haskell.cs.yale.edu/wp-content/uploads/2011/03/CompByTrans-POPL89.pdf
<mark_weaver>right :)
<sbp>oh yes, I have. dpk showed me this
<sbp>funny, I never stored their IRs away as idiomatic lisp. but I guess that does count, yes; interesting perspective
<sbp>(stored away in that magical cabin of the mind)
<sbp>oh, I remember now, he found it from Paul Graham's essay on ORBIT or something
<sbp>it's funny because PG says this:
<sbp>"His approach was simply to keep transforming the program from one simple, CPS, lambda language to an even simpler one, until the language was so simple it only had 16 variables... r1 through r15"
<sbp>but there isn't actually anything resembling that in the paper
<sbp>I wonder if there was another paper I'm missing
<ijp>you mean the shivers essay
<sbp>oh yeah, it was Shivers not PG
<sbp>thanks
<stis>Hmm swi-prolog can handle rational prolog datastructures well, cool!
<stis>hmm assuming that guile will end up controlling emacs.
<stis>how will other languages defined on guile be able to control emacs?
<ArneBab>stis: since AFAIK they can call elisp functions, they should be able to do everything.
<stis>that would be awesome!
<ArneBab>stis: but first guile-emacs has to be finished and merged, so that the default emacs uses guile as its driver.
<stis>but then emacs can be controlled by scheme kanren prolog etc, that can enable some nice extensions to emacs!
<ArneBab>yes
<ArneBab>stis: but someone has to do all the hard polishing work for that
<stis>do we have the manpower to do that?
<stis>certainly there are smart people here, but time is always scarce
<stis>I'm trying to help with getting prolog working on guile, so that part should be in within a couple of months
<ArneBab>stis: sadly the current state is from before last GSoC: http://www.emacswiki.org/emacs/GuileEmacs#toc3
<ArneBab>here’s the todo: http://www.emacswiki.org/emacs/GuileEmacsTodo
<stis>ArneBab: thx for the info, got to go, tty
<ArneBab>last change in the guile-emacs repo: 2013-08 http://git.hcoop.net/?p=bpt/emacs.git;a=summary
<ArneBab>ht
<ArneBab>np
<ArneBab>good night!
<civodul>bipt: you might want to comment ↑ :-)
<ArneBab>yes, because I only quoted from emacswiki
<ArneBab>thanks civodul!