IRC channel logs



<mark_weaver>what is a "stacking event"?
<mark_weaver>capturing continuations involves making a copy of the stack.
<please_help>(let loop ((prev-cont (lambda (x) (display x)))) ... (loop (lambda (x) (prev-cont (magic x))) ...))) etc.
<mark_weaver>those aren't continuations
<mark_weaver>they are closures
<please_help>why is it called CPS if it doesn't involve continuations then?
<taylanub>please_help: does each such wrapping-closure call the previous one non-tail-recursively? if so, and if you iterate a lot, you might end up blowing the stack when you call the final result, I think. on the master branch the stack expands itself though so it would be fine there I guess
<taylanub>please_help: what you're doing seems unrelated, or maybe tangentially related, to CPS
<mark_weaver>I think the procedures created by those lambdas won't use stack space. they use heap space.
<mark_weaver>if the recursive call to 'loop' is in tail position, then it is a GOTO with arguments. you could do that all day, and the stack won't grow.
<mark_weaver>however, you'll allocate a new closure on every iteration.
<mark_weaver>the closure contains a pointer to the code and the values (or boxes) of all the free variables referenced from within the procedure body.
<mark_weaver>the values are copied for local variables that are never 'set!'. mutable local variables must be allocated on the heap, and a pointer is copied into the closure.
<mark_weaver>you can see the exact VM code using ,x from the REPL
<mark_weaver>please_help: basically, the example you gave will end up making a linked list of closures
<mark_weaver>with length equal to the number of iterations.
<mark_weaver>well, unless our optimizer is able to do something better with it, which is possible.
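A minimal sketch of the pattern under discussion (`magic` is a hypothetical stand-in, as in the original snippet): the call to `loop` is in tail position, so the loop itself runs in constant stack space, but each iteration heap-allocates a fresh closure that captures the previous one, yielding a chain of N closures after N iterations.

```scheme
(define (magic x) (+ x 1))          ; stand-in for the user's `magic'

(define (build-chain n)
  (let loop ((i 0)
             (prev-cont (lambda (x) x)))
    (if (= i n)
        prev-cont
        ;; tail call: a GOTO with arguments, no stack growth,
        ;; but a new closure is allocated each time around
        (loop (+ i 1)
              (lambda (x) (prev-cont (magic x)))))))

;; Calling the result walks the chain; each call to `prev-cont'
;; is also in tail position, so this too is iterative.
((build-chain 100) 0)    ; applies `magic' 100 times => 100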
<please_help>yes, that's what I'd like to know
<mark_weaver>,optimize <code> is a useful REPL command
<mark_weaver>it shows the result of our partial evaluator, anyway
<taylanub>I suppose `prev-cont' in the above example (ignoring the misleading name) could be inlined into the wrapping lambda at the end? that would reduce the stack usage of the resulting procedure?
<mark_weaver>and ,x <proc> for some existing procedure shows the VM code.
<taylanub>(its stack usage when it's called I mean)
<mark_weaver>please_help: well, I see now what you mean by continuations here. I was thinking about the loop itself, but the continuations come into play when you call 'prev-cont' at the end.
<mark_weaver>and the answer is: the resulting 'prev-cont' will be an iteration, not a recursion.
<mark_weaver>that's guaranteed by any scheme that deserves the name, because the call to the next 'prev-cont' is in tail position.
<mark_weaver>(it's guaranteed by all the recent scheme standards)
<mark_weaver>well, actually all the scheme standards from the beginning.
<mark_weaver>tail calls are not an "optimization" in scheme. semantically, a procedure call in tail position is a GOTO with arguments.
<mark_weaver>no caveats
<please_help>what does ,optimize <code> do?
<mark_weaver>so the stack space required is the stack space required by 'magic'.
<please_help>it seems to output whatever I feed in with no additional optimization
<mark_weaver>please_help: it applies most of our optimizations (including the partial evaluator) and then converts the result back into scheme code.
<mark_weaver>please_help: one issue is that in guile, almost all top-level variables/procedures are mutable bindings.
<please_help>so in my example, if the loop ended on the 100th iteration, I'd have 100 closures worth of stack space used up?
<mark_weaver>e.g. the optimizer can't assume that 'reduce' or 'fold' is bound to the well-known procedure of that name.
<mark_weaver>however, it can do a lot with locally-bound procedures.
<mark_weaver>please_help: 100 closures of *heap* space. not stack space.
<mark_weaver>basically it will produce a linked list, and when you call 'prev-cont' at the end it will iterate over that list.
<please_help>so it won't know how to unroll and reverse even in the case where every op is side-effect free?
<mark_weaver>well, I'm talking about the behavior in the absence of any optimizations.
<mark_weaver>in this case, I guess you want it to figure out that really you're just calling 'magic' N times?
<please_help>basically yes
<mark_weaver>well, I don't know if we have an optimization that can figure that out.
<mark_weaver>if you need that, I suggest keeping a counter instead of a chain of closures.
<mark_weaver>also, the existence of first-class continuations constrains the optimizer more than in many other languages. if 'magic' is not a "known" procedure (e.g. if 'magic' is a mutable variable), then 'magic' could return more than once.
<mark_weaver>i.e. magic could have captured its continuation and stashed it somewhere, and then it could be invoked later.
<mark_weaver>I'd have to think more on whether that fact makes your suggested optimization unsafe in the general case.
<mark_weaver>but anyway, I doubt we have anything that would deduce that a linked list of closures can be converted into a counter.
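The counter-based alternative suggested above, sketched out: instead of building a chain of N closures and walking it later, keep a count and apply `magic` (still a hypothetical stand-in) that many times directly.

```scheme
(define (magic x) (+ x 1))          ; hypothetical stand-in

(define (apply-n-times f n x)
  (let loop ((i 0) (acc x))
    (if (= i n)
        acc
        (loop (+ i 1) (f acc)))))   ; tail call, constant space

(apply-n-times magic 100 0)   ; same result, no closure chain => 100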
<please_help>How come (make-array *unspecified* x y) generates an array of int types?
<taylanub>please_help: it generates an array of #<unspecified> values here
<please_help>The array-type is int such that no non-integer sets will work
<taylanub>scheme@(guile-user)> (array-type (make-array *unspecified* 2))
<taylanub>$2 = #t
<please_help>oh, nevermind, I just messed up my call
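A short REPL sketch of the distinction that caused the confusion: an array created from a Scheme value such as `*unspecified*` is generic (its `array-type` is `#t`) and accepts any element, whereas a typed array made with `make-typed-array` restricts what `array-set!` will accept.

```scheme
(define generic (make-array *unspecified* 2 3))
(array-type generic)                 ; => #t, a generic array
(array-set! generic 'anything 0 0)   ; fine, any value allowed

(define typed (make-typed-array 'u32 0 2 3))
(array-type typed)                   ; => u32
;; (array-set! typed 'anything 0 0)  ; would raise a wrong-type error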
<please_help>if (array-contents arr) does not die, does this imply the array is stored contiguously in memory?
<please_help>if so, is there a way to reinterpret a memory-contiguous array? for example, jagged -> 2D or vice-versa
<mark_weaver>if it doesn't return #f, then yes (it doesn't "die" :)
<please_help>so jagged array can be contiguous in memory, that's nice to know.
<mark_weaver>well, array-contents gives you a flat array from a contiguous one, so that covers the 2D -> flat conversion.
<mark_weaver>I don't know about the other way.
<please_help>array-contents is not good enough if you have a jagged array
<mark_weaver>jagged ?
<please_help>(array-contents jagged-array) ;=> jagged-array
<mark_weaver>I don't know what you mean by "jagged array"
<please_help>meaning #(#(a b c) #(c d e))
<mark_weaver>you mean just multi-dimensional?
<mark_weaver>oh, sorry
<mark_weaver>an array of arrays.
<mark_weaver>an array of arrays is not contiguous
<mark_weaver>it's not even in the same memory block.
<mark_weaver>that's an array of pointers to 1D arrays.
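The contrast in code: a true 2D array is one contiguous block, so `array-contents` yields a flat rank-1 view of it, whereas a "jagged" vector of vectors is a rank-1 array whose elements are separate heap objects, so there is nothing for `array-contents` to flatten.

```scheme
(define flat-2d (make-array 0 2 3))
(array-contents flat-2d)    ; a flat rank-1 view of all 6 elements

(define jagged (vector (vector 'a 'b 'c) (vector 'c 'd 'e)))
;; `jagged' is rank 1 with vector elements; array-contents just
;; returns the rank-1 array itself, not a 2x3 view
(array-contents jagged)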
<lloda>you can make a 2D shared view of any 1D array, it doesn't matter if it's memory-contiguous or not. neither make-shared-array, nor transpose-array, nor array-ref care if the array is memory-contiguous or not. that's low level stuff that only matters if you pass the arrays out of Guile to a library that can't handle the strides.
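A sketch of lloda's point: `make-shared-array` can present a 1D array as a 2D view without copying, because the mapping procedure does the index arithmetic; the view shares storage with the original.

```scheme
(define flat (list->vector '(1 2 3 4 5 6)))

;; view `flat' as a 2x3 array: element (i,j) maps to index i*3+j
(define as-2d
  (make-shared-array flat
                     (lambda (i j) (list (+ (* i 3) j)))
                     2 3))

(array-ref as-2d 1 2)     ; => 6
(array-set! as-2d 99 0 0)
(vector-ref flat 0)       ; => 99, same storage underneath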
<please_help>I was trying to operate on subarray views and collect the results in an array
<please_help>the result was an array of array, while I wanted to return a 2D array (in this case)
<please_help>that is, I'm trying to add generators and ec to extend srfi-42
<lloda>you need to create the output array first and copy into the subarrays of the output then
<please_help>actually, using srfi-42 (nested ...) worked out well
<please_help>yes, the problem was that I didn't have any way to create the array a priori (I wanted to try guessing the array size via continuation chaining, and doing the array-set! in reverse order, but that didn't work out)
<please_help>my solution now requires users to specify the size of the output array
<please_help>in the case of non (nested ...) forms, though, it's still pretty icky
<lloda>yes, it's either that, or collect the array of arrays and copy that into a 2D array after
<lloda>in guile-ploy, functions can have an output-shape field that takes the shapes of the arguments and produces the shape of the output
<please_help>and "collecting" would add overhead I don't know I can spare
<lloda>yeah, if you can't collect, you have to predimension the output array
<please_help>or do it the C way (TM): realloc with 2x the size every so often
<please_help>then serve an array-view of the real allocation space
<lloda>sometimes you don't even know how big the output will be until the function has run, for example for find type functions
<lloda>you can't realloc from Guile though
<lloda>arrays are fixed size, you need a bigger one you throw away the old one
<please_help>you can just link to a module that provides the C realloc function
<lloda>... I guess
<lloda>but the base array (the one that gets bigger) has to be allocated from C
<lloda>there's some functions to transfer an allocation to Guile, take-something, I don't remember
<lloda>I do C++, so they're useless to me, they don't take a deleter, which I think they should
<please_help>in the C guile API there's a way to specify a (C-code) finalizer whenever you create an SCM from a pointer
<lloda>I was thinking about an actual array, not a smob. certainly, you can do whatever with a smob, but you also have to provide an interface yourself
<lloda>I'd rather avoid that
<lloda>but if you google old messages in the ml, you'll find some people that preferred to do that and not use Guile arrays
<please_help>I think it's with any void*
<please_help>in scm_from_pointer I think
<please_help>which means, of course, arrays too
<lloda>ah, I didn't know
<please_help>C Function: SCM scm_from_pointer (void *ptr, void (*finalizer) (void*))
<lloda>let me know if you make it work
<lloda>now really afk
<lloda>good luck
<please_help>if I really try to do the shape-free version, it's going to be in a farish future
<mark_weaver>please_help: I haven't read the whole backlog yet, but fwiw, the vector comprehensions that come with it include both 'vector-ec' and 'vector-of-length-ec', the latter of which can be implemented more efficiently.
<mark_weaver>but SRFI-42 seems very much geared to one-dimensional sequences. It's not obvious to me how to generalize it to multiple dimensions.
<please_help>is vector-ec implemented like (list->vector (list-ec ...)) ?
<please_help>it seems to generalize decently to multiple dimensions, though it really does look more single-dimension-focused
<lloda>by treating a n-D array as a 1-D array of (n-1)-D arrays
<lloda>that doesn't cover all uses but it would already be useful
<mark_weaver>please_help: okay, could be. I haven't thought about it much
<mark_weaver>'vector-of-length-ec' however is much more efficient. it allocates the vector first and then fills it.
<mark_weaver>you can see the code in <guile-source>/module/srfi/srfi-42/ec.scm
<please_help>for example (array-ec shape (nested (:array a (%:subarray arr 1)) (:array b a)) (/ b (array-sum a))) would give you the x/sum(row-of x) I was asking about the other day
<mark_weaver>(search in there for vector-ec and vector-of-length-ec)
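A small illustration of the difference being described (both forms are real SRFI-42 comprehensions): `vector-ec` has to collect results without knowing the final length in advance, while `vector-of-length-ec` allocates the vector up front and fills it in place, which is the cheaper path.

```scheme
(use-modules (srfi srfi-42))

(vector-ec (: i 5) (* i i))
;; => #(0 1 4 9 16)

;; same result, but the vector is allocated first and filled,
;; avoiding the intermediate collection step
(vector-of-length-ec 5 (: i 5) (* i i))
;; => #(0 1 4 9 16)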
<please_help>that is, once the generators are defined, it works well enough due to nested
<mark_weaver>ah, okay
<please_help>so you can take, say, a subtensor of matrix dimensions, and then iterate over the matrix elements, or over the rows, or whatever.
<please_help>that said, another useful generator is one that calculates an "intermediary" result
<please_help>say, (:sum-over var generator function sequence) where function can be user-engineered to take whatever data of whatever shape
<please_help>otherwise, in the sum example above, you'd have to re-calculate the sum at every point
<mark_weaver>please_help: do you know about :let
<mark_weaver>not sure if that would be appropriate here. my mind is mostly on other things now and I haven't taken the time to fully understand what you're trying to do.
<please_help>I did not know about :let
<please_help>seems to work great, thanks
<zacts>anyway, mark_weaver have you read Lisp in Small Pieces?
<zacts>or how did you learn to get involved with core guile hacking?
<zacts>other than SICP
***chaos_ is now known as Chaos`Eternal
<zacts>hello guilers
<zacts>Hey mark_weaver / davexunit I have a cool link
<zacts>^ I really like this