IRC channel logs



<daviid>mark_weaver: so i went ahead, created a symlink in my /opt2/bin and compiled and installed guile-cairo and guile-gnome; both ran 'make' fine, both failed 'make check', then kise [which runs fine with the latest stable] fails with quite a lot of problems, which i first have to investigate of course, but 2 things surprised me in 2.2 [not depending on gnome], let me ask
<daviid>is #; not accepted in 2.2 [to comment s-expr] ?
<daviid>no, #; is fine, that was a typo. now the real quiz, let me paste an example
<daviid>this module will work in both stable and master, but uncommenting lines 9 and 21 and the s-expr at line 32 raises a bug in 2.2 only: "... ERROR: Unbound variable: *bluefox*"
<daviid>have to go afk for a little
<nalaginrut>morning guilers~
<zacts>hello, world
<DeeEff>hey guile, just saw
<DeeEff>what does the part about dynamically expandable stacks mean?
<DeeEff>specifically: 3> Where you would previously make a
<DeeEff>loop that collects its results in reverse order only to re-reverse them
<DeeEff>at the end, now you can just recurse without worrying about stack
<DeeEff>oh wow that paste was not supposed to split like that
<DeeEff>anyways, does this mean you don't need tail calls or something in order to recurse over large lists in guile 2.2?
<nalaginrut>DeeEff: even though there's an expandable stack, one should not rely on it for non-tail-call programs. But if you have to write certain algorithms (like trees) in a recursive way, you can do that; it's fine and fast. I've tried it several times. ;-)
*nalaginrut go for lunch
<DeeEff>I'm not sure I understand. Perhaps an example, if possible?
<please_help>I think he means that while expandable stack means that you can recurse "for a long time" [tm], even then it is not necessarily a good idea to rely on it for non-tail-call programs. Meanwhile, even without the expandable stack, you can arrange to have efficient algorithms (i.e. no need to collect in reverse + re-reverse)
<DeeEff>ah, ok, so it basically just mitigates the tail-call issue in cases where you want to save the time of reversing a whole list or tree or something
<mark_weaver>DeeEff: it's still a good idea to use tail calls when you can do the job in constant space.
<mark_weaver>however, it means that for things like 'map' where you are going to be building up a result list anyway, now you can do it on the stack without worries about the stack limit being smaller than the heap.
<mark_weaver>before those changes, we had a situation where you would have to specifically arrange to allocate your temporary storage on the heap instead of the stack in order to avoid the much smaller stack limitations.
<mark_weaver>and that resulted in some fairly gross code in some cases.
<DeeEff>I've written my fair share of cons -> if null reverse so I can imagine
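[Editor's note: a minimal sketch of the two styles of `map` discussed above — the pre-2.2 reverse-accumulate idiom versus plain recursion, which 2.2's dynamically expandable stack makes safe for large lists. Both definitions are illustrative, not from the log.]

```scheme
;; Pre-2.2 idiom: accumulate in reverse with a tail call (constant
;; stack), then re-reverse at the end.
(define (map-acc f lst)
  (let loop ((lst lst) (acc '()))
    (if (null? lst)
        (reverse acc)
        (loop (cdr lst) (cons (f (car lst)) acc)))))

;; Plain recursion: the conses happen on the way back up the stack.
;; With 2.2's expandable stack this is fine even for large lists; on
;; 2.0 a long enough list can hit the fixed stack limit.
(define (map-rec f lst)
  (if (null? lst)
      '()
      (cons (f (car lst)) (map-rec f (cdr lst)))))

(map-acc 1+ '(1 2 3))  ; => (2 3 4)
(map-rec 1+ '(1 2 3))  ; => (2 3 4)
```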
<DeeEff>by the way, is eli / anyone still working on getting guile to run on windows / in cygwin?
<please_help>The secret is CPS
<DeeEff>I only ask because I've been super hyped for guile 2.2 and that's one of the limiting factors preventing me from using it in my research
<mark_weaver>DeeEff: I'm afraid that's been partly blocked by me being picky about some of the things he did in his proposed patches, and him pushing back somewhat, and things kind of stalled.
<mark_weaver>and I've been busy with other things
<mark_weaver>you can find threads on guile-devel
<DeeEff>okay, will do. thanks for being honest about it though
<mark_weaver>heh :)
<nalaginrut>mark_weaver: thanks for elaborating it ;-)
***sigmundv_ is now known as sigmundv
<wingo>i got some function names without running anything on the inferior; i'm declaring victory ;)
<stis>happy pottering guilers
<please_help>Looks like I can successfully do ML in guile now, at last
<please_help>only caveat: even using gsl for all matrix ops, it's slow as balls
<please_help>about 80x slower than the python reference code
<taylanub>please_help: what does ML mean here?
<please_help>machine learning
<taylanub>please_help: by the way when are you going to change your nick? :P
<please_help>some day ;)
<lloda>please_help: have you profiled?
<wingo>i would think that guile 2.0 would be faster than python, and 2.2 significantly faster; perhaps you are using the interpreter instead of the compiler? or needlessly consing a lot?
<wingo>80x suggests interpreter
<please_help>I have yet to profile, I'm at least glad it works at last
<please_help>is the invocation different from guile <the_file> for the compiler?
<wingo>no, that should automatically compile the file
<wingo>i assume you have guile 2.0 and not 1.8 ?
<please_help>then I'm using the compiler
<please_help>yes, 2.0.9 to be exact
<wingo>then if you go to the repl by just typing "guile"
<wingo>then ,profile (load "my-file.scm")
<wingo>then that should give you profiling info
<wingo>probably we should add a --profile command line arg
<please_help>what is ,profile supposed to output exactly?
<please_help>looks like my top resource hogs are my loop, array-copy!, my (increment), and my sigmoid in that order
<please_help>I think the loop is part of increment, too
<please_help>Here's the code for increment:
<please_help>the sigmoid is an srfi-42 eager comprehension across an array that does (/ (1+ (- (exp x)))), I don't think it can be sped up without writing C code for it
<please_help>finally, the array-copy! I'd definitely like to remove, but they're necessary to broadcast vectors (e.g. Ax + b -> b needs to become the same shape as Ax)
<daviid>wingo: is the following expected using guile 2.1 [?]
<wingo>daviid: i think that's expected; 2.2 will fix a number of issues with toplevel variables and one side effect is that definitions made in one compilation unit are tracked using psyntax and not by defining dummy variables in the module
<wingo>which causes 2.0 and 2.1 to behave differently here
<wingo>best to avoid this use of eval-when; use modules instead :)
<daviid>ok, tx
<daviid>though the second approach, the blueget :), is good coding, is it not? otherwise how would i read the user's [kise] config file at startup and blueget anytime anywhere in the rest of the app?
<mark_weaver>daviid: what is the purpose of those 'eval-when' uses? I'm lost. I've never seen scheme code like this before.
<mark_weaver>can you give me a simple overview of how this is supposed to work?
<lloda>please_help, you don't need to use array-copy! to broadcast, that can be done with make-shared-array
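[Editor's note: a sketch of lloda's suggestion — broadcasting a rank-1 vector to a rank-2 shape with `make-shared-array`, no copying. The mapping function turns each pair of new indices into a list of indices into the original vector; the values here are illustrative.]

```scheme
(define b #f64(1 2 3))

;; View b as a 4x3 array whose every row is b; no data is copied —
;; the shared view maps (i j) onto b's single index j.
(define b-rows (make-shared-array b (lambda (i j) (list j)) 4 3))

(array-ref b-rows 0 1)  ; => 2.0
(array-ref b-rows 3 2)  ; => 3.0
```

The caveat please_help raises below is real: such a view is not contiguous in memory, so it works as a broadcast *source* but not where a contiguous destination is required.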
<mark_weaver>please_help: that 'increment' procedure looks very expensive. if you are doing that on every iteration of a loop, of course that's going to be as slow as snails.
<mark_weaver>this entire approach of using lists of indices is extremely bad for performance
<mark_weaver>don't blame guile for this.
<lloda>yeah, that looks really bad
<please_help>lloda: no it can't, because I need contiguous-memory arrays in the result
<please_help>mark_weaver: yeah, I still expected it to be not-so-bad. I'm trying to find a better approach, but in the end I need a set of indices that can be consumed by array-ref
<lloda>ok, if you're using them as destination, that's true
<lloda>we went over this here, or I remember wrong
<please_help>we did the x/sum(row-of x), I think we skipped the indexing
<lloda>yeah, you shouldn't need to generate explicit indices just to iterate over an array
<lloda>is this x/sum an important part of your profile?
<please_help>it's actually exp(x)/sum(map exp over row-where-x-is)
<please_help>and it cannot be avoided under any circumstances
<lloda>did you see the CBLAS solution I posted for that?
<lloda>anyway, if you can, post where you're using this increment-idx and we can try to get rid of it, I would hope
<mark_weaver>please_help: if you want good performance, you can't have so much complexity happening in the inner loop. if you tried to do something similar in python it could suck rocks too.
<mark_weaver>this iteration needs to be done with a fundamentally different mechanism.
<mark_weaver>if the inner loop increment is doing much more than bumping a single counter, it's probably the wrong approach.
<daviid>mark_weaver: hello, i set up this small module with 2 eval-whens to be able to talk/ask wingo, and the second eval-when to show i had already found a solution. in 'real life' :) there is 1 eval-when of course [the second in the paste]
<mark_weaver>daviid: what is the purpose of the 'eval-when'? what are you trying to accomplish?
<mark_weaver>in other words, why are you trying to run code at macro-expansion time?
<mark_weaver>*why do you want to
<daviid>mark_weaver: i need to read a user config file at startup, and provide [to myself] a procedure that i can query any time anywhere in the app, the purpose of the eval-when is to read the user config file once only, but at every startup
<mark_weaver>macro expansion happens during compilation, which in general is before run-time, and maybe even in a different guile process.
<mark_weaver>daviid: if this code is in a module, then the top-level forms will be run exactly once at startup (when the module is first loaded).
<mark_weaver>there's no need for 'eval-when' in that case.
<mark_weaver>'eval-when' is the wrong tool for the job.
<mark_weaver>if we implemented phases in our module system (which we may well do at some point), then code run at compile-time would not even have access to run-time variables at all
<mark_weaver>in R6RS for example, the compilation environment is kept distinct from the run-time environment.
<mark_weaver>I should mention one caveat to my statement above "if this code is in a module, then the top-level forms will be run exactly once at startup (when the module is first loaded)."
<mark_weaver>if you explicitly ask to 'reload' a module, then the top-level forms will be run again.
<mark_weaver>the more robust tool to use here is 'define-once'
<mark_weaver>daviid: ^^
<daviid>mark_weaver: didn't know about define-once
<mark_weaver>note that simply loading a module in the normal way will not 'reload' it. you really need to go out of your way to reload a module.
<mark_weaver>where "the normal way" is 'use-modules'
<mark_weaver>daviid: I should mention also that the 'eval-when' will actually run that code more times than if there was no 'eval-when'.
<mark_weaver>if there's no eval-when, it will run the code every time the module is loaded/reloaded. with the 'eval-when' it will run it all of those times and also when the code is compiled.
<mark_weaver>so yeah, 'define-once' is probably the tool you want here.
<daviid>but my understanding of expand load eval is that [in the second part of the paste] config [the var in the let] will be bound to the result of executing (sys/read-config "kise"), so blueget will always 'work'. this said i understand and will try now the (let ...) at toplevel directly
<daviid>mark_weaver: to be precise, it will be bound 'in real life'
<daviid>i commented this line to simulate something everyone could run of course
<daviid>anyway thanks
<daviid>will try define-once
<mark_weaver>I would just do this: (define-once blueget (let ((config (sys/read-config "kise"))) (lambda (what) ...)))
<mark_weaver>daviid: ^^
<daviid>great, thanks
<mark_weaver>daviid: what is the 'reload-module' thing supposed to do?
<dsmith-work>Tuesday Greetings, Guilers
<mark_weaver>daviid: if my guess is correct, it should probably do something like (set! config (sys/read-config "kise"))
<mark_weaver>daviid: better yet, and staying closer to your original code: (define-once *bluefox* (sys/read-config "kise"))
<mark_weaver>and then keep 'redget' as it was.
<daviid>i think you read the wrong paste mark_weaver :) here is the one i was referring to: but to answer your question, this is so i can copy-paste [these are commented lines] in a repl so it recompiles and reloads a module i changed [all this is kind of old, before i was using geiser...]
<mark_weaver>daviid: I guess what I'm saying is that you can start with the code you have in the first part of that paste, and just replace the (define *bluefox* #f) and (eval-when ...) with this: (define-once (sys/read-config "kise"))
<mark_weaver>and leave everything else the same.
<mark_weaver>I meant to write: (define-once *bluefox* (sys/read-config "kise"))
<daviid>i got that thanks, will try now, but i liked the blueget approach because no global variables but a config local var visible to this blueget only...
<mark_weaver>okay, either way is fine, as you wish :)
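[Editor's note: mark_weaver's one-liner above, reformatted as a sketch. `sys/read-config` is from daviid's paste; the `assq-ref` dispatch is a hypothetical stand-in for the paste's body, assuming the config is an alist.]

```scheme
;; define-once only evaluates the initializer if the variable is not
;; already bound, so reloading the module won't re-read the config.
(define-once blueget
  (let ((config (sys/read-config "kise")))   ; read once, at first load
    (lambda (what)
      (assq-ref config what))))              ; hypothetical lookup
```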
<please_help>I get ERROR: In procedure bytevector->pointer: Wrong type argument in position 1 (expecting bytevector): #f32(1.0 1.0 1.0)
<please_help>but if I try from the interpreter, (bytevector->pointer #f32(1.0 1.0 1.0)) works
<mark_weaver>so if you type (bytevector->pointer #f32(1.0 1.0 1.0)) at the REPL, it generates that error?
<please_help>as I just said, no
<mark_weaver>the REPL compiles expressions by default before running them, so what you said doesn't imply that it was the interpreter.
<please_help>ah ok
<please_help>then s/interpreter/repl
*mark_weaver looks
<mark_weaver>please_help: was the array passed to 'bytevector->pointer' contiguous?
<mark_weaver>lloda, wingo: how many different representations of arrays that print as #f32(1.0 1.0 1.0) are there? I guess that some can be treated as bytevectors, and some cannot. what is the appropriate way to test/describe what subset of arrays can be treated as bytevectors?
<wingo>mark_weaver: i think there is just one representation that prints in that way
<wingo>i could be wrong though
<mark_weaver>it is confusing that you can get "ERROR: In procedure bytevector->pointer: Wrong type argument in position 1 (expecting bytevector): #f32(1.0 1.0 1.0)" and yet (bytevector->pointer #f32(1.0 1.0 1.0)) works.
<mark_weaver>wingo: what about an array generated by 'make-shared-array' ?
<wingo>hmm, that's interesting :)
<mark_weaver>I guess this is with 2.0.9
<mark_weaver>yes, it is.
<wingo>does the same thing happen in master?
<mark_weaver>hmm, that's a lot to ask of 'please_help' :-/
<mark_weaver>I was hoping we'd be able to answer that question more easily.
<wingo>was just wondering
<please_help>mark_weaver: yes, obtained via (array-contents arr)
<mark_weaver>please_help: can you come up with a minimal self-contained example to demonstrate the problem?
<mark_weaver>e.g. create an array 'ra' such that (bytevector->pointer (array-contents ra)) fails with that error message?
<mark_weaver>it doesn't help that most people here are using 2.0.11 and you're running 2.0.9, and there was some refactoring of the array code in between as I recall.
<mark_weaver>yeah, the array code changed quite a bit between 2.0.9 and 2.0.11.
<daviid>please_help: help [our maintainers] please :)
<lloda`>i'd say (eq? a (shared-array-root a)) may be a good test
<lloda`>the array interface tries hard (too hard, in my opinion) to return a bytevector/vector/etc instead of a proper array
<lloda`>we actually have a bug related to this (not all #type(x y z) objects are what they print as)
<lloda`>a possible solution would be to print non-bytevector/vector/etc as #1(...), I wouldn't mind this
<please_help>I tried using gsl extensively for the softmax but it took more than 7 times longer than the pure-guile version to complete
<wingo>lloda`: that would make sense to me, to print as arrays
<please_help>I tried to make a minimal example for the bytevector->pointer but failed
<please_help>I have to go now, I'll continue this when I'm back
<lloda`>Never use bytevector->pointer on an arbitrary array; you should always do it on the shared-array-root. It is safe to use on a bytevector/vector/etc. You can still deal with a non-contiguous array by passing the strides to C separately. This is for the FFI.
<lloda`> On the C side there's the array_handle interface to deal with general arrays, too. There are code examples for doing all this on the web and I can give pointers.
<lloda`>wingo: I can do a patch for that
<lloda`>if there're no objections
<lloda`>ah, saw this other question now
<lloda`>(array-contents A) by itself won't return a contiguous array, only a 1D array where the elements are in the same order as if you unraveled A in row-major order
<lloda`>there is (array-contents A #t), which /will/ return a contiguous 1D array, but that will only be eq? to shared-array-root if the base of A is 0, which just isn't true in general
<lloda`>so in general, you have to use bytevector->pointer on the shared-array-root of A and add the shared-array-offset of A to get to the actual data
<lloda`>mark_weaver: here's an example
<lloda`>scheme@(guile-user)> (import (system foreign))
<lloda`>scheme@(guile-user)> (bytevector->pointer (array-contents (make-shared-array #f64(1 2 3 4) (lambda (i) (list (+ i 1))) 3)))
<lloda`>ERROR: In procedure bytevector->pointer:
<lloda`>ERROR: In procedure bytevector->pointer: Wrong type argument in position 1 (expecting bytevector): #f64(2.0 3.0 4.0)
<lloda`>you can see that (shared-array-root (array-contents ...)) is #f64(1 2 3 4)
<lloda`>for an example of how to pass a general array see
<lloda`>see the line
<lloda`> (bytevector->pointer (shared-array-root in) (* (shared-array-offset in) (sizeof double) 2))
<mark_weaver>lloda`: thanks
<lloda`>maybe that (passing arrays through the ffi) deserves a paragraph in the manual...
<wingo>lloda: i am thinking that if it prints as #f64(...) then bytevector->pointer should work on it
<wingo>and if bytevector->pointer doesn't work, then it should not print as #f64()
<lloda`>I don't disagree
<lloda`>how can I run only a specific test of Guile's test suite?
<mark_weaver>lloda`: you can run only a specific test file with, e.g., ./check-guile arrays.test
<mark_weaver>(from the top-level source dir)
<lloda`>ah, thanks
<mark_weaver>lloda`: so a contiguous array returned by (array-contents A #t) cannot necessarily be treated as a bytevector? (e.g. for passing to 'bytevector->pointer') ?
<mark_weaver>because if the only issue is that it's not at the beginning of the underlying bytevector, it's not clear to me that this makes it impossible to represent as a bytevector.
<mark_weaver>as I recall, there is a representation of bytevectors that can point to the middle of an allocated block, and it even holds a field to the beginning of the block for GC purposes.
<mark_weaver>so, I would guess that we ought to be able to ensure that an array returned by (array-contents A #t) can be represented as a bytevector.
<mark_weaver>wingo: ^^ (is this right?)
<mark_weaver>(even if the array returned by (array-contents A #t) is not at the beginning of the original underlying bytevector)
<mark_weaver>and I would think that we should do that, rather than asking users to muck around with 'shared-array-root' and 'shared-array-offset'
<mark_weaver>oh, wingo is no longer here :-(
<lloda`>I disagree
<lloda`>the user must learn to muck with shared-array-root and shared-array-offset or stop using arrays
<lloda`>because that's the only way to make sure that you'll be able to do a straight (bytevector->pointer etc) anyway
<mark_weaver>lloda`: well, in 'master', it is possible to make a true bytevector from any contiguous range of memory, even if it's not at the start of an allocated block. bytevectors have separate 'contents' and 'parent' pointers. See 'scm_c_take_gc_bytevector' in bytevectors.c in master.
<mark_weaver>so, I think this means it would be possible to ensure that an array returned by (array-contents A #t) can always be passed to 'bytevector->pointer', unless I'm missing something.
<lloda`>it seems possible to me, too, but I still don't want to do it
<lloda`>it means that the code using array-contents in this way doesn't accept general arrays, but particular types of arrays
<lloda`>it's their work to check, and bytevector->pointer has an offset argument for exactly this purpose
<lloda`>also, this doesn't apply only to bytevectors, but also to vectors, bitvectors. There's no scm_c_take_vector.
<mark_weaver>returning a true bytevector whenever possible would simplify user code in many cases. so there's an advantage to doing that. what's the disadvantage?
<mark_weaver>if there's a non-trivial disadvantage to doing it, then maybe I would agree with you.
<mark_weaver>regarding vectors and bitvectors: in that case they couldn't use bytevector->pointer anyway.
<lloda`>needing to check and call scm_c_take_gc_bytevector when the user won't care is a disadvantage
<ArneBab>the Gentoo folks might need Scheme support for guile-2.0.11: (effort to port all reverse dependencies to guile-2)
<mark_weaver>lloda`: so the disadvantage is that it costs us a few more lines of code in guile?
<mark_weaver>whereas on the other side it's more code for all users who want to pass numeric arrays into C code?
<lloda`>for no benefit in the majority of cases, yes
<lloda`>in general, the user doesn't even know if array-contents is going to return #f
<lloda`>they still need to check!
<lloda`>and when they don't care, we've done the work for nothing
<mark_weaver>yes, they need to check that. that can't be avoided, it's true.
<lloda`>and it doesn't work for vectors, which it would be natural to expect if it works for bytevectors.
<mark_weaver>however, if that check passes, we can simplify the rest.
<mark_weaver>and also, presumably some code has the property that the array will always be contiguous.
<lloda`>that only matters when the user is going to call bytevector->pointer on the result. and it only saves a shared-array-offset call for them.
<davexunit>is there a guile module out there for minikanren?
<lloda`>if that's the case, they only need a shared-array-root call to be safe.
<mark_weaver>davexunit: yes, ijp ported it
<davexunit>of course he did. should've known. :)
<davexunit>minikanren seems to have gotten more popular lately due to its inclusion in Clojure as the 'core.logic' library.
<davexunit>thanks mark_weaver
<mark_weaver>it might be in his guildhall repo also
<mark_weaver>sneek: guildhall?
<sneek>guildhall is
<davexunit>this needs a guix package now. :)
<mark_weaver>lloda`: passing numeric arrays into C is a fairly common case, I would think.
<mark_weaver>well, whatever, I don't have time to continue this, and there's a limit to how much I care.
<mark_weaver>but I for one am pretty embarrassed at the complexity of the suggestions we have to 'please_help' for passing numeric arrays into C.
<davexunit>I call bytevector->pointer a lot in Sly
<davexunit>and I've recently discovered that it accounts for a significant portion of the time spent in the main loop
<davexunit>I guess that's not really relevant to the above discussion, so ignore me.
<davexunit>I thought there was a concern about the performance of passing array contents to C functions.
<lloda`>mark_weaver: but you want to do extra work without knowing what the object you return is going to be used for
<lloda`>arrays do have offset, increments, etc. that's how they work. people using arrays need to deal with that.
<lloda`>you can always use compact srfi-4 vectors and ignore the array interface
<mark_weaver>is it extra work, in terms of run-time cost?
<mark_weaver>if it is measurably slower, for example, I would agree with you.
<mark_weaver>s/would/might/ :)
<lloda`>my problem is that you don't know if the user needs it or not
<mark_weaver>lloda`: they have to deal with those things if they need to work with general arrays.
<mark_weaver>however, if they need to pass numeric arrays to C code, then they are already restricted in the set of arrays they work with.
<lloda`>arrays are general arrays
<lloda`>they can use srfi-4 vectors and never have these issues
<lloda`>if you call them issues, anyway
<mark_weaver>so your position is that if please_help wants to pass these things to C, he must avoid the array interface altogether?
<lloda`>no, my suggestion is that he handles the offset properly
<lloda`>and it is not true that if you pass numeric arrays to C code, you are already restricted.
<lloda`>that depends on the C code.
<lloda`>see my FFTW wrapper for an example
<mark_weaver>okay, I'm going to table this for now. thanks for discussing.
<lloda`>I've sent the #1(...) patch to the lists
<lloda`>I gave wrong advice to please_help by even mentioning array-contents in the first place, and I regret that
<mark_weaver>lloda`: thanks
<please_help>is there a way to get the sizeof of a type returned by array-type? Say, (array-type->c-type 'f64) ;=> double so that I could (sizeof (array-type->c-type (array-type arr)))
<please_help>lloda`: it seems to me that using vectors would be an unimaginable pain in the ass compared to arrays, especially since they don't accept symbols for types and instead have type-specific functions
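[Editor's note: there is no built-in array-type->c-type; a hypothetical table-based version of the helper please_help asks for above, covering the common srfi-4 tags returned by array-type.]

```scheme
(use-modules (system foreign))

;; Map an array-type tag to the corresponding (system foreign) type
;; descriptor, so sizeof works on it.  Hypothetical; sketch only.
(define (array-type->c-type tag)
  (case tag
    ((f32) float)  ((f64) double)
    ((s8)  int8)   ((u8)  uint8)
    ((s16) int16)  ((u16) uint16)
    ((s32) int32)  ((u32) uint32)
    ((s64) int64)  ((u64) uint64)
    (else (error "unsupported array type" tag))))

(sizeof (array-type->c-type (array-type #f64(1 2 3))))  ; => 8
```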