<jmnoz>Hi, I'm having trouble with "syntax error: definition in expression context" when trying to use guile via Emacs (org-babel) when evaluating this code: (define (add x y) (+ x y)). What am I missing?
<ijp>$2 says it's org-babel's fault
<ijp>do you have an example file?
<mark_weaver>jmnoz: it looks like org-babel is not really set up to allow you to make definitions.
<mark_weaver>because it's wrapping your expression within (display ...)
<ijp>because I'm pretty sure I've done it before
<mark_weaver>one solution is to change org-babel so that it wraps with something like (display (eval 'EXPR (current-module))) instead
<jmnoz>(my cut and paste is broken. (briefly) considering retiring from the field of computation altogether.)
<jmnoz>(that'll show 'em! (shakes fist in air))
<mark_weaver>man, I really need to rewrite our pretty printer one of these days.
<ijp>you can change :results from value to output
<ijp>with the default ":results value" it wants to print the last expression
<ijp>which, since definitions aren't expressions, is problematic
<ijp>but if you do that, you'll need to explicitly output the last value yourself (if you want that)
<mark_weaver>the (display (eval 'EXPR (current-module))) thing might be more convenient, simply because it allows both definitions and automatic output.
<ijp>and this advice turns out to be wrong in 5, 4, 3....
<jmnoz>I'm thinking unless this has been fixed in recent versions of org-babel, maybe this would warrant a list e-mail
<ijp>mark_weaver: do you fancy fixing that doc issue mentioned earlier?
<ijp>about append! and reverse!
<mark_weaver>I fixed the doc for 'append!'. Are the 'reverse!' docs also misleading?
<mark_weaver>now we say "@code{append!} is permitted, but not required, to modify the given lists to form its return." and ditto for reverse!.
<mark_weaver>ijp: btw, do you have any thoughts on my recent proposal (and preliminary patch) to improve support for R6RS exceptions in Guile?
<ijp>I'm going to be honest, I haven't read the ML in a few weeks
<mark_weaver>really, I'd prefer to modify Guile to use R6RS condition objects natively. our current system for reporting information about exceptions is totally ad-hoc and kind of a mess.
<ijp>I'm kind of tired, but I can look at it tomorrow
<mark_weaver>that's fine. no need to look at it at all, I just wanted to invite comments from folks who know R6RS better than I do.
<sneek>Welcome back nalaginrut, you have 1 message.
<sneek>nalaginrut, taylanub says: C pre-processor macros like #if and #ifdef change which code is compiled at all, so the produced executable doesn't have any of the code that was excluded, so one cannot switch between the options on-the-fly, or even by restarting. Only by recompiling.
<nalaginrut>sneek: later tell taylanub ah~ thanks, I realized it
<nalaginrut>do we have any log server (not an IRC log server)? I mean a log server written in Guile
<Chaos`Eternal>I have one question: could the procedure open-process be rewritten in Scheme instead of C?
<wingo>Chaos`Eternal: it really can't; it has to only call async-signal-safe functions between the fork and the exec, and we can't guarantee that from scheme, i don't think
<wingo>run "man 7 signal" in your terminal
<wingo>that can help, but it's really tricky -- there are also vm hooks, and gc isn't async-signal-safe
<wingo>and whether a piece of code allocates or not isn't something that we can reason about currently
<Chaos`Eternal>well, do you have some code to demo the unsafety after a fork when multi-threaded?
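A minimal sketch of the workaround ijp describes near the top of this log for the opening org-babel question: switch the block's :results header from value to output and print the value you care about yourself. The header syntax is assumed from stock org-babel with scheme/geiser support; adjust for your setup.

    #+begin_src scheme :results output
      (define (add x y) (+ x y))
      ;; with :results output, nothing is wrapped in (display ...),
      ;; so print the final value explicitly
      (display (add 1 2))
    #+end_src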
<Chaos`Eternal>first, the child process won't harm the parent, since they are separate processes.
<wingo>it's mostly a problem in multi-threaded code
<Chaos`Eternal>second, the child process only has the context (let me say so) of the running thread which called fork
<wingo>yes, and all other threads are killed, even if they are in critical sections...
<wingo>there is a difference between "mostly works" and "correct", and anything that does non-async-signal-safe calls after a multithreaded fork is certainly incorrect, as specified by posix
<wingo>search the mailing list archives for more, if you are interested
<nalaginrut>wingo: would you mind adding a GUILE_STOP_ANNOYING env var? ;-)
***chaos__ is now known as Chaos`Eternal
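The warning being joked about here is the one primitive-fork prints when other threads are running. As wingo spells out just below, it can be silenced from Scheme by rebinding current-warning-port for the duration of the call. A minimal sketch, using %make-void-port since a make-null-port procedure may not exist under that name in your Guile; note this only hides the warning, it does not make forking a multi-threaded process any safer.

    (define (quiet-primitive-fork)
      ;; send warnings to a void port while forking
      (parameterize ((current-warning-port (%make-void-port "w")))
        (primitive-fork)))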
<Chaos`Eternal>what i am complaining about is that the warning caused by some specific procedures should have the ability to be suppressed
<taylanub>Maybe we could have a `pragma' special-form ? :P
<sneek>Welcome back taylanub, you have 1 message.
<sneek>taylanub, nalaginrut says: ah~ thanks, I realized it
<Chaos`Eternal>i am just requesting that the warning given by primitive-fork should have some way to be suppressed
<wingo>from scheme you can (parameterize ((current-warning-port (make-null-port))) (primitive-fork))
<Chaos`Eternal>but the problem is that i need to enumerate each case under which primitive-fork is safe.
<taylanub>Wouldn't that just suppress run-time warnings ?
<Chaos`Eternal>don't you guys think that there are some cases where primitive-fork is safe even in a multi-threaded environment?
<wingo>see that thread i posted: iconv locks, gc locks, etc...
<wingo>i used to be more optimistic but i am not any more
<wingo>if you ignore the problem entirely, you get occasional deadlocks
<wingo>so you have to deal with the problem
<wingo>but you can't, because you don't control all of the stack
<wingo>in that case why doesn't open-process work for you?
<Chaos`Eternal>the only thing is that open-process is too simple to handle my case
<wingo>you need to do more with file descriptors or something?
<wingo>we should add that capability somehow
<wingo>it is impossible to make fork safer :)
<nalaginrut>well, I partly agree it's safer compared with threads
<taylanub>Let me slip in this question while we're talking about safety: Can one rely on car, set-car!, variable-ref, variable-set!, and similar things that set singular Scheme values, to be atomic even during parallelism, or could they result in partial writes and thus junk values ? In my tests they seem to behave well, but maybe it's architecture-dependent ?
<wingo>taylanub: architecture-dependent
<nalaginrut>when someone mentions something may have side effects, 'safe' goes away from my mind ;-D
<wingo>mark_weaver probably knows more
<taylanub>OK, thanks. From my understanding they compile to processor instructions that read and write single "words" of data, so I guess the question boils down to the behavior of those instructions ?
<taylanub>(Well, we don't do native compilation yet, but still ..)
<wingo>the implementations should be native instructions, but really we have no guarantee about anything without synchronization primitives
<taylanub>OK. My lockless thread-safety ideas are in vain then. :P
<Chaos`Eternal>inter-process communication introduces not only locks, but also memory copies
<wingo>multi-process communication can use shared memory, but that makes it look more like a multi-thread system
<nalaginrut>yes, but I didn't say there's only one green-thread queue
<Chaos`Eternal>multi-core means mutexes, or, i learned a modern word: transactional memory
<nalaginrut>you are talking about the shared-transactional-memory model, which is a totally different thing from Actors
<nalaginrut>every computable thing (on current machines) is mathematically equal, in principle
<Chaos`Eternal>The problem is, even if the mutation itself is atomic, your whole thing is not thread-safe.
<taylanub>I'm pretty sure that it would be safe if variable-ref/set! and car/set-car! are atomic, although I can't mathematically prove it. :P
<taylanub>It's also limited to one producer and one consumer, though.
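Stepping back to the open-process exchange above: wingo's point is that the fork+exec pair should stay in C, which is what the (ice-9 popen) layer already does for you. A rough sketch of shelling out through it instead of calling primitive-fork from Scheme; the command here is just an example.

    (use-modules (ice-9 popen)
                 (ice-9 rdelim))

    ;; fork+exec happens in C, behind open-input-pipe
    (let ((port (open-input-pipe "ls -l")))
      (let loop ((line (read-line port)))
        (unless (eof-object? line)
          (display line)
          (newline)
          (loop (read-line port))))
      (close-pipe port))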
<Chaos`Eternal>let's say, when thread-a wants to append a new entry to list m, it must get the tail of m, tail-of-m-a, then mutate the pointer to the next entry of m
<Chaos`Eternal>at the same time, another thread, thread-b, also wants to append an entry to m, but at this time thread-a hasn't finished its work
<taylanub>(Of course I don't do any checks to guarantee that no two threads can call enq!.)
<Chaos`Eternal>only the combination of operations (set-next (get-tail-of-m) new-entry) would be atomic
<Chaos`Eternal>but naturally, the combination is not an atomic operation, it must be protected by mutexes
<taylanub>It must not be protected if there's only one thread doing it. :)
<taylanub>Two threads: one producer, one consumer.
<taylanub>One can only enqueue, the other only dequeue.
<taylanub>Dequeue doesn't touch the null object at the end; enqueue prepares its contents while it still looks like a null object, then at once (via set-cdr! in this implementation) lifts its conceptual "is null" property. So if set-cdr! and cdr were atomic wrt. each other, it would work.
<Chaos`Eternal>the variable-ref in these two procedures will get the same object, but one will delete the object while the other will append an object to the deleted object
<taylanub>No, enq! doesn't change the variable's value.
<taylanub>It follows the references in it to reach the null object; it can do this even if it gets an object that's then (immediately after) discarded by deq!, because that is done by setting the variable's reference to the same object which the first node was referencing.
<taylanub>No, the null object at the end cannot be deleted.
<taylanub>If it's the sole object in the chain, it means the queue is empty.
<taylanub>It only does that to nodes that aren't null.
<Chaos`Eternal>yeah, i'm talking about the case when your queue has exactly one element
<taylanub>In that case deq! never mutates anything.
<taylanub>Yes, that's the point of the "qnull" object.
<taylanub>It's not a singleton, it has instances that can be mutated into normal nodes.
<taylanub>After enq! sets its contents (car), then sets its cdr to a fresh null object, deq! will now return the contents and set the variable target to the new null object.
<taylanub>Nope, let me just test it .. should work on my platform, since my set-cdr! seems to be atomic
<Chaos`Eternal>ok, when there is one element in your q, the q would be this: {variableof (value . '())}
<taylanub>It's not '(), it's a "qnull" object. Sorry for the ambiguity; with "null object" I mean this special null object.
<Chaos`Eternal>then in the deq!, the car will get value, the cdr will get qnull
<taylanub>The variable's reference will be set to the qnull object that's there.
<taylanub>Yeah, it reaches the qnull, either directly through the variable (if variable-ref is executed after the variable-set! of the deq!), or through the pair (value . qnull) (if it's executed before), then prepares contents for it, then atomically (if set-cdr! is atomic) changes it so it will return #f for `qnull?' calls.
<Chaos`Eternal>after enq! finished, your q will be {variable (value . {variable (value2 . qnull)})}
<taylanub>No, only the queue itself is in a variable. It holds a qnull-terminated list.
<Chaos`Eternal>after enq! finished, your q will be {variable (value . (value2 . qnull))}
<taylanub>BTW I started two infinitely-looping threads, one always calling (q-enq! q 0), and the other (unless (or (q-empty? q) (= 0 (q-deq! q))) (display "panic!\n")), and it ran fine for 10-20 seconds, after which it ran out of memory.
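To make the object being argued about concrete, here is a reconstruction of the queue design taylanub is describing, pieced together from this discussion; the actual paste isn't in the log, so details like make-qnull and the qnull representation are guesses. enq! fills the tail sentinel in place and only "de-nulls" it with the final set-cdr!; deq! only moves the variable forward. It is meant for exactly one producer and one consumer, and as mark_weaver explains below, it is still not safe on real hardware without memory barriers.

    ;; Sketch of the discussed single-producer/single-consumer queue.
    ;; NOT safe in practice without synchronization.
    (define qnull-tag (list 'qnull))          ; unique sentinel used as the cdr

    (define (make-qnull) (cons #f qnull-tag)) ; a "qnull": contents unset, cdr = tag
    (define (qnull? node) (eq? (cdr node) qnull-tag))

    (define (make-q) (make-variable (make-qnull)))

    (define (q-empty? q) (qnull? (variable-ref q)))

    (define (q-enq! q val)
      ;; Walk to the current qnull, set its contents while it still "is null",
      ;; then lift its null-ness in one step via set-cdr! (the "denull" step).
      (let loop ((node (variable-ref q)))
        (if (qnull? node)
            (begin
              (set-car! node val)
              (set-cdr! node (make-qnull)))
            (loop (cdr node)))))

    (define (q-deq! q)
      ;; Advance the variable past the first node and return its contents.
      ;; Never touches the trailing qnull; caller must check q-empty? first.
      (let ((node (variable-ref q)))
        (variable-set! q (cdr node))
        (car node)))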
<taylanub>After two enq! calls, that'd be what it looks like, yes.
<taylanub>It'll get value1, set the variable's reference to the pair (value2 . qnull), then return value1.
<Chaos`Eternal>given the q has one element, the q is {variable (value1 . qnull)}
<Chaos`Eternal>run deq!: the value on line 74 is value1, and the rest on line 75 is qnull
<Chaos`Eternal>now enq! happens and finishes. the q will be {variable (value1 . (value2 . qnull))}
<Chaos`Eternal>then deq! finishes its work, sets q to rest, which is qnull
<taylanub><Chaos`Eternal> before enq! happens and after enq! finishes.
<taylanub>The state of the deq! call is, it's about to (variable-set! q rest) where rest holds qnull, right ?
<taylanub>Now enq! starts, it iterates through the linked list and finds the *same* qnull object to which the deq! state currently has a reference, and sets its car to a value and its cdr to a new qnull object.
<taylanub>Now deq! finishes, setting the variable to the qnull object that *isn't* a qnull anymore.
<Chaos`Eternal>deq starts before enq starts and finishes after enq finishes
<taylanub>After it's finished, it has mutated the qnull object to which the deq! thread was holding a reference.
<Chaos`Eternal>because the cdr on line 75 and the set! on line 76 are separate operations.
<taylanub>No, the qnull object's contents might be mutated, but it's still the same qnull object.
<taylanub>Well, after being mutated it's not a qnull object anymore, but it's the same object.
<Chaos`Eternal>line 76 is not changing the qnull, it's changing your variable
<taylanub>Yes, it sets the variable's reference to the object that was qnull when deq! started, but isn't qnull anymore after enq! ran.
<Chaos`Eternal>so you are saying that when the set in line 76 happens, the qnull is no longer qnull?
<taylanub>Yeah, that's the point, as explained in the comments by the way. The "denull" procedure, working on the qnull objects that have identity, does the magic.
<taylanub>(BTW instead of "objects that have identity" I should actually say "objects that have mutable content")
<taylanub>Chaos`Eternal: enq!'s variable-ref call will either give the old pair, or the new pair (or qnull), depending on when variable-set! interferes; in either case it will iterate through the linked list until it reaches the current qnull of the list.
<taylanub>So it's safe against intermediate deq! calls.
<Chaos`Eternal>BTW, you know, we are making jokes; neither of us can find the simple race conditions
<taylanub>Eh ? There are no race conditions, except inside cdr/set-cdr!, variable-ref/variable-set!
<taylanub>I need to go now, so please do it in your mind and ping me if you find anything.
<taylanub>Well it's you who's looking for bugs that aren't there. :P
<taylanub>Please just tell me if you find something wrong, I actually need to do some work now ..
<ArneBab>I read about the new compiler. Can I already test it? I have a simple performance test here and would like to see the difference…
<wingo>it will be ready in a few weeks i think
<ArneBab>another question: If I write a program in guile scheme, is there an easy way to package it as a binary for multiple platforms?
<wingo>if you need something like that in the short term you might be better off looking at another scheme, like racket or chicken
<wingo>we'd like to do that but it will be some time before we can get all the pieces together to do that
<mark_weaver>taylanub: I haven't fully read the monster discussion here, but I can tell you that on most modern platforms (including Intel), one thread sees writes from other threads in a possibly different order.
<mark_weaver>so clever tricks like writing words in a carefully designed order won't work at all.
<mark_weaver>for example, suppose one thread atomically sets a pointer to a new data structure. the other thread might see the new pointer before the memory it points to has been initialized.
<mark_weaver>basically, this has to do with all the tricks that processors do to be fast. they have things like write buffers that postpone writes for a while. and there's no guarantee what order things will be written in.
<mark_weaver>unless you use synchronization primitives, which are relatively expensive.. these do things like dump all the write buffers, or force all pending reads to be done before some primitive.
<mark_weaver>IMO, the "shared memory multiprocessing" model, where you have a bunch of processes mutating a single shared memory, is fundamentally slow. what you really want to be doing is explicit message passing. and essentially that's how modern computers work anyway. but they try to use message passing to present the *illusion* of a shared memory. but in practice, the illusion isn't very good, and it's very hard to write cod
<taylanub>So when I have the sequence (set-car! qnull value) (set-cdr! qnull (make-qnull)), the other thread may actually see the set-cdr!'s effect first, then access the car before the value is set ?
<taylanub>Wow, crazy, thanks for the heads up. I'll just stop attempts at such lock-free hackery.
<mark_weaver>the only way to guarantee that the reader sees the writes in the desired order is to use some kind of synchronization primitives.
<mark_weaver>the cheapest ones are the modern atomic memory barriers, as found in C11 et al.
<mark_weaver>but they are still quite expensive (on the order of tens of cycles I believe)
<mark_weaver>the cheaper ones are more difficult to reason about though. one must be very careful to prove the relevant "happens before" relationships.
<mark_weaver>basically, if each cache line is exclusively owned by one writer, then it can be possible to cleverly design protocols that cope with the fact that readers may see the writes out of order...
<mark_weaver>wingo: btw, I might be able to make some more progress on wip-cps-bis if you can spare a few minutes to give me some hints about generating the offset from the current IP to the code of the procedure, so that I can generate the 'make-closure' instruction.
<Chaos`Eternal>taylanub, still there? i am now coming to the point that your code really is thread-safe, given the condition that there is only one writer and one consumer
<wingo>so make-closure takes as one of its arguments an S32 or L32 or something
<wingo>that's a label, as a diff from the current IP
<wingo>so you just need to give it the label of the procedure
<wingo>and that label, in a hacky hack hack hack, happens to be the "self" of the procedure
<wingo>so "self" is used as a label and an identifier for the closure; nasty.
<wingo>yes, labels will work across a compilation unit
<wingo>which is usually a set of programs
<mark_weaver>the 'self' label is part of the 'begin-program' directive. that's followed by a label "kentry". does it matter which one?
<mark_weaver>okay, so they are guaranteed to not be moved apart? (or the linker will take care of it if that happens?)
<wingo>and the answer is i don't recall correctly
<wingo>"they" == multiple procedures compiled in one compilation unit?
<wingo>if that is the question then yes, they are all compiled into one ELF image and the labels are resolved for the whole image
<mark_weaver>one more thing: in the assembler I use the 'constant' helper procedure to get the slot index (for 'free-ref') as an immediate.. but the code generator is still emitting code to load that constant into a register. is there some mechanism handy to prevent that?
<taylanub>Chaos`Eternal: FYI http://sprunge.us/NVLM So while the code would be correct in theory if cdr/set-cdr! etc. behaved "intuitively", in practice set-cdr!/cdr possibly not being atomic is not even the only problem with my code, as mark_weaver explained.
<taylanub>(I hope it was a fun brain exercise at least.)
<wingo>mark_weaver: the constant will be loaded into a register if constant-needs-allocation? returns true
<wingo>probably you need to add another case or three to constant-needs-allocation?
*wingo will get to hack tonight
<Chaos`Eternal>but, i must say, (ice-9 q) is the same as your code. taylanub
<taylanub>Chaos`Eternal: It probably makes no claims about thread-safety.
<taylanub>If you want thread-safety with shared memory, you do need to synchronize yourself. Or just don't use shared memory.
<taylanub>libdispatch: "Tasks in GCD are lightweight to create and queue; Apple states that 15 instructions are required to queue up a work unit in GCD, while creating a traditional thread could easily require several hundred instructions." I wonder what they use.
<taylanub>Chaos`Eternal: My module is very broken in practice because parallel processors are crazy. :)
<taylanub>And in other words: don't ever expect thread-safety without explicitly using synchronization constructs in your code.
<taylanub>I'm just trusting mark_weaver's knowledge on that.
<taylanub>Sadly such issues are difficult to prove due to the inherently non-deterministic behavior.
<Chaos`Eternal>first, variable-set! is not necessary, you can use set-cdr! instead of it.
<taylanub>I wonder what davexunit ended up using for eir task-queue that needed to be dequeued efficiently from the main-loop thread.
<taylanub>Chaos`Eternal: It is not guaranteed to be atomic.
<taylanub>Chaos`Eternal: Did you read the paste I gave you ?
<taylanub>Chaos`Eternal: Just read it, it explains the issue.
<Chaos`Eternal>already read it, as i've said. he's just talking about general beliefs
<taylanub>There's also the problem that when a thread does (begin (set-car! c x) (set-cdr! c y)), the processor might actually decide to set the cdr first, then the car.
<taylanub>"Theoretically" (for some value of "theoretically"), the code is correct though, yes. Intuitively it seems correct, it just isn't in practice.
<taylanub>Chaos`Eternal: I have stuff to do so I'll leave it to you to understand the problem explained here. Just don't use my module in "real-world" code. :D
<taylanub>Though it all boils down to: when no explicit syncing is done, any amount of crazy optimizations are allowed that screw intuition.
<ArneBab>wingo: is it possible to compile guile into a single binary with dependencies statically compiled in? Adding a script to run by default + compiling into a single binary would be pretty close to a distributable binary.
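Given taylanub's conclusion here ("don't ever expect thread-safety without explicitly using synchronization constructs"), the boring but correct version of the earlier queue is just (ice-9 q) behind a mutex; a minimal sketch:

    (use-modules (ice-9 q)
                 (ice-9 threads))

    (define queue (make-q))
    (define queue-mutex (make-mutex))

    (define (safe-enq! x)
      (with-mutex queue-mutex
        (enq! queue x)))

    (define (safe-deq!)
      ;; returns #f when the queue is empty
      (with-mutex queue-mutex
        (and (not (q-empty? queue))
             (deq! queue))))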
<ArneBab>mark_weaver: your message at 15:51:45 (IMO, the "shared…) was truncated after “write cod”
<mark_weaver>ArneBab: that message ended with: but in practice, the illusion isn't very good, and it's very hard to write code this
<mark_weaver>the first link is best if you simply want to know what you need to write safe multithreaded code using (relatively) lightweight memory barriers.
<mark_weaver>the second link is better for explaining what's going on in the hardware to cause these problems.
*ArneBab fired up the printer. mark_weaver: thanks!
<ArneBab>(this is only for curiosity, though: I like reading up on deep stuff, but I’m currently not in a position where I could use it.)
<ArneBab>wingo: can you alert me when there is a way to test the new compiler code for guile with a simple numeric let-loop?
<mark_weaver>ArneBab: at the moment, it wouldn't quite be a fair test, because the stack VM has some assembly bits for fast fixnum arithmetic that aren't yet in the register VM.
<ArneBab>mark_weaver: how about a simple loop?
<stis>an evening moo to you all!
<stis>wingo: nice code you wrote ...
<wingo>it's nasty in many ways too :)
<stis>perhaps some macros could be of use though.
<stis>to help in the whitespace race ;-)
*stis does not grok it in depth
<wingo>yes some macros would be useful
<wingo>like a build-cps macro for example...
<stis>Yeah, match is ok; the only drawback with match is that $ is based on position
<stis>but (and (= acc-1 x) (= acc-2 y)) can of course be used
<wingo>and it's nice to have positional matchers some times
<wingo>samth was telling me things about the racket matcher, it seems nicer
<stis>I have it in guile if you download syntax-parse
<stis>I did it exercising syntax-parse.
<stis>heh, you should see the output from the macro expander of that code, 500k if I don't misremember
<stis>A good way to strip the syntax expression would be very helpful.
<stis>err, syntax objects I mean
<wingo>for serializing source or for compiled file size?
<stis>to reduce complexity, and yes, the final compiled file is the target.
<wingo>you would think that for compiled files the shared-substructure hacks would effectively compress all that
<stis>usually you can wait for the file to compile
<stis>perhaps, but 500k, what is causing that? The most obvious candidate was the syntax objects
<stis>anyway it is impossible to see if there are any other code explosions.
<stis>let me check the typical sizes of go files ...
<wingo>we will see how big the different sections are with the rtl things
<wingo>you will be able to readelf -a the compiled files
<ijp>sweden needs more imaginative cows
<wingo>i think in spanish they say buuuuuu
<stis>anyway, the main racket matcher is 526k
<stis>Typically what can cause explosions is or-expressions in the matcher.
<stis>ice-9 had one of those bugs
<wingo>i suppose compiling case statements appropriately would help
<stis>one needs to bind the lambda to a var and supply it, else the expansion grows exponentially
<wingo>that gets contified later of course
<stis>and peval would inline it if it's useful
<stis>anyhow, for syntax-parse I would
<stis>1. double check the or logic for any suspicious code
<stis>2. if not found, try to experiment with stripped syntax objects to see if it has any effect
<stis>Are there some hints on how to strip those objects the correct way?
<stis>somewhere? example code?
<wingo>i think mark_weaver does it in psyntax
<stis>;; strips syntax-objects down to top-wrap
<mark_weaver>wingo: I haven't done anything with wip-cps-bis since my last push, so feel free to hack on it :)
<mark_weaver>stis: the code I wrote for minimizing syntax objects for psyntax-pp.scm is in module/ice-9/compile-psyntax.scm
<mark_weaver>note: it is not safe to run that on arbitrary code, but it's safe for psyntax.scm
<wingo>currently adding a test suite
<wingo>wondering what to do about lone prims
<wingo>ideally at some point they should be rewritten as module-box + box-ref
<wingo>so that cse can potentially eliminate duplicate module-box instructions
<wingo>e.g. (compile 'cons #:to 'rtl)
<wingo>maybe i should add a reify-primitives pass
<wingo>that would turn primcalls to prims that don't have corresponding vm ops into normal calls
<mark_weaver>I took care of implementing branchable prims outside of test context.
<wingo>and also about $primcall to unknown prims
<mark_weaver>hmm. does it make sense to have primitives that don't have VM instructions?
<mark_weaver>I suppose it potentially allows the compiler to reason about them.
<mark_weaver>I guess it sounds sane. I haven't really thought about it much.
<wingo>yes, you would have them to allow the compiler to reason about them
<wingo>...conveniently. depending on what we do with modules, module variables could be reasoned about in a similar way i guess
<mark_weaver>as for the question of whether to break instructions into smaller pieces, e.g. to make the type checks explicit (so they might be optimized away)...
<mark_weaver>maybe the thing is to break them into small pieces for the optimizer, and then to coalesce them back into bigger pieces to reduce the number of dispatches? dunno.
<wingo>i doubt allocation benefits from this significantly though
<wingo>if we are thinking of the make-struct example
<wingo>given that the cost of allocation is mostly in gc, and more instructions are usually retired accessing a structure than allocating it
<mark_weaver>well, for 'make-struct' my main motivations were: (1) we could enforce immutable fields without making constructors an exception, and (2) to reduce the number of dispatches for this common operation.
<stis>thx mark_weaver: I will use it in a test to see if the stx objects may be the reason for the bloat!
<mark_weaver>stis: one thing though: that code makes assumptions not only about the code being "squeezed", but also about the internals of psyntax. if you use it in code outside of guile core, it will likely break in a future version of guile.
<stis>yeah, I will just issue a bug report if there is too much fluff, or a feature request?
<mark_weaver>wingo: I continue to think that a variable-length 'make-struct' would be a win. you said before that it's like having a little interpreter in the instruction, but that's an exaggeration. it's actually just a loop, and modern processors are very good at optimizing loops.. e.g. their branch predictors handle loops specially.
<mark_weaver>I gotta go afk for a bit.. back in about 20 minutes.
<mark_weaver>stis: I actually don't see any way to fix it, even in theory, without changing the API of psyntax. in particular, the semantics of datum->syntax.
<mark_weaver>In the general case, datum->syntax requires that information about the compile-time lexical scopes be preserved.
<stis>mark_weaver: is it about datum->syntax hooking syntax objects to names of past variables?
<stis>and if I don't care about those names, your function should be ok?
<mark_weaver>it's been a while since I thought carefully about this, but iirc, the main limitation is that 'datum->syntax' cannot be called on any of the syntax objects that are introduced by macros whose code has been "squeezed".
<stis>hmm, is this a good question for the scheme channel?
<mark_weaver>e.g. the definition of 'define' introduces a 'lambda' syntax object in the expanded expression.
<mark_weaver>well, they won't know anything about my "squeeze" code.
<stis>yeah, but psyntax and what one can do to minimize the overhead, and what cost that may have
<stis>Hi all, I have a question about psyntax,
<stis>in guile the syntax objects become really huge, and it would be nice to know what your experience is with getting them thinner
<mark_weaver>stis: the folks on #scheme might not know off-hand that guile uses psyntax. your question is really about psyntax.
<mark_weaver>the problem of large syntax objects is specific to psyntax.
<stis>ok, I made that explicit, let's wait and see. are there any logs of the scheme channel?
<mark_weaver>not that I know of, except for private logs that I keep.
<stis>oh well, I'll let the computer run over the night then :-)
<amk9>the last guile-www release is not installable
<mark_weaver>amk9: personally, I would use the new 'web' modules that come with Guile 2.
<amk9>it's missing something about mime-type
<shanecelis>Anyone know of anyone that uses macros to do code refactoring?
<amk9>for instance request-path-components doesn't exist in guile 2 web
<amk9>I'm doing some kind of static site generator
<amk9>I mean when you write the website, you need a simple http server to serve the static files
<amk9>yes I should use it but right now I don't, I generate html directly
<amk9>I'm more into getting the authoring syntax right
<ijp>serving a file is very simple, just return it as a string, or better yet, open the file, and return the port
<amk9>yes but why does the guy from the ML want to return the mime type ?
<ijp>why wouldn't you want to return a mime type?
<amk9>I'm not sure how browsers work, maybe it needs it... I'll try without it, thx
<ijp>the server may default to text/plain, but I'm not sure
<mark_weaver>the mime type is usually returned as part of the response headers, iirc.
<mark_weaver>right, there's supposed to be a "Content-Type: " header in the response from the server to the client.
<ijp>you are expected to return a response object, though we let you return the headers as a shorthand
<mark_weaver>when using the web stuff in Guile 2, you pass the response headers as an alist to 'build-response'.
<mark_weaver>e.g. #:headers '((content-type . (text/html (charset . "utf-8"))))
<amk9>text/plain is rendered as... plain text by the browser, but at least I got something :)
<amk9>I can use file --mime-type src/sfx/index.html -b to get the mime type
<amk9>btw, I'm using run-server so there is "just" a handler
<amk9>and it expects headers and body as bytevector
<amk9>so I can't return a port
<mark_weaver>amk9: I'd like to help, but I'm very confused by what you're saying.
<ijp>you can return a port, it just isn't documented IIRC
<mark_weaver>what do you mean "just" a handler, and what does that have to do with 'run-server'?
<ijp>actually it is documented
<mark_weaver>oh, nevermind. I see now that there is more than one 'run-server' in core guile.
<ijp>"The response and response body are run through sanitize-response, documented below. This allows the handler writer to take some convenient shortcuts: for example, instead of a <response>, the handler can simply return an alist of headers, in which case a default response object is constructed with those headers. Instead of a bytevector for the body, the handler can return a string, which will be serialized into an appropriate encoding;
<ijp>or it can return a procedure, which will be called on a port to write out the data. See the sanitize-response documentation, for more."
<ijp>except that says a procedure that's called with a port, hmm
*mark_weaver goes back to hacking the new compiler :)
<ijp>hmm, I was sure it allowed a port directly, but the code doesn't lie, at least not particularly well
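Tying the last exchange together, a small sketch of a (web server) handler using the shortcuts ijp quotes: the handler returns a headers alist plus a string body and lets sanitize-response build the <response>. The page content and port number are placeholders; an equivalent explicit build-response form (as mark_weaver describes) is shown as well.

    (use-modules (web server)
                 (web response))

    ;; shortcut form: headers alist + string body
    (define (handler request body)
      (values '((content-type . (text/html (charset . "utf-8"))))
              "<html><body><p>hello from guile</p></body></html>"))

    ;; equivalent, building the response explicitly
    (define (handler2 request body)
      (values (build-response
               #:headers '((content-type . (text/html (charset . "utf-8")))))
              "<html><body><p>hello from guile</p></body></html>"))

    ;; serve on http://localhost:8080/
    (run-server handler 'http '(#:port 8080))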