<ijp>alexei: well, suppose you want to give the pipe input from a string
<ijp>it isn't enough to parameterize the current-input-port with a string port
<ijp>since if the current-input-port is not file-backed, it will use /dev/null
<ijp>not give an exception, not work around it in one of a few different ways, *fail* in an unexpected manner
<ijp>now you can argue that you can just read/write from/to the pipe, right?
<ijp>not for stderr you can't
<ijp>as mentioned earlier, you cannot reliably set the environment the command is executed in
<ijp>you need to be careful with deadlocks with OPEN_BOTH (this is true of pipes in general)
<fangism>in release 2.0.9, we found an issue with build-aux/install-sh: the #! line is messed up, and was causing build failures for some users
<alexei>That's a long list. I recently came to use *named* pipes and realized that a naive "backup" will deadlock the server/client interaction by simply "looking" at those fifo files.
<ijp>alexei: maybe use environ
<ijp>hmm, but that takes a list of strings, rather than a list of pairs
<ijp>it would make the old one much easier though
<ijp>you can get rid of a little bit of boilerplate with define-syntax-rule, and a begin0 macro
<ijp>the former is in guile, the latter is not
<ijp>the important bit, really, is that ... works "appropriately"; you don't have to be so rigid with the templating
<alexei>Thanks! I have a long way to go. So you are using begin0? I didn't find it in guile and thought it was considered an "anti-pattern".
<ijp>alexei: I've had it in my utils module forever. it's not an every day thing, but it's common enough.
<ijp>I never mutate lists, but I will sometimes mutate a variable that points to a list
<ijp>(okay, there was that one time, a few years ago, when I wrote that graph code, and I did the measurements)
<davexunit>I'm mutating stuff all the time. hard to avoid when something's state changes over time.
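For readers following along: a begin0 macro of the kind ijp mentions can be written on top of define-syntax-rule (which is in core Guile) in a few lines. This is a sketch, not ijp's actual utils code:

```scheme
;; begin0: evaluate the first expression, run the rest for effect,
;; and return the first expression's value(s).
(define-syntax-rule (begin0 first rest ...)
  (call-with-values (lambda () first)
    (lambda vals
      rest ...
      (apply values vals))))

;; Typical use: read a line, then close the port, returning the line.
;; (begin0 (read-line port) (close-port port))
```

Using call-with-values keeps the macro correct even when the first expression returns multiple values.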
<ijp>.oO(I wonder if I still have that graph code)
<ijp>hah, I do, and it's so horrible
<ijp>on the other hand, it took like 3 minutes to go through a graph with over 5 million edges
<ijp>I think you offered me code for tarjan's, but I never ran it
<ijp>I should try running it on the rtl vm, see what speed increase I get
<mark_weaver>my tarjan code is very fast and scalable, but ugly as hell.
<ijp>hold on, I'll put my code online
<ijp>programmers of a nervous disposition should pretend that is a link to some kitten photo blog instead
<mark_weaver>my SCC code did a random graph with 1M vertices and 5M edges in 12.6 seconds.
<ijp>yes, but was it written by a crazy person at like 4am?
<ijp>I could write it better now, but I'm leaving it as a reminder
<mark_weaver>Now that's a real man's emacs startup file. Reading it makes me feel inadequate :)
<ijp>mark_weaver: earlier this year, it was twice the size
<adu>I should upload my emacs file
<ijp>hmm, I must be using your file wrong mark, because after two minutes I get an exception
<ijp>wrong type when running successors, it tries to use #f as an index
<mark_weaver>that suggests an edge that refers to a vertex that's not in vertices.
<ijp>well, it shouldn't happen, but I will need to double check
<adu>mark_weaver: is your emacs init more minimal than mine?
<ijp>I should probably also go through and figure out how much of the time is spent in IO...
<ijp>mark_weaver: well, I ran it on my computer, and, discounting IO, it seems to take around 40 seconds
<ijp>I think it would be nice if you packaged it up for use as a benchmark
<ijp>i.e. better the code you are ashamed of, than the code I am ashamed of
<ijp>without IO, mine is slower at about 120 seconds
***ohama_ is now known as ohama
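For context, a minimal Tarjan's strongly-connected-components algorithm in Guile might look like the sketch below. This is neither mark_weaver's nor ijp's code; the recursive `visit` would need to be made iterative (or the recursion limit raised) before it could survive a 5M-edge graph like the ones being benchmarked:

```scheme
;; graph: a hash table mapping each vertex to a list of successors.
;; Returns a list of SCCs, each a list of vertices.
(define (tarjan-sccs graph vertices)
  (define index    (make-hash-table))
  (define lowlink  (make-hash-table))
  (define on-stack (make-hash-table))
  (define stack '())
  (define counter 0)
  (define sccs '())
  (define (visit v)
    (hash-set! index v counter)
    (hash-set! lowlink v counter)
    (set! counter (1+ counter))
    (set! stack (cons v stack))
    (hash-set! on-stack v #t)
    (for-each
     (lambda (w)
       (cond ((not (hash-ref index w))        ; tree edge: recurse
              (visit w)
              (hash-set! lowlink v (min (hash-ref lowlink v)
                                        (hash-ref lowlink w))))
             ((hash-ref on-stack w)           ; back edge into the stack
              (hash-set! lowlink v (min (hash-ref lowlink v)
                                        (hash-ref index w))))))
     (hash-ref graph v '()))
    ;; v is the root of an SCC: pop the stack down to v.
    (when (= (hash-ref lowlink v) (hash-ref index v))
      (let loop ((scc '()))
        (let ((w (car stack)))
          (set! stack (cdr stack))
          (hash-set! on-stack w #f)
          (if (eq? w v)
              (set! sccs (cons (cons w scc) sccs))
              (loop (cons w scc)))))))
  (for-each (lambda (v) (unless (hash-ref index v) (visit v)))
            vertices)
  sccs)
```

The `#f`-used-as-index error ijp hits is consistent with mark_weaver's diagnosis: a successor lookup returning `#f` for a vertex missing from the vertex table.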
<wingo>perhaps the go-like solution could work for guile
<civodul>or a libsigsegv-based thing like the Rust folks seem to have in mind?
<wingo>well rust wants to just mmap big big stacks
<wingo>relying on the vm to page in only the amount needed
<wingo>so e.g. mmap a 1G stack per thread
<wingo>whereas go wants to check for stack overflow on frame entry, and if there's overflow, then copy to a new stack with doubled size
<wingo>would be more robust regarding stack overflow exceptions too, I would imagine
<wingo>if we started new threads with 1 page (4 kB) of memory for their stack, and doubled and copied as needed, you would have enough address space for 1M threads on a 32-bit system -- which sounds quite acceptable
<wingo>dunno about c interoperability though
<wingo>yes, i just realized that that doesn't allow for stack-allocating data though
<dsmith-work>wingo: While you are thinking about stacks and continuations and things, maybe keep in mind some way of saving and reifying a continuation. Maybe even in another process or on another machine?
<wingo>it's not a bad idea but i don't think i have time for it
<wingo>you could build something to do that on top of the linker, i think...
<wingo>yes, the same way you would save a function that references other data
<wingo>only you include free variables into that set of data
<wingo>and already-compiled functions
***b4284 is now known as b4283
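The doubling scheme wingo describes converges quickly. A back-of-the-envelope check (my arithmetic, not wingo's):

```scheme
;; Starting from one 4 kB page and doubling on each overflow, how many
;; copies does it take to reach an 8 MB stack?
(let loop ((size 4096) (doublings 0))
  (if (>= size (* 8 1024 1024))
      doublings
      (loop (* 2 size) (1+ doublings))))
;=> 11
```

Eleven copies in the worst case, with total bytes copied bounded by about twice the final stack size, which is why grow-by-doubling stays cheap.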
<unknown_lamer>SML/NJ seems to perform acceptably despite heap-allocating activation frames
<wingo>down to 18 failures and two errors
<davexunit>so, guile is lgplv3 licensed, and so I decided to also use lgplv3 for my game framework, guile-2d. I noticed that emacsy is gplv3 licensed. I'm wondering if I should do the same.
<davexunit>mark_weaver: I'll read that. guile-2d isn't a standalone program, but it's a bit more than a basic library. kind of muddies the waters.
<davexunit>I guess I'll stick with LGPLv3 because there are certainly tons of other alternative libraries that are licensed permissively or are proprietary.
<wingo>this is an ,optimize output?
<wingo>i think you can only remove it if you can prove that removing it will have no effects, and that's not the case here
<ijp>wingo: but isn't it just a variable reference?
<mark_weaver>well, the one possible effect that might happen in this case is an error, if the variable didn't exist.
***sneek_ is now known as sneek
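The point mark_weaver makes is that a toplevel variable reference is not provably effect-free: referencing an unbound variable signals an error, so the optimizer cannot silently delete it. A contrived illustration (hypothetical names):

```scheme
;; A dead, effect-free expression can be removed by the optimizer:
(define (f) (begin (+ 1 2) 42))        ; (+ 1 2) can be dropped

;; A dead reference to a possibly-unbound toplevel variable cannot:
(define (g) (begin maybe-unbound 42))  ; dropping it would hide the
                                       ; unbound-variable error
```

Only if the compiler can prove the variable is bound (e.g. a lexical or a known module binding) does the reference become removable.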
<stis>hmm, that's true, but how do I prove to the optimizer that it is effect free?
<stis>Hmm, rtl is about 6-7 times slower than gprolog.
<stis>This is actually really good for being on a VM.
<stis>We could cut it by about 3 times on native code.
<stis>It is possible to cut it down even more, but then the closures need to be allocated on the stack!
<stis>That is doable, but is it worth it?
<mark_weaver>It's only possible to allocate closures on the stack in special cases. David Kranz's thesis on the Orbit compiler talks about this.
<mark_weaver>I think we probably already do it in special cases, e.g. loops.
<stis>What I did was to put a ref to the stack-allocated closure in a cons cell.
<stis>All references were programmed to reference the cons cell, not the actual closure.
<TaylanUB>stis: Do you mean guile-log running on RTL Guile is 6-7 times slower than gprolog?
<stis>Yep, for the einstein toy problem!
<mark_weaver>stis: the issue is whether you can prove that the closure will never be accessed after the stack area is deallocated.
<mark_weaver>anyway, like I said, this issue is well discussed in the aforementioned thesis.
<stis>Well, at unwinding, if you have stored the state, aka all-interleave, then one swaps the stack-allocated closure with a heap-allocated one
<mark_weaver>if you have to allocate a cons cell on the heap, why not just allocate the closure on the heap?
<mark_weaver>the allocation is about the same in both cases, I'd expect.
<stis>Well, I did generate a vector containing heap-allocated cons cells and reused those in the normal case.
<stis>When unwinding, if the stack was marked as stored, I swapped the closure and removed the cons cell from the array, allocating a new one in its place
<stis>So all operations that referenced the stack in a way where we could not prove this fact marked the stack
<stis>the fact that we can unwind safely
<stis>But I would not introduce this feature again until the framework is really stable.
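The special cases mark_weaver alludes to are closures that provably do not outlive their creating frame. A hedged illustration (my examples, not from the Orbit thesis):

```scheme
;; This closure is only called during the for-each and never stored,
;; so a compiler could in principle allocate it on the stack:
(define (sum-list lst)
  (let ((total 0))
    (for-each (lambda (x) (set! total (+ total x))) lst)
    total))

;; This closure escapes via the return value, so it must live on the
;; heap -- the frame that created it is gone by the time it is called:
(define (make-adder n)
  (lambda (x) (+ x n)))
```

Logic-programming backtracking makes the escape analysis harder, which is exactly the safety problem stis's stack-marking scheme is trying to paper over.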
<stis>Cause there can be some really nasty bugs to catch during development
<mark_weaver>so you're talking about cases where the compiler can statically prove that the closure doesn't escape via the normal return continuation, but might escape via other continuations.
<stis>Well, it's looser than that. Assuming you use the framework as intended, no problem should appear, but you can always break it if you are nasty.
<stis>But especially if we implement a prolog dialect, it can be safe to use this!
<stis>btw. I managed to get decent performance out of vhashes for guile-log!
<mark_weaver>anyway, I'll repeat what I've said before: if you could earn our trust that when implementing something, you had been careful to cover all the cases properly, then we could make more use of your work. but as things stand, I think we feel that we have to go over your work with a fine-tooth comb to check everything. and that's a lot of work for us, so in practice it doesn't get done.
<stis>Well, I don't implement this strategy right now, just have the framework ready!
<stis>Really, these things are valuable only after we have native code generation!
<stis>btw. I'm working on using vhashes instead of assocs. Got really nice speedups there!
<stis>comparing to plain assocs or functional trees!
<stis>But this again needs the toothbrush ;-)
<stis>Cause logic programs backtrack, and we again need to know if we can backtrack the vhash as well.
<stis>e.g. make use of the mark of the stack.
<stis>Only using these assumptions showed good performance!
<mark_weaver>my gut feeling is that vhashes are just the wrong data structure if you need thread safety. HAMTs are probably better.
<stis>I did try out HAMT; it has good asymptotics, but in reality I see that the constant is quite demanding!
<stis>Well, I suppose a HAMT is an almost-balanced tree. I generated truly random keys for the variables, which were allocated only once in this algorithm,
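For readers unfamiliar with the vhashes being discussed: they live in Guile's (ice-9 vlist) module and behave like functional association lists with much faster lookup:

```scheme
(use-modules (ice-9 vlist))

(define v1 (vhash-cons 'x 1 vlist-null))  ; "consing" is functional:
(define v2 (vhash-cons 'y 2 v1))          ; v1 is left untouched
(vhash-assoc 'x v2)   ;=> (x . 1)
(vhash-assoc 'y v1)   ;=> #f
```

Distinct vhashes built from a common ancestor share underlying storage, which is the thread-safety caveat behind mark_weaver's gut feeling.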
<stis>and assumed the tree would be OK balanced, just to test it out.
<stis>Then the tree was functionally updated. Is this HAMT?
<stis>or an approximation of it?
<stis>anyway, that algorithm did have a large constant
<mark_weaver>hmm, I guess I should be more specific. I'm talking about the purely-functional HAMTs as implemented in Clojure.
<mark_weaver>so each node has 32 entries, and thus consumes 5 bits of the hash. the HAMT is as deep as it needs to be to prevent collisions.
<stis>Ok, I see. yeah, increasing the fanout might do the trick!
<mark_weaver>nothing is ever mutated. when you update the HAMT, the nodes are copied starting from the leaf and up to the root.
<mark_weaver>Rich Hickey came to the conclusion that a fanout of 32 was a good choice.
<mark_weaver>we should probably have something like that implemented in Guile.
<stis>Hm, (reading) interesting!
<ijp>I have half an implementation, but I can't remember why I never finished it
<stis>Anyway, I will try to investigate some variants of vhashes first, which are thread safe but may have other drawbacks,
<stis>like getting bad asymptotics!
<stis>and getting trashed facing the unwinding reality of logic programs!
*wingo realized he put source information in the wrong place in cps
<wingo>it should go on $continue, not on $cont.
<madsy>When implementing new methods for guile, is it possible to add them into a module?
<TaylanUB>madsy: You mean an existing module? That's very unusual/discouraged, but there's module-define! and module-add! and such.
<mark_weaver>madsy: do you mean 'method' in the sense of GOOPS? I'm sorry, but your question is very unclear.
*TaylanUB assumed procedures and somesuch were meant.
<madsy>mark_weaver: If I implement a new method with the C interface, it becomes visible in all the environments, right?
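The leaf-to-root copying mark_weaver describes reduces, at each level, to a functional update of one small fixed-size vector; nodes off the update path are shared, so an update touches only O(log32 n) nodes. A sketch of the single-node operation (not ijp's half-finished implementation):

```scheme
;; Functional update of one 32-slot HAMT node: copy the node, mutate
;; only the copy, and share every other node with the old tree.
(define (node-set node i child)
  (let ((copy (vector-copy node)))
    (vector-set! copy i child)
    copy))
```

With a fanout of 32, a map of a million entries is only about four levels deep, which is where the good constants of Clojure's HAMTs come from.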
<madsy>And not under any particular module/package
<madsy>It would be nice if I could add it under a module and avoid pollution
<mark_weaver>first of all, you are using the term 'method' incorrectly, I think.
<madsy>What should I call it then? API function?
<mark_weaver>but yes, you can add a procedure defined in C to just a single module.
<mark_weaver>in fact, they are always added to just a single module. however, many of the procedures defined in C go into the (guile) module, which is imported by most other modules.
<madsy>Ah, right. I thought they became built-in and didn't belong to any module
<mark_weaver>scm_c_define_gsubr adds bindings to whatever is the "current module" at the time that scm_c_define_gsubr is called.
<madsy>So I just need to create a new module, make it current, and then call scm_c_define_gsubr
<madsy>mark_weaver: I also found a livable solution regarding the discussion the other day on scm_gc_register_allocation
<madsy>I overloaded the new and delete operators and called scm_gc_register_allocation explicitly in them
***turbofai` is now known as turbofail
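The recipe madsy arrives at (create a module, make it current, call scm_c_define_gsubr) is exactly what scm_c_define_module packages up: the init callback runs with the new module current. A C sketch with hypothetical names (requires libguile, so not compilable standalone):

```c
#include <libguile.h>

static SCM
my_hello (void)
{
  return scm_from_utf8_string ("hello from C");
}

/* Runs with (my module) as the current module, so the gsubr lands
   there instead of polluting (guile).  */
static void
init_my_module (void *unused)
{
  scm_c_define_gsubr ("my-hello", 0, 0, 0, my_hello);
  scm_c_export ("my-hello", NULL);
}

void
register_my_module (void)
{
  scm_c_define_module ("my module", init_my_module, NULL);
}
```

After registration, Scheme code reaches the binding with `(use-modules (my module))` rather than via `@@` tricks.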
<alexei_>madsy: scm_c_define_gsubr("proc") outside of a module scope will put it into the (guile-user) module. You will need some magic like (@@ (guile-user) proc) to access that.
<madsy>alexei_: Thanks. mark gave me some pointers already :-)
<madsy>Another question: In my smobs, should I use scm_gc_malloc for everything, or just the actual structure itself?
<madsy>I mean, if my structure for my type has its own internal pointers, they will always be referred to by the object itself, as long as the object is alive
<madsy>So in my head, there's no reason to use scm_gc_malloc for those, as opposed to scm_malloc
<madsy>It makes sense to use scm_gc_malloc for the structure itself, before calling scm_new_smob to declare a new SCM
<madsy>Because SCM smobs themselves can be referred to by guile code
<mark_weaver>well, in general, if you can avoid using finalizers, you should.
<mark_weaver>finalizers have various problems, and are more expensive to deal with.
<mark_weaver>so, if you can let the GC deallocate everything that your smob references, that's ideal.
<madsy>But that would require a smob type for the "what" parameter, right?
<madsy>So all my struct internal data would become smobs themselves
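mark_weaver's suggestion does not require wrapping internal data in extra smob types: a buffer allocated with scm_gc_malloc is reclaimed by the GC together with the smob that references it, so no free (finalizer) function is needed at all. A sketch with hypothetical names (requires libguile):

```c
#include <libguile.h>

static scm_t_bits image_tag;

struct image {
  int width, height;
  unsigned char *pixels;   /* GC-allocated, not a smob */
};

static SCM
make_image (SCM w, SCM h)
{
  struct image *img = scm_gc_malloc (sizeof (struct image), "image");
  img->width  = scm_to_int (w);
  img->height = scm_to_int (h);
  /* _pointerless: the GC won't scan the buffer for references. */
  img->pixels =
    scm_gc_malloc_pointerless (img->width * img->height, "image pixels");
  return scm_new_smob (image_tag, (scm_t_bits) img);
}

void
init_image_type (void)
{
  image_tag = scm_make_smob_type ("image", 0);
  scm_c_define_gsubr ("make-image", 2, 0, 0, make_image);
}
```

Since `pixels` is reachable only through `img`, both die together and there is nothing to finalize; scm_malloc'd internals, by contrast, would force a free function on the smob type.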