IRC channel logs



***ijp` is now known as ijp
***sethalve_ is now known as sethalves
<taylanub>I want to make a "wrapper" module of some sort that imports a "base" and some "built-ins" of my module package, can I automatically re-export all of this wrapper module's imports ?
<ijp>you can, though I don't think it's documented
<taylanub>You mean via the reflective module API ? Yeah, was trying to avoid that.
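As ijp notes, the trick is undocumented; a speculative sketch using Guile's reflective module API (module names here are hypothetical, and note that `module-uses` also returns default interfaces such as Guile's own, which you would likely want to filter out):

```scheme
(define-module (my wrapper)
  #:use-module (my base)       ; hypothetical module names
  #:use-module (my builtins))

;; For each interface this module uses, re-export its public bindings.
;; Unfiltered, this also re-exports core (guile) bindings.
(for-each
 (lambda (iface)
   (module-for-each
    (lambda (sym var)
      (module-re-export! (current-module) (list sym)))
    iface))
 (module-uses (current-module)))
```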
<davexunit>hello guilers.
<taylanub>Hidy ho
<davexunit>I'm seeking opinions on something.
<davexunit>I am writing a game (library). since it is a realtime application, there is a main loop involved.
<davexunit>I want to have the power to modify the game as it runs, so I pass the --listen flag to guile and connect to it with geiser.
<davexunit>however, since the repl server is run on a different thread, I can potentially crash the game due to the typical threading pitfalls.
<davexunit>what would be the best way to deal with this situation?
<amirouche>what do you call «threading pitfalls» ?
<davexunit>amirouche: thread A and thread B are writing to the same memory.
<davexunit>the underlying library, SDL, isn't thread safe, so doing things like loading an image from the remote REPL will crash the game loop.
<amirouche>can't you add a «Task» in the other thread to do the job ?
<amirouche>I mean a task in the game loop
<davexunit>my first solution is adding pause/resume procedures to start/stop the game loop.
<davexunit>amirouche: yes, that works for certain things like "spawn an enemy at (x,y)"
<davexunit>what it doesn't work for is modifying and re-evaluating code.
<ijp>one non-solution is to run it in the same thread :)
<davexunit>ijp: that would be ideal. :)
<taylanub>davexunit: I think run-loops AKA run-queues AKA dispatch-queues are the thing nowadays ..
<taylanub>There's a run-q module.
<ijp>unless you were to pause the game somehow
<ijp>say on an 'r' key event, you run a repl in the main thread, and when that quits go back into the loop
<davexunit>ijp: pausing is the simplest solution, and one that I've implemented. I decided to ask here to see if there were any other techniques that I could use, perhaps.
<taylanub>Register procedures to be called by the main loop, from other threads.
<davexunit>ijp: yeah, an in-game REPL wouldn't have these issues. but an in-game REPL doesn't give me the power of geiser in emacs. :)
<davexunit>I suppose the simple pause/resume procedures will be what I stick with.
<taylanub>Perhaps you could even modify the REPL such that all entered expressions are run in the main loop, just not the REPL itself.
<ijp>well, you can still run it over a socket, even in the one thread
<ijp>but you just couldn't keep it running all the time
<ijp>run-server vs spawn-server
*davexunit goes to the docs
<davexunit>so with run-server, I'd have to kill it at some point to let the game resume
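The distinction being discussed, as a minimal sketch using `(system repl server)`:

```scheme
(use-modules (system repl server))

;; spawn-server: the REPL runs in a *separate* thread, so code entered
;; there races against the game loop (the problem discussed above).
(spawn-server (make-tcp-server-socket #:port 37146))

;; run-server: blocks the calling thread, so REPL code runs in the main
;; thread -- but the game loop cannot resume until the server returns.
;; (run-server (make-tcp-server-socket #:port 37146))
```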
<taylanub>While I have no idea about the exact architecture of the library, I wonder what makes the dispatch strategy infeasible ?
<davexunit>taylanub: I like to C-x C-e in my scheme buffers to re-define a procedure in a module with geiser.
<davexunit>if that procedure is called frequently... potential crash.
<taylanub>I mean, make a "dispatch queue" object with atomic access, then in your main loop always empty it, and from other threads add to it, and tweak the REPL/Geiser a bit to never execute code directly but rather to insert procedures into that queue object.
<davexunit>taylanub: that would work, I guess the question becomes: how to tweak geiser?
<davexunit>what do you mean by atomic access?
<taylanub>Avoid race conditions while checking, inserting, removing ...
<davexunit>okay that's what I thought.
<taylanub>It will be the one central shared object that needs manual locking.
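A minimal sketch of such a dispatch queue, assuming a mutex as the locking primitive (procedure names are made up for illustration):

```scheme
(use-modules (ice-9 q)         ; simple queues
             (ice-9 threads))  ; make-mutex, with-mutex

(define dispatch-queue (make-q))
(define dispatch-mutex (make-mutex))

;; Called from the REPL thread: schedule a thunk for the main loop.
(define (dispatch! thunk)
  (with-mutex dispatch-mutex
    (enq! dispatch-queue thunk)))

;; Called once per main-loop iteration: drain and run pending thunks.
(define (run-dispatched-tasks)
  (let loop ()
    (let ((thunk (with-mutex dispatch-mutex
                   (and (not (q-empty? dispatch-queue))
                        (deq! dispatch-queue)))))
      (when thunk
        (thunk)   ; run outside the lock, so tasks can dispatch! too
        (loop)))))
```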
*taylanub checks how C-x C-e and C-M-x work ...
<ijp>I think an implementation of those exists in guile-lib, but I can't remember
<taylanub>There's a "run-queue" module, but I'm guessing that davexunit's main-loop could already do it ... (maybe is accidentally a re-implementation of the module ?)
<davexunit>taylanub: can't find docs for run-queue
<davexunit>I have an "agenda" module, which uses the (ice-9 q) module, for scheduling things.
<davexunit>it's modified from the agenda described in SICP.
<taylanub>Yeah, the module lacks docs, or they were removed at some point; the 1.8 or 1.6 docs might have them. The source is currently in module/ice-9/runq.scm
<davexunit>taylanub: thanks I'll take a look
<davexunit>if it fits the bill, I can ditch the code that I wrote.
<davexunit>if I use runq, I will need to add some code on top of it. my agenda module allows tasks to be scheduled some n number of game updates later.
<taylanub>Maybe just keep your code .. for some reason I'm not very fond of runq when I read it .. :P
<davexunit>oh good, I'm not the only one that got that impression.
<davexunit>it has some weird OO-but-not-really interface
<taylanub>I wonder why it allows the insertion of lists of procedures to be called, when one can simply insert a procedure that calls those procedures?
<taylanub>Perhaps simply because it's nicer to type (list proc proc2 proc3) instead of (lambda () (proc) (proc2) (proc3)) ...
*ijp places £0.50 on ttn having written the module
<ijp>hmm, doesn't say
<ijp>1996 may be a bit before him though
<davexunit>I think taylanub was correct about the *right* way to solve my problem.
<ijp>I don't know how your main loop is structured, but you might consider making it a top-level function, so that it can also be swapped out
<ijp>I learned that from livehacking my web server
<davexunit>swapping out the main loop?
<davexunit>I didn't think that would work very well.
<ijp>stranger things have happened!
<ijp>well, take the web server example
<davexunit>sure, please elaborate.
<ijp>on every loop, it calls a function, say 'dispatch'
<ijp>if that function is top level, it can be changed, and will get swapped in on the next loop
<ijp>otherwise, you'd need to restart
<davexunit>dispatch is the top level function you're talking about, right?
<davexunit>I have update, draw, key-up, key-down etc. callbacks right now. I use trampolines so that I can redefine them.
<davexunit>(define (update) (do-stuff)) (set-update-callback! (lambda () (update)))
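Spelling out the trampoline pattern from the line above (`do-stuff` and friends are hypothetical game procedures): because the registered callback closes over the *variable* `update` rather than capturing its current value, redefining `update` at the REPL takes effect on the next frame with no re-registration.

```scheme
(define (update)
  (do-stuff))                        ; hypothetical game logic

;; Register a trampoline, not update itself.
(set-update-callback! (lambda () (update)))

;; Later, from Geiser / the remote REPL:
(define (update)
  (do-stuff)
  (do-new-stuff))                    ; picked up automatically next frame
```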
<davexunit>I'm trying to write a procedure now that will let me schedule an eval operation.
<ijp>davexunit: right, same thing
<davexunit>ijp: okay. cool. :)
<ijp>a little better in some circumstances if you are passing them around as higher-order values
<davexunit>so I guess I need to follow taylanub's advice and make task scheduling thread-safe.
<davexunit>I was able to tweak geiser... sort of.
<taylanub>davexunit: Possibly you could just do it at the REPL-level.
<taylanub>Such that both entering an expression into the REPL, and passing one via Geiser, would be handled at once.
<davexunit>taylanub: do you have an idea of how that would be accomplished? this is unfamiliar territory for me.
<taylanub>Well I don't know if Geiser uses the REPL to send expressions, it probably doesn't ?
<taylanub>I assumed at first that it probably just fakes an interactive user on the Guile REPL, but I guess that was a faulty assumption.
<davexunit>it ends up sending a ,geiser-eval expression to the REPL
<taylanub>Oh, interesting.
<davexunit>looks like as far as updating code, the remote REPL works fine. I haven't experienced a crash changing the render callback that gets called ~60 times per second.
<taylanub>What kind of game is this, by the way ?
<davexunit>it's a 2d game framework, not any particular game.
<davexunit>I have a game in mind, but I think guile could use a game library. :)
<taylanub>I see, neat.
<davexunit>right now, if I want to load an image, I need to define the variable first, then schedule a procedure that loads the image and assigns it to the variable.
<davexunit>I'd like to make stuff like that happen automagically, if possible.
<davexunit>taylanub: you mentioned providing atomic access to the scheduler, so that it is thread-safe. that can be done with a simple mutex, correct?
<taylanub>Off the top of my head I see two possible solutions to that: either just use a macro that defines it and schedules said procedure, or use some wrapper data-type for those images which loads the image while you can already use this wrapper object (in a limited way?).
<taylanub>Whoops, that was to your previous line.
<taylanub>davexunit: A mutex should do, but there might be more light-weight solutions ...
<davexunit>taylanub: I like the macro idea. a promise object would work for the latter suggestion
<taylanub>The promise can still block though, not ?
<taylanub>(Maybe that's OK ..)
<davexunit>perhaps. I'm actually not familiar with guile's promises.
<davexunit>taylanub: what would the more lightweight solution be? a simple boolean flag?
<davexunit>and then something like (with-lock-agenda agenda (do-some-stuff))
<davexunit>but that won't work. mutexes will block until they can access the resource.
<ijp>a promise is evaluated all at once the first time it is forced
<ijp>if you want something like a promise, but that is evaluated (possibly) in parallel, you want a future, but that can still block if it hasn't been evaluated already when you 'touch' it
<davexunit>I'll look into futures.
<ijp>the api is roughly the same as promises
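A small sketch of the futures API from `(ice-9 futures)`; `expensive-load` is a hypothetical stand-in for something like image loading:

```scheme
(use-modules (ice-9 futures))

;; Work may begin in parallel as soon as the future is created ...
(define f (make-future (lambda () (expensive-load "player.png"))))

;; ... and touch blocks only if the result isn't ready yet.
(touch f)
```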
<taylanub>Wait, I just confused promises with futures, what is a promise then ? Oh, the return-value of a `delay' form, isn't it ?
<ijp>(delay foo) is like a memoised version of (lambda () foo)
<taylanub>Indeed, that was the next thing I was going to ask --and ask myself why I never asked it before--, how delay/force is different from lambda/apply ...
<taylanub>The memoization is the only difference, or ?..
<ijp>pretty much
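The equivalence being discussed, sketched by hand (`memo-thunk` is a made-up name):

```scheme
(define p (delay (begin (display "computing\n") 42)))
(force p)  ; prints "computing", returns 42
(force p)  ; returns 42 without recomputing

;; Roughly what delay/force buys over a bare thunk:
(define (memo-thunk thunk)
  (let ((done? #f) (value #f))
    (lambda ()
      (unless done?
        (set! value (thunk))
        (set! done? #t))
      value)))
```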
<taylanub>Interesting, I guess it could be said that lazy-evaluation and memoized-lambda are "apparently separate" concepts that so happen to share the same implementation ...
<ijp>not really separate
<ijp>delaying computation is an essential part of lambda
<taylanub>davexunit: Re. more lightweight solution to mutexes .. I was thinking of an atomic fetch-and-add kind of thing, although for a moment I thought a plain integer could do that via 1+ and 1- (it can't).
<davexunit>hmm this is proving to be a tricky problem.
<taylanub>Well, a mutex should actually be just fine, since you probably won't insert lots of procedures into that queue ?
<davexunit>most likely not
<taylanub>I think a mutex has an overhead of 40 ms or something, that might be a lot when compared to some simple CPU instructions but shouldn't be problematic unless you're locking like crazy. :P
<davexunit>that's a pretty significant overhead for me, actually.
<taylanub>Oh .. hrm .. if we have overflowing integers in Guile, then I have a neat idea that's entirely free of locking; otherwise I have an idea that will reduce the locking to cases where the queue *isn't* empty, and not every main-loop iteration will lock the mutex.
<taylanub>Latter idea first: (unless (zero? queue-flag) <process the queue, then lockingly zero the flag>)
<taylanub>So that's just a comparison-to-zero and branching, so long as the queue is empty (most common case, presumably).
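The flag idea as a sketch: the common empty-queue case costs only a comparison, and the mutex is taken only when work is actually queued (`drain-queue!` and the loop body are hypothetical).

```scheme
(use-modules (ice-9 threads))

(define queue-flag 0)                 ; set non-zero (under the mutex) by producers
(define dispatch-mutex (make-mutex))

(define (main-loop-iteration)
  (unless (zero? queue-flag)          ; cheap check, no lock in the common case
    (with-mutex dispatch-mutex
      (drain-queue!)                  ; hypothetical: run all queued thunks
      (set! queue-flag 0)))
  (update)
  (draw))
```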
<ijp>remember kids: measure, measure, and measure again
<taylanub>Other idea is, you keep a consumer-counter in the main loop thread and a producer-counter in the thread that can add to the queue, then you just do (unless (= producer-counter consumer-counter) <while queue isn't empty, pop and apply procedure, and add to consumer-counter>), but now I'm unsure on whether we also need to lock the queue itself nevertheless ...
<taylanub>Hey, actually, do you even need to lock a "one way only" (terminology?) queue ?
<davexunit>I think the resource loading problem is the biggest issue for me, at the moment. I want to write code at the REPL the same way that I would write it otherwise.
<davexunit>taylanub: I think so. it's being mutated a lot. agenda-schedule and update-agenda both mutate the agenda.
<taylanub>I mean, when you only ever add objects from one end and take objects from the other end, then updating should be safe without a lock, as long as there's no more than one producer and one consumer. My mind isn't exactly clear yet though so I might be talking bull.
<davexunit>it might be just fine. not sure. I'll find out. :P
<taylanub>If I knew some proper math I could probably prove it with some formula, but all I can say is that when I strain my mind it seems that a queue where you can only ever pop from the front and only ever add to the rear should be safe without locking.
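taylanub's conjecture, sketched as a single-producer/single-consumer ring buffer: each index is written by exactly one thread, so *if* reads and writes of small integers are atomic and suitably ordered (something Guile does not formally guarantee, hence the hedging in the chat), no lock is needed. All names here are illustrative.

```scheme
(define size 64)
(define ring (make-vector size #f))
(define head 0)   ; written only by the consumer (main loop)
(define tail 0)   ; written only by the producer (REPL thread)

(define (produce! thunk)              ; producer thread only
  (vector-set! ring (modulo tail size) thunk)
  (set! tail (+ tail 1)))             ; publish only after the slot is filled

(define (consume-all!)                ; consumer thread only
  (while (< head tail)
    ((vector-ref ring (modulo head size)))
    (set! head (+ head 1))))
```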
<taylanub>I think my mind will become clearer when I write the implementation I have in mind. :P