IRC channel logs

2017-01-06.log

<janneke>mescc compiles strcmp to handle --help :-)
<janneke>ACTION -> zZzzz
<davexunit>paroneayea: resuming the actor-alive? discussion
<davexunit>I think the recursive messaging thing is nice, but it's kind of a bummer that you couldn't have a local loop without calling actor-alive? on every iteration
<davexunit>(while #t (foo) (something-that-yields) (bar))
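A rough sketch of the per-iteration check being discussed, assuming the actor-alive? predicate from the conversation and a hypothetical actor binding in scope:

    (let loop ()
      (when (actor-alive? actor)   ; the check needed on every iteration
        (foo)
        (something-that-yields)
        (bar)
        (loop)))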
<paroneayea>davexunit: so, I guess we could
<davexunit>the reason why I have so much of an opinion here is because I ran into this same problem
<paroneayea>though it would require another check every time we resume any coroutine
<davexunit>yes
<davexunit>so that's what I ended up doing
<paroneayea>that would be a lot more calls to hash-ref!
<paroneayea>unless
<paroneayea>well, there's maybe another way to do it I can think of
<paroneayea>I've been thinking of wrapping a registered actor in a record anyway, and I could set a dead/alive flag on it
<paroneayea>it would require some rearchitecting probably
<davexunit>in my library I didn't have actors, it was more like fibers, but I wanted an enemy to be able to die and have all their threads killed.
<davexunit>wrapping each continuation in a lambda that tested for aliveness did the trick
<davexunit>then I was only doing the check in one place
<paroneayea>davexunit: yeah, that makes sense.
<paroneayea>davexunit: I'll make a TODO item to play with it.
<davexunit>not after every operation that could yield
<davexunit>food for thought. :)
<paroneayea>davexunit: I recently added a wrapper around continuations for another reason
<davexunit>it's analogous to escaping strings in templating engines to me
<davexunit>enforce the rule in one place and no one can screw up
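A minimal sketch of the approach davexunit describes, with hypothetical names wrap-resumable and kont; the only real assumption is an actor-alive? style predicate:

    ;; Do the liveness check at the single point where continuations
    ;; are resumed, so individual loops never need to test it.
    (define (wrap-resumable actor kont)
      (lambda args
        (when (actor-alive? actor)   ; a dead actor's continuation is dropped
          (apply kont args))))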
<paroneayea>so I could make use of that to add a resume? thunk
<davexunit>cool
<paroneayea>davexunit: that makes sense
<paroneayea>davexunit: I'll give it a try.
<davexunit>you said it was simple to test if an actor was alive
<davexunit>I wouldn't worry about the number of hash-ref calls unless you see a clear performance hit from it.
<davexunit>ACTION says as he ruthlessly tries to optimize some of his own code
<paroneayea>:)
<paroneayea>davexunit: some of the artifacts of 8sync were that initially I thought the agenda itself was going to be a lot more full-featured
<paroneayea>but over time the actor model has become where everything interesting has happened
<paroneayea>and the agenda keeps getting whittled down
<davexunit>I had a similar experience
<paroneayea>which I think is a good thing.
<davexunit>agreed.
<paroneayea>the funny thing is though
<paroneayea>the name 8sync is still in use
<paroneayea>as something that kicks off another thing scheduled on the agenda
<paroneayea>but it's hidden in the actor model!
<davexunit>ha
<paroneayea>an 8sync user might never see it
<davexunit>buried in abstraction
<paroneayea>so the titular name of 8sync is now
<paroneayea>yep
<paroneayea>exactly! buried.
<paroneayea>but, I'm convinced that i shouldn't feel bad about this historical aspect
<paroneayea>I saw somewhere an advocacy of lisp programming / repl driven development / live coding as "finding the right design through successive approximation"
<paroneayea>and 8sync has definitely been a product of that.
<linas>so how does one examine the heap?
<paroneayea>luckily I think we're starting to get to the right core ideas in 8sync now.
<paroneayea>hi linas !
<davexunit>paroneayea: I like that quote
<paroneayea>linas: guile's heap? maybe I'm wrong but I don't think you do?
<davexunit>doo dee doo, don't mind me just replacing core bindings min/max with macros...
<paroneayea>linas: I could be wrong but I think the heap is not much exposed, especially since the garbage collector does a lot of management there and we don't have a lot of control over it
<paroneayea>someone might correct me though
<linas>hi paroneayea
<linas>well, my heap seems to be growing without bound, and I want to get a clue about why
<linas>probably some crazy bug in my code... but I don't know where to start
<davexunit>you can run the program with GC_PRINT_STATS=1 to get some info
<paroneayea>linas: there's also a gcprof procedure in the statprof library
<davexunit>and maybe gcprof will have some info but I don't think it will help in this case
<paroneayea>I haven't used it though.
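For reference, gcprof from the (statprof) module takes a thunk; a minimal sketch, with a throwaway allocation loop standing in for the real workload:

    (use-modules (statprof))

    ;; Report where time is spent in GC while the thunk runs.
    (gcprof (lambda ()
              (let loop ((i 0))
                (when (< i 1000000)
                  (make-vector 100)   ; garbage to exercise the collector
                  (loop (+ i 1))))))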
<linas>that displays something more than just (gc-stats) ?
<paroneayea>ha :)
<davexunit>I don't think you can get the info you are after
<paroneayea>davexunit: so there are some things I should adjust for sure... but you think the general direction of 8sync looks good from what you read in the tutorial?
<davexunit>paroneayea: yes I do
<paroneayea>yay \\o/
<davexunit>the examples were fun
<paroneayea>:)
<linas>+1 for 8sync, btw .. I just got off a long jag of nasty C++ networking code that was ... tediously buggy
<paroneayea>:)
<paroneayea>one thing I'm excited about is that the design does lend itself well to mixing in a lot of different types of actors
<paroneayea>so it would not be hard to be running a MUD game like in mudsync, and toss in a web server that shows a leaderboard
<paroneayea>or even add a web interface with websockets for the GUI alongside the telnet option
<paroneayea>I hope we can make 8sync reasonably batteries-included for things like web stuff, basic worker queues, websocket support, etc
<davexunit>yeah that is fun
<davexunit>I had visions in my head of using it for games
<davexunit>one thing I thought of was a network multiplayer bomberman clone
<davexunit>a game on the simpler side but tons of fun
<paroneayea>ooh yeah that could be cool
<paroneayea>I love bomberman.
<davexunit>me too
<paroneayea>I used to play a c64 version of it a lot
<davexunit>saturn bomberman supported 10 players!
<paroneayea>:O
<paroneayea>oh man I accidentally got myself sucked into a distraction vortex
<paroneayea>now I'm watching gameplay footage from Creatures for the c64
<paroneayea>such a great game
<davexunit>hehe
<davexunit>sorry!
<artyom-poptsov1>Hello Guilers.
<artyom-poptsov1>FWIW, some of my experiments with Guile-ICS: https://gist.github.com/artyom-poptsov/5d781ee0cfcc4201726fcaa27777c244
<paroneayea>I guess there was some weird stuff in that game too
<davexunit>artyom-poptsov1: neat. ICS is a calendar format, I take it.
<artyom-poptsov1>davexunit: Yes, but there's no release of Guile-ICS yet.
<davexunit>what do you plan to use it for?
<artyom-poptsov1>Oh, well, I could parse Google Calendar stream with it, for example.
<dsmith-work>paroneayea: Hah! c64. My first machine.
<artyom-poptsov1>And I could use it with an IRC bot living in our hackerspace's chat room to handle requests about events or something.
<davexunit>all good uses :)
<artyom-poptsov1>There's some work to be done before the Guile-ICS release; at least I should write Texinfo documentation for it. :-)
<artyom-poptsov1>davexunit: https://gist.github.com/artyom-poptsov/5d781ee0cfcc4201726fcaa27777c244#file-fossdem-schedule-to-org-mode-scm
<davexunit>artyom-poptsov1: cool!
<artyom-poptsov1>The parser is not perfect, of course.
<linas>every reply after reply 5 on https://wingolog.org/archives/2016/02/08/a-lambda-is-not-necessarily-a-closure is spam ...
***baijum is now known as Guest68954
<add^_>Happy friday?
<civodul>definitely!
<add^_>:-D
<add^_>Hey Ludo, long time no see! (Although I wouldn't be surprised if you've forgotten about me)
<add^_>lol
<dsmith-work>Happy Friday, Guilers!!
<paroneayea>davexunit: I've thought a lot about your suggestion to just make all coroutines simply not resume if an actor dies
<paroneayea>and I think you're right
<paroneayea>right enough where I should delay the 8sync 0.4 release to change this
<paroneayea>so that the documentation is simplified
<paroneayea>might as well get it right the first time
<davexunit>paroneayea: well just don't delay too long
<davexunit>I have that habit
<paroneayea>:)
<davexunit>ACTION still hasn't released haunt 0.2...
<linas>Sooo .. I'm pondering this pseudo mem-leak, which I totally don't understand.
<linas>I have some guile c++ code that mallocs a 100-byte object.
<linas>I have some scheme code that calls the above as fast as possible in an infinite loop.
<linas>For 5-10 minutes, the guile heap grows fast, but then settles down.
<linas>after 10 minutes, the malloc has been called about 100 million times, and the guile heap is at about 100 million bytes.
<linas>after 20 minutes, it's 200M calls and 120MB heap
<linas>after 40 minutes, it's about 400M calls and 140MB heap.
<linas>and the guile heap usage seems to slowly slowly crawl upward.
<linas>So wtf???
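The scheme side of a test like linas describes might look like this; make-foreign-object is a hypothetical stand-in for the C++ binding that mallocs the 100-byte object and wraps it in a smob:

    ;; Allocate smob-wrapped objects as fast as possible, forever,
    ;; leaving reclamation entirely to the GC and smob free functions.
    (while #t
      (make-foreign-object))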
<random-nick>what malloc do you use?
<linas>Also: `top` shows that VIRT is about 10x larger than the guile heap size, and RESIDENT is about 2x larger than the guile heap size
<linas>I'm actually using c++ operator new
<wingo>linas: malloc malloc or scm_gc_malloc ?
<wingo>ok
<wingo>so, i have the answer, let me find it
<linas>operator new which is malloc and then scm_gc_register_collectable_memory
<linas>it's vaguely like memory fragmentation
<wingo>see notes on https://www.gnu.org/software/guile/manual/html_node/Memory-Blocks.html#Memory-Blocks
<wingo>you are calling register_allocation or no?
<wingo>with guile 2.1 i assume?
<linas>except all the objects are the same size, so it shouldn't really fragment
<linas>guile 2.1 with scm_gc_register_collectable_memory
<linas>although I believe I see the same thing with guile 2.0 also
<wingo>hmm
<wingo>wait you are leaking the memory?
<linas>?
<linas>I call scm_gc_register_collectable_memory and then guile does whatever it does.
<linas>It's currently "leaking" at the rate of maybe 0.1 bytes per malloc. The "leak" gets slower and slower
<wingo>is it asymptotic? :)
<wingo>btw, i didn't see in your description where you were allocating managed memory
<wingo>if you just call a function in a loop that shouldn't allocate anything, right?
<wingo>when do you free the memory?
<linas>I never free the memory, I let guile do it. I poke the memory into a smob
<wingo>!
<linas>It's almost asymptotic. It almost stops growing after an hour (and 600M mallocs)
<wingo>i think everything might be ok then. what do you do with the smob?
<wingo>do you use finalizers to free the memory then?
<linas>I registered the free with scm_set_smob_free
<linas>Its a "psuedo-leak" -- after an hour, maybe 60GB has been malloced and freed, the guile heap is about 140MB so in that sense "its all OK"
<linas>but after 90 minutes, the guile heap is now 200MB .. still quite mangeable.
<linas>but seemingly way too large for a loop that is only mallocing 100 bytes at a time
<wingo>heh
<wingo>i encourage you to look into it if you are interested in these things :)
<linas>The production system zooms up to 10GB of guile heap before it "stabilizes" -- that's what is driving this
<wingo>10GB!!!
<linas>yeah .. and it will stay there for a few hours and then go up to 12 GB...
<random-nick>linas: did you test with using C's malloc instead of C++'s new with the same code?
<linas>the production system has maybe 50 or 100 guile threads doing all kinds of crap, allocating smobs almost as fast as possible, pushing things around.
<linas>I mean, it seems to all work great, except that the mem usage is a bit outlandish.
<linas>wingo I am trying to decide if I should write a tiny stand-alone demo for this.
<linas>the problem is.. I'll write it, run it, reproduce the problem, and then.. what, dive into the guts of bdw-gc? That's maybe more than I want to do.
<random-nick>linas: well, when you reproduce the problem you can try to experiment with different things
<linas>like what?
<random-nick>linas: like running it with an older guile version (2.0?)
<linas>the "productin" system is on 2.0, I beleive the problem exists there too
<linas>I can double-check to see if its better or worse, but I suspect there's no change
<random-nick>linas: when you write it you can share it here
<wingo>linas: it's not bdw-gc
<paroneayea>hm
<wingo>it's above that level
<random-nick>huh, is named let* not a thing?
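For reference, standard Scheme has named let but no named let* variant; a named let looks like:

    ;; named let: n and acc are rebound on each recursive call
    (let fact ((n 5) (acc 1))
      (if (zero? n)
          acc
          (fact (- n 1) (* acc n))))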
<paroneayea>davexunit: I just realized i'll need to move 8yield and 8sleep to be being managed by the actor system to make that work. Maybe I'll make this a 0.4.1 thing :)
<paroneayea>I'll just add a footnote that in 0.4.1, checking actor-alive? will likely not be necessary.
<davexunit>okay :)
<wingo>see the impl of scm_gc_register_allocation, and note that if you have free functions on your smobs, those smobs are asynchronously finalized, so to an extent you have the main prog racing with the finalization thread, and free() takes more time than malloc()...
<paroneayea>davexunit: I know how to do it at least now
<paroneayea>but 0.4.0 is so close, I should get it out the door I think!
<davexunit>yeah, that sounds like a good idea.
<paroneayea>thanks for prompting me on it though... I think it's the right direction :)
<paroneayea>bbiab!
<linas>wingo: ahh! racing! thank you! that is actually something I can fiddle with! I will track that and see!
***dje is now known as dje42
<wingo>ACTION tries to get parallel fibers + guile web server working
<wingo>oh it works, yaaay
<davexunit>oooh
<davexunit>exciting
<linas>wingo your hypothesis about racing seems completely correct
<davexunit>wingo always was a big mariokart fan
<linas>(p.s. love what I read about fibers)
<linas>so adding atomic counters to the malloc/free code completely eliminates the growth of the guile heap
<linas>the atomic counters apparently have a side-effect of stalling any malloc threads, until free catches up.
<wingo>linas: i was referring to a malloc/free race between mutator and finalizer; afaiu there was no race in bytes_until_gc, was there?
<wingo>or was there a race somewhere else?
<linas>no just as you said
<wingo>the race i was referring to would just mean that after a GC, you don't actually decrease heap size by much because there's finalizers, it takes two gc's to free the heap and by then maybe you have grown the heap again or something
<wingo>looking forward to your writeup tho :)
<wingo>ACTION has parallel fibers working with the web server, whee
<linas>I put one atomic counter in the malloc right before SCM_NEWSMOB
<linas>I put the other atomic counter right before the free, which runs in guiles/bdw-gcs finalizer
<davexunit>wingo: so that is using multiple pthreads?
<linas>i.e. in the free routine that is called by scm_set_smob_free
<wingo>linas: oh you mean a race in your code, between the code that creates the smob and the code that frees it?
<wingo>cool
<linas>"race in my code" is not how I would put it. I have multiple threads, many of them calling SCM_NEWSMOB rapidly
<wingo>davexunit: yes -- though there's a knob. by default it uses the main thread only for the handler that threads the server state, and uses other cores for reading the request and writing the response
<wingo>linas: so what counter did you change to be atomic?
<davexunit>wingo: cool
<linas>and then, asynchronously, guile calls the free routine I registered with scm_set_smob_free()
<linas>I just added the c++ std::atomic<unsigned long> so that I could count how many mallocs and frees there are
<linas>I believe c++ std::atomic<unsigned long> uses a mutex under the covers
<wingo>hm i don't think so, it uses atomic operations depending on your compilation target
<wingo>but yeah, it guarantees atomicity
<linas>yes.
<linas>Simply adding those counters has the side effect of preventing the growth of the guile heap. No other changes
<wingo>what
<wingo>i believe you but that is pretty weird :)
<linas>well, I am guessing that the atomic in the async-finalizer-free must somehow be making the malloc thread(s) stall
<linas>but yeah, that's weird, I'd have to do hand-waving about cache-line flushes to make it vaguely believable
<linas>(since atomics interact with the cpu cache lines, I believe, and maybe the malloc thread just keeps losing over and over while the free thread spins madly, freeing everything...!?)
<linas>now to see if the production system can be cured the same way...
<wingo>well inserting an atomic forces your whole program into a sequential-consistency mode around those atomic operations
<wingo>good evening civodul :)
<civodul>hey!
<sneek>Welcome back civodul, you have 1 message.
<sneek>civodul, davexunit says: I tried removing ld-wrapper from a build using package-with-explicit-inputs but the resulting shared library still has hardcoded reference to glibc libgcc. what's up with that?
<wingo>yarrr, threads
<wingo>gnarlies
<paroneayea>yarrr maties
<paroneayea>davexunit: btw, I'm wondering where I'm going to "store" the captured stacks from errors
<paroneayea>I initially figured, a parameter!
<paroneayea>but, maybe the REPL will be on another thread
<paroneayea>and I might want to still access the captured stacks there
<paroneayea>so maybe a "singleton" global is better?
<paroneayea>I'm not totally sure.
<paroneayea>I could also stick the errors on the hive, which might be a fun, different kind of hackiness.
<paroneayea>I kind of don't want to keep tacking slots forever onto the default hive though.
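A minimal sketch of the "singleton global" option paroneayea is weighing, with hypothetical names throughout (a real version would want a mutex around the table):

    ;; Global table of stacks captured at error time, reachable from
    ;; any thread -- including a REPL thread, unlike a parameter bound
    ;; only in the erroring actor's own dynamic extent.
    (define %captured-stacks (make-hash-table))

    (define (capture-error-stack! key stack)
      (hash-set! %captured-stacks key stack))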
<wingo>lloda`: was your lock a livelock or a deadlock?
<wingo>i found a livelock recently and will push a fix tomorrow