<daviid>paroneayea: oh! a typo would generally raise an exception, no? how come it 'just' slowed things down, just curious
<paroneayea>daviid: well, typo in that I had done a refactoring and
<paroneayea>I found it out by putting a (pk 'hammertime) in the place it should have stopped
<daviid>ah, it happens! oh man, good you found it
<paroneayea>well I still went through a ton of Concurrent ML literature, enough to add the GC of waiters to conditions, which I guess is still useful even though it wasn't the cause
<daviid>paroneayea: very good news, because that's what fibers claims it is ...
<paroneayea>and enough to really understand the heart of the beast a lot better
<paroneayea>I guess that wasn't for nothing, if I learned a lot :)
<paroneayea>I still have to finish the port of 8sync on top of fibers tho
<daviid>I wish I had more time to learn about concurrency, one day maybe
<daviid>paroneayea: pk is a good friend :)
<daviid>in a manual, saying 'Returns multiple values: ...' then I wonder if numbering would be '(1)', '1-', or '1.' ...; '(2)' ... how is that for a Friday night existential quiz? :)
***vicenteH` is now known as vicenteH
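For context, pk ("peek") is Guile's built-in debug-print helper: it prints its arguments to the current output port and returns the last one, so it can be dropped into a code path to check whether execution ever reaches that point, as in the (pk 'hammertime) trick above. A minimal sketch of the idiom; handle-message and process are hypothetical names, not part of 8sync:

  ;; Sketch only: wrap pk around an expression to trace it without
  ;; changing the expression's value.
  (define (handle-message actor message)
    (pk 'hammertime)                       ; prints ";;; (hammertime)" if this path runs
    (pk 'got-result (process message)))    ; prints the result and still returns it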
<paroneayea>so, message passing is slower in 8sync with fibers, but
<paroneayea>4.032482s real time, 4.572554s run time. 1.203915s spent in GC.
<paroneayea>7.453797s real time, 8.658398s run time. 2.784487s spent in GC.
<paroneayea>but! we can also take advantage of multiple cores now, which is nice
<paroneayea>that was just with a single actor; I haven't tried the difference if there were multiple actors dispatching to each other
<paroneayea>I suspect 8sync-on-fibers may be nicer there, again, because of multiple cores
<cmaloney>is it pretty much double the time, or does it even out at larger loads?
<paroneayea>I assume it's about double, but only for the message passing itself
<paroneayea>presumably most of the computation is the stuff in-between the messages passed
<paroneayea>I think 125k messages / second is still pretty ok for a single core on a decade-old machine though!
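The actual 8sync benchmark isn't shown in the log; a rough sketch of the kind of channel-based message-passing micro-benchmark these numbers imply, using the guile-fibers API (bench and the message count are illustrative names/values, not from the log):

  (use-modules (fibers)
               (fibers channels))

  ;; Send n messages from a producer fiber to the main fiber over one
  ;; channel, to get a messages-per-second figure for plain fibers.
  (define (bench n)
    (run-fibers
     (lambda ()
       (let ((ch (make-channel)))
         ;; producer: push n messages
         (spawn-fiber
          (lambda ()
            (let loop ((i 0))
              (when (< i n)
                (put-message ch i)
                (loop (+ i 1))))))
         ;; consumer: drain n messages
         (let loop ((i 0))
           (when (< i n)
             (get-message ch)
             (loop (+ i 1))))))))

  ;; At the REPL, time it with the ,time meta-command, e.g.:
  ;;   ,time (bench 500000)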