IRC channel logs

2014-05-07.log

<nalaginrut>morning guilers~
<ArneBab>moin nalaginrut
<nalaginrut>heya
<wingo>moin
<atheia>good day guilers!
<saul>ArneBab, in your SRFI for wisp, how does one quote a symbol?
<taylanub>uh, so now we'll have not only SRFI-49 and SRFI-110, but also a third?
<nalaginrut>LOL, years later, we don't call Scheme a Lisp dialect, there're a bunch of Scheme dialects called scheme-49, scheme-110, scheme-200
<taylanub>how can I pipe some bytes through a process? I thought the following would work but I get an empty string: (define pipe (open-pipe* OPEN_BOTH "cat")) (display "foo" pipe) (force-output pipe) (drain-input pipe)
<ArneBab>saul: simply with '<symbol>
<taylanub>oh, I suppose cat(1) might be doing its own buffering
<ArneBab>saul: all wisp does is infer parens from indentation. I don’t think of it as a new language but as a way to leave out the parens which you can already see from the indentation. In the same way, the inline-colon simply allows adding a paren which you do not have to close, because it automatically ends at the end of the line.
<ArneBab>taylanub: I started wisp, because the definition of SRFI-110 became much too complicated without actually fixing the problem I saw in SRFI-49 (just working around it).
<ArneBab>taylanub: I first contributed to SRFI-110, but they told me that they want to keep going down that road and actually added more syntax-elements while I contributed.
<ArneBab>(actually I contributed to project readable before it became SRFI-110)
*taylanub is still not having luck with piping. Tried to write huge amounts of data, as well as "sed -u" with GNU sed (which is supposed to do unbuffered IO), yet `drain-input' always returns the empty string.
<taylanub>`read-char' works fine, weird, maybe I don't get what drain-input is supposed to do or how one uses it
<ArneBab>this also works: (use-modules (ice-9 popen)) (define pipe (open-pipe* OPEN_BOTH "cat")) (display "foo\n" pipe) (force-output pipe) (read pipe)
<wingo>drain-input doesn't cause additional reads
<ArneBab>or try this: (use-modules (ice-9 popen)) (define pipe (open-pipe* OPEN_BOTH "cat")) (display "foo\n" pipe) (force-output pipe) (unread-string "bar" pipe) (drain-input pipe)
<wingo>it just reads out buffered data
<taylanub>indeed, I didn't carefully read the example given in the manual. my POSIX knowledge is lacking here: is it impossible to "read everything written so far" without blocking for input? because I see no way to do it (other than by e.g. relying on a special sequence in the transmission which acts as a packet-delimiter so to say)
<nalaginrut>hmm..keylogger?
<nalaginrut>you may try non-block with epoll
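
A rough sketch of one way to pick up whatever output is already available without blocking, simpler than epoll: poll the port with char-ready? and stop as soon as a read would block. The name read-available is made up for the example.

    (define (read-available port)
      ;; Collect characters only while char-ready? reports that a read
      ;; will not block; stop at EOF or when nothing more is ready.
      (let loop ((chars '()))
        (if (char-ready? port)
            (let ((c (read-char port)))
              (if (eof-object? c)
                  (list->string (reverse chars))
                  (loop (cons c chars))))
            (list->string (reverse chars)))))
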
<taylanub>another issue: how do I induce an EOF on a port? if I close it, I can't read anymore. all in all, I can't figure out how to open a pipe to a process, give it input, read its output, and have the process end
<nalaginrut>in non-block, you don't have to handle EOF
<wingo>taylanub: there are two fds -- one for reading and one for writing
<wingo>close the one you're done with
<taylanub>thanks, but apparently `open-pipe' returns a soft port (at least in the OPEN_BOTH case), which seems to be opaque and I can't get its associated pipe(s)? (if I got this right, there's two pipes, meaning four fds in total, for OPEN_BOTH)
<taylanub>ah, no, it's also just a pair of ports/fds
<taylanub>OK, looks like it's simply a limitation of (ice-9 popen) that one can't access the ports when using OPEN_BOTH, I'll see if I can patch it then, unless I'm missing something...
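
A rough sketch of wingo's point about the two file descriptors, built from the lower-level primitives rather than (ice-9 popen), so that the write side can be closed to give the child EOF. The name pipe-through is made up and error handling is omitted.

    (use-modules (ice-9 rdelim))

    (define (pipe-through command input)
      (let* ((to-child   (pipe))   ; parent writes, child reads
             (from-child (pipe))   ; child writes, parent reads
             (pid (primitive-fork)))
        (if (zero? pid)
            (begin                                   ; child: wire pipes to stdin/stdout
              (close-port (cdr to-child))
              (close-port (car from-child))
              (dup2 (port->fdes (car to-child)) 0)
              (dup2 (port->fdes (cdr from-child)) 1)
              (execlp command command))
            (begin                                   ; parent
              (close-port (car to-child))
              (close-port (cdr from-child))
              (display input (cdr to-child))
              (close-port (cdr to-child))            ; child now sees EOF
              (let ((output (read-delimited "" (car from-child))))
                (waitpid pid)
                output)))))

    ;; (pipe-through "cat" "hello\n") should evaluate to "hello\n"
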
<civodul>Hello Guilers!
<daviid>hello guilers
<daviid>is there currently a way to specialize goops accessor(s) methods, such things as [clos]: (defmethod (setf date-start) :after ((self project) date) ...) ? [not worried about the :after, but the (setf accessor) ...
<ijp>no
<ijp>oh, sorry, I saw ":after" and reacted immediately
<wingo>daviid: you can make setters but #:after and other method combinations don't work
<ijp>setters are fine
<daviid>wingo: I know about setters, that's what i do, but I would prefer to specialize the accessor when used through set!, is that possible?
<daviid>I'm looking for this [will paste in a sec]
<wingo>when i was saying "setter" i meant the thing that (set! (foo bar) baz) uses
<wingo>which can be a GOOPS method
<wingo>generic, anyway
<daviid> http://paste.lisp.org/display/142429
<daviid>wingo: how do i get access to (set! (foo bar) baz) ... so i can redefine the method ?
<wingo>(define-method ((setter foo) ...) ...
<wingo>)
<daviid>ok tx
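
Spelled out, a small sketch of what wingo suggests; the <project> class and slot name are just stand-ins for daviid's paste. A method defined on the accessor's setter generic is what (set! (date-start obj) value) dispatches to.

    (use-modules (oop goops))

    (define-class <project> ()
      (date-start #:init-value #f #:accessor date-start))

    ;; Replace the default setter method for <project> so extra work happens
    ;; whenever (set! (date-start p) date) is used.
    (define-method ((setter date-start) (self <project>) date)
      (slot-set! self 'date-start date)
      (format #t "date-start is now ~a~%" date))

    ;; (define p (make <project>))
    ;; (set! (date-start p) "2014-05-07")   ; runs the specialized setter
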
<lamefun>is there a read that annotates what it read with line/column?
<lamefun>read is kind of useless for storing data when it's impossible to report the locations of errors
<wingo>that is a problem of read, yes :)
<wingo>look in the manual for "source properties"
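
For reference, the "source properties" mechanism the manual describes boils down to something like this minimal sketch (the exact contents of the alist depend on the Guile version):

    (read-enable 'positions)                    ; have `read' record locations

    (define datum
      (call-with-input-string "(foo (bar))" read))

    (source-properties datum)
    ;; => something like ((line . 0) (column . 0) (filename . #f))
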
***artyom-poptsov is now known as avp
***avp is now known as artyom-poptsov
<civodul>hmm test-smob-mark segfaults on 2.0.11 --without-threads
<wingo>what libgc versoin?
<wingo>*version
<ksinkar>I have compiled guile from source. How do I get pkg-manager to identify that I have installed guile?
<wingo>ksinkar: do you mean pkg-config?
<wingo>i am guessing you installed guile in the default location, which is /usr/local
<wingo>if your pkg-config is from a distro, then it only looks in /usr for pkg-config files
<wingo>to add /usr/local/lib/pkgconfig to your search path, set PKG_CONFIG_PATH=/usr/local/lib/pkgconfig in your environment
<dsmith-w`>ksinkar: Or: What do you mean by "pkg-manager" ?
***dsmith-w` is now known as dsmith-work
<amgarching>Evening!
<amgarching> http://pastebin.com/nrSHKFW4
<amgarching>I don't get how buffering works for custom "binary" ports.
<amgarching> http://git.savannah.gnu.org/gitweb/?p=guile.git;a=blob;f=libguile/r6rs-ports.c;h=e8674299d5a6c089c374bbbb568f5df465ee8e3e;hb=f7cf5898d8f5ec774640e3e0888ec627ce4692be
<amgarching>the sources suggest:
<amgarching>/* Size of the buffer embedded in custom binary input ports. */
<amgarching>#define CBIP_BUFFER_SIZE 4096
<amgarching>that is 4k buffer. So why are the bytevectors passed to the port write! proc so small?
<amgarching>In real app I am broadcasting these bytevector sections among a group of processes.
<amgarching>It is very inefficient without buffering.
<amgarching>Doc link: https://www.gnu.org/software/guile/manual/html_node/R6RS-Binary-Output.html#R6RS-Binary-Output
<amgarching>Guile 2.0.5 Ubuntu 12.04
<civodul>amgarching: buffering for "cbips" didn't exist until 2.0.11
<amgarching>the sources I linked to are for 2.0.5
<civodul>that is, it was fully-buffered by default, with this 4K buffer
<civodul>yes
<civodul>so what you get is that internal 4K buffer
<amgarching>Then how come all of the bytevectors are 1-4 bytes?
<civodul>what do you mean?
<amgarching>The callback procedure (write! bv start len) is supposed to transfer a section of length len of bv starting at start. Neither len nor (bytevector-length bv) is close to 4k. Even for larger writes.
<amgarching>pastebin shows n=3, size=3, that is len=3 and a bytevector length of 3 or an even smaller number
<amgarching>write! is the input to make-custom-binary-output-port and is supposed to do the actual transfers
<amgarching>In my pastebin it corresponds to bcast!
<amgarching>This proc never touches a BV with a size even close to 4k
<civodul>lemme see
<amgarching>I wonder who sets the "size" in calls to cbop_write (SCM port, const void *data, size_t size)
<civodul>amgarching: that's a different problem, that's because (write (iota 100) port) ends up writing one number (ie. a few bytes) at a time
<civodul>if you do (put-bytevector port bv), you'll notice that BV gets there 4K at a time
<amgarching>Hm, in my case I see all of 10M passed through: (put-bytevector port (make-bytevector 10000000))
<civodul>right, that's possible too
<civodul>even better, no? :-)
<amgarching>Well, the receiver side has 4k recvbuf anyway, I think
<civodul>oh, and CBIP_BUFFER_SIZE is for *input* ports, and here you're talking about an output port
<amgarching>yes, you are right.
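
The behaviour discussed above can be seen with a tiny experiment (a sketch, not from the channel; the exact chunk sizes may differ between Guile versions):

    (use-modules (ice-9 binary-ports) (rnrs bytevectors))

    (define port
      (make-custom-binary-output-port
       "demo"
       (lambda (bv start count)                      ; the write! callback
         (format #t "write! got ~a bytes~%" count)
         count)                                      ; claim everything was written
       #f #f #f))

    (write (iota 100) port)                          ; many small write! calls
    (put-bytevector port (make-bytevector 10000 0))  ; arrives in big chunks
    (force-output port)
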
<amgarching>I should be doing (write (string->utf8 (with-output-to-string (lambda () (write DATA)...) to work around that? Copy after copy!
<amgarching>ah, not quite
<civodul>depends on what you're doing
<civodul>maybe you could just as well use make-output-bytevector-port
<amgarching>yes, the receiver side has constant-length BV of 4096
<amgarching>"bytevector-port"? I didnt see them yet. Let me check. http://www.gnu.org/software/guile/manual/html_node/Port-Types.html#Port-Types
<amgarching>you mean open-bytevector-output-port, i think.
<amgarching>From the "first principles" accumulating potentially unlimited data in memory is dangerous. But let me try.
<amgarching>these open-bytevector-input/output-port are only good to serialize/deserialize the data. Not sufficent to fake a "Schemish" communication channel.
<civodul>yeah, useful in some cases, harmful in others ;-)
<amgarching>Hm. (put-bytevector port (with-output-to-bytevector (write SEND))) and plain old (read port) to receive is somewhat asymmetric
<amgarching>but works. Is it supposed to? Given all the encoding issues and such.
<civodul>amgarching: if you're doing textual i/o, you should simply use 'write' and such
<civodul>write, read, string ports, etc.
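
A sketch of the round trip amgarching describes, spelled with with-output-to-string plus string->utf8 rather than a with-output-to-bytevector helper. Here `port' stands for the custom binary port from the paste and send-datum is a made-up name; the receiving end can use plain `read' provided its port encoding is UTF-8.

    (use-modules (ice-9 binary-ports) (rnrs bytevectors))

    (define (send-datum port datum)
      ;; Serialize with `write' into a string, encode as UTF-8, ship as bytes.
      (put-bytevector port
                      (string->utf8
                       (with-output-to-string
                         (lambda () (write datum))))))

    ;; Receiver side, assuming `in' is the read end:
    ;; (set-port-encoding! in "UTF-8")
    ;; (read in)
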
<amgarching>you just noticed that (write (iota 100) cbop-port) does "syscalls" on a custom binary output port for every 1-4 bytes. I don't see a way to control that.
<civodul>on file output ports, one can use 'setvbuf' to control that
<civodul>but not on CBOPs
<amgarching>yea I tried that, not in 2.0.5
<civodul>not what?
<amgarching>tried setvbuf in 2.0.5, does not work. You are right.
<civodul>ah, right
<amgarching>It also seems that I need to do an "allreduce" over all processes in order to find a minimum and use that as the transaction length. If there are no guarantees about the buffer sizes on either side, there seems to be no way around it. A collective MPI_Allreduce (MPI_IN_PLACE, &transaction_length, 1, MPI_SIZE_T, MPI_MIN, comm) is an expensive operation.
<civodul>you're doing MPI bindings?
<amgarching>no, that would be too much. I want a port to write to to broadcast scheme objects.
<amgarching>I've almost got it with your help. Thanks!
<civodul>you're welcome
<amgarching>Any other ideas how to broadcast an s-expression that can be read from a named FIFO only by one of the processes, but is needed as input by all of them? It already looks like over-engineering.
<amgarching>Files are simple, but watching for them to appear and change contents is no fun.
<civodul>yeah
<civodul>you could use multicast, maybe?
<civodul>oh, are these local processes?
<amgarching>may run on different hosts too.
<amgarching>NFS is there, but it is buggy as hell.
<civodul>on the producer side, you could indeed use a CBOP, but perhaps add your own buffering layer
<civodul>and then, that CBOP would write that to all the sockets/ports corresponding to client connections
<civodul>something like that
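
A rough sketch of the buffering layer civodul suggests: a custom binary output port that copies incoming bytes into its own 4K buffer and, when the buffer fills (or the port is closed), forwards the block to every client port. The `clients' list of already-connected output ports is assumed; flushing on force-output is left out for brevity.

    (use-modules (ice-9 binary-ports) (rnrs bytevectors))

    (define (make-broadcast-port clients)
      (let* ((buf-size 4096)
             (buf (make-bytevector buf-size))
             (fill 0))
        (define (flush!)
          (when (> fill 0)
            (let ((chunk (make-bytevector fill)))
              (bytevector-copy! buf 0 chunk 0 fill)
              (for-each (lambda (p) (put-bytevector p chunk)) clients)
              (set! fill 0))))
        (define (write! bv start count)
          ;; Copy as much as fits into our buffer and report how much we took;
          ;; the port machinery calls write! again with the rest.
          (let ((n (min count (- buf-size fill))))
            (bytevector-copy! bv start buf fill n)
            (set! fill (+ fill n))
            (when (= fill buf-size) (flush!))
            n))
        (make-custom-binary-output-port
         "broadcast" write! #f #f
         (lambda ()                                   ; close: flush the tail
           (flush!)
           (for-each force-output clients)))))
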
<dsmith-work>amgarching: And by watching files, do you mean inotify?
<amgarching>Ok, that is basically how MPI works. Would not want to add my own layer.
<amgarching>dsmith-work: I've only heard of them: {e,k," "}poll, *notify. Never used any of them
<dsmith-work>I understand inotify is supposed to be fairly efficient, though not very portable.
<dsmith-work>And not used it myself...
<dsmith-work>someone did a guile-inotify a while back
<dsmith-work>Some kind of multicast sounds right to me for multiple hosts.
<dsmith-work>But that's UDP of course, so has limitations.