IRC channel logs



<old>mirai: Probably god
<old>joke aside, I don't know. I have patches too and it has been quiet since february
<gnucode>hey friends, I am trying to install haunt on the Hurd.
<gnucode>It successfully installs, but the configure script of haunt can't seem to find guile-commonmark, which is installed...
<flatwhatson>gnucode: if you run a guile repl and ,use(commonmark), does it work?
<gnucode>flatwhatson: actually no...
<flatwhatson>if you're on guix, you need "guix shell guile guile-commonmark" to get it added to load-path properly
<gnucode>but I do have a directory /usr/local/lib/guile/3.0/site-cache/commonmark
<gnucode>flatwhatson: I am using Debian GNU/Hurd on real hardware.
<gnucode>I'm not certain if I have enough hard drive space to install guix on debian.
<gnucode>I only have 40 GB.
<gnucode>but I could try.
<flatwhatson>that would be overkill for this problem, seems like it's just a load-path issue
<flatwhatson>check %load-path, i'd guess "/usr/local/lib/..." isn't there
<gnucode>flatwhatson: you are correct.
<gnucode>haunt's configure script says that it cannot find commonmark
<gnucode>I wonder if I should try to do something like:
<gnucode>./configure --libs=/usr/local/lib
<gnucode>I'm not certain what the magical incantation is.
<flatwhatson>i'm not sure if you can set the load-path from configure flags. exporting GUILE_LOAD_PATH should work.
<flatwhatson>the other option would be to build guile-commonmark with --prefix=/usr so it installs into the built-in load-path
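A sketch of the load-path fix flatwhatson describes; the `/usr/local` site directory shown is an assumption, so check `%load-path` on your own system first:

```scheme
;; Print what Guile actually searches:
(display %load-path) (newline)

;; Make modules installed under /usr/local visible for this session.
;; The exact directory varies by install prefix; adjust to match yours.
(add-to-load-path "/usr/local/share/guile/site/3.0")

;; Now the module should resolve, if guile-commonmark lives there:
;; (use-modules (commonmark))
```

For a configure script, the equivalent is exporting `GUILE_LOAD_PATH=/usr/local/share/guile/site/3.0` in the environment before running it, as suggested above.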
<RhodiumToad>grumble. open-pipe is broken on freebsd it seems
<RhodiumToad>on 3.0.9
<RhodiumToad>is it working on other platforms?
<Zelphir>Good morning/day/evening!
<RhodiumToad>good morning
<RhodiumToad>clearly nobody tested this shit.
<jpoiret>RhodiumToad: it is fixed on current master
<RhodiumToad>and is a release imminent?
<jpoiret>i have no idea
<jpoiret>let me say instead that it *should* be fixed on current master
<jpoiret>feel free to test it out
<RhodiumToad>is there a reason why guile-3.0's .pc file lists -lgc-threaded in Libs: but -lffi and others in Libs.private: ?
<lloda>echo '(pk (call-with-output-string (lambda (o) (format o "~a ~a αβ" (fluid-ref %default-port-encoding) (port-encoding o)))))' > utf.scm
<lloda>> GUILE_INSTALL_LOCALE=0 $GUILE utf.scm
<lloda>("ANSI_X3.4-1968 UTF-8 \u03b1\u03b2")
<lloda>well i was puzzled why that printed that way :-\
<lloda>i guess it's normal and the string itself has the αβ, it's just how it prints to the terminal
<RhodiumToad>;;; ("US-ASCII UTF-8 ??")
<RhodiumToad>oh, were your ?? actually some other character?
<lloda>yeah i get the \u03b1...
<RhodiumToad>grr, I broke my irc utf8 support at some point
<Zelphir>Always a bit tricky with that "is it the internal representation that is wrong, or is it how it is displayed" kind of thing.
<lloda>been trying to stamp the ascii out of my system and it's proven hard
<chrislck>sneek: botsnack
<apteryx>mirai: technically, the people listed as active members can commit changes to Guile
<apteryx>so you could pester any of them
<apteryx>pester meaning gently ping
<apteryx>is there a way to print all my recent history at the Guile REPL?
<apteryx>to paste in a script elsewhere, say
<flatwhatson>apteryx: you could do something hacky with write-history from (ice-9 readline)
<apteryx>OK! I was wondering if I missed something readily available
<apteryx>,history 100
<civodul>you could check ~/.guile_history
<civodul>but apparently that's all we have?
<flatwhatson>you could write-history to a temp file, then read the last N lines from it 8)
<flatwhatson>.guile_history is only written on repl exit
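flatwhatson's `write-history` idea, sketched; the temp-file path and the assumption that the REPL session has `(ice-9 readline)` loaded are mine:

```scheme
(use-modules (ice-9 readline))

;; Dump the in-memory readline history to a file without exiting
;; the REPL; its last N lines can then be inspected or copied.
(write-history "/tmp/repl-history")
```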
<apteryx>which module provides call-with-input-file?
<apteryx>the manual doesn't make this easy to find, unless I'm missing something
<apteryx>I see a reference to it under the rnrs io simple section
<apteryx>OK yes, it's from (rnrs io simple)
<mwette>I think I found it: (rnrs io simple)
<apteryx>yes, it was a bit difficult to find
<apteryx>the first index described it in some other section of the manual
<apteryx>what is the exception type of a flock raised exception like: ?
<apteryx>basically I'd like to catch as narrow as I can in:
<RhodiumToad>if you're using catch, then catch 'system-error and look for the errno value in the data
<RhodiumToad>if you're using guard, you can be a bit more specific
<apteryx>shouldn't just letting the exception bubble up to my REPL session show all I need?
<RhodiumToad>the repl doesn't print all the fields of the exception
<RhodiumToad>(with-exception-handler pk (lambda () ...)) is one way to print more informative info
<RhodiumToad>;;; (#<&compound-exception components: (#<&external-error> #<&origin origin: "flock"> #<&message message: "~A"> #<&irritants irritants: ("Resource temporarily unavailable")> #<&exception-with-kind-and-args kind: system-error args: ("flock" "~A" ("Resource temporarily unavailable") (35))>)>)
<apteryx>ah, that's a useful trick
<apteryx>I wish the REPL would do this out of the box
<RhodiumToad>&exception-with-kind-and-args is I believe what catch matches on
<RhodiumToad>guard can match on any of the fields
<Zelphir>While we are at exceptions: What is the recommended way to catch an exception and build a new compound exception from it, with additional info and possibly replacing fields like the message field?
<apteryx>Zelphir: I find this a good read:
<apteryx>not sure if it addresses your question directly, I think it might
<RhodiumToad>with-exception-handler is probably the best bet if you're going to modify and rethrow an exception
<RhodiumToad>guard is convenient if you want to unwind the stack based on arbitrary criteria
<RhodiumToad>apteryx: (guard (ex (#t ex)) ...) is another way to make exceptions visible in the repl; that will actually return the exception object as the result
<RhodiumToad>(and that's a syntax form, so no need for a (lambda () ...))
<RhodiumToad>needs (ice-9 exceptions) for that one
<Zelphir>Thanks! Will look into that!
<Zelphir>When would you generally say that one wants to unwind the stack? As far as I understand, that loses the context in which an exception was raised. Perhaps sometimes one does not need that context, and then it is better to discard it?
<RhodiumToad>ultimately you always have to unwind it somehow except when the thing that raised the original exception specified it was continuable
<RhodiumToad>the question is what do you want to do before then
<apteryx>RhodiumToad: cool. Would it be reasonable for the REPL to handle exceptions that way out of the box?
<apteryx>to give more insights about the exception thrown
<RhodiumToad>meh, probably not
<apteryx>I'm curious as to why :-)
<apteryx>another question: how do I convert a stat:mtime (epoch seconds) to a srfi-19 date object? I want to subtract the current time from mtime to get the elapsed time since the file was modified.
<RhodiumToad>it's useful if you want to see the exception fields, less useful if you want to actually diagnose the error
<apteryx>OK. I wonder if we could have both. In a Python REPL I get a full backtrace *and* the exact exception type, which is very convenient.
<apteryx>it's a small thing that reduces the friction to using exceptions
<RhodiumToad>the repl could print out more detail, or there may already be a way to print it, I don't know
<apteryx>ah, srfi-19:time-difference is what I was after, I think
<RhodiumToad>(make-time time-utc ns sec) seems to be the thing
<apteryx>RhodiumToad: that'd be nice
<apteryx>time-difference wants the same time type, so I have to convert my plain epoch seconds into a time object with make-time
<RhodiumToad>why not get the current time as an epoch time, then?
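The srfi-19 pieces mentioned above, put together; the helper name and the file path are only examples:

```scheme
(use-modules (srfi srfi-19))

;; Seconds elapsed since a file was last modified. stat:mtime gives
;; epoch seconds; wrapping them in a time-utc object lets
;; time-difference compare like with like.
(define (seconds-since-mtime file)
  (let* ((mtime (make-time time-utc 0 (stat:mtime (stat file))))
         (now   (current-time time-utc)))
    (time-second (time-difference now mtime))))

(seconds-since-mtime "/etc/passwd")   ; a non-negative integer
```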
<mwette>apteryx: Thanks for the link to Vijay's post !
<apteryx>it's a good one! thanks to Vijay for authoring it!
<RhodiumToad>for guile-libarchive, I used continuable exceptions to represent libarchive's numerous warning conditions
<apteryx>is it possible to steal an advisory lock (flock)
<Arsen>in what sense
<apteryx>if a process holds an flock for more than X seconds, I'd like a 2nd process to be able to become the owner of the lock (steal it)
<Arsen>no, to my awareness
<RhodiumToad>you'd need some mechanism to inform the first process that it no longer holds the lock
<apteryx>I thought calling unlock from another process may be a way
<apteryx>(flock port (logior LOCK_UN LOCK_NB))
<apteryx>and then lock again with (flock port (logior LOCK_EX LOCK_NB)) to steal
<RhodiumToad>only the holder of the lock can unlock it
<Arsen>and only through the appropriate file description
<apteryx>is the lock released when the process dies?
<apteryx>OK; that's a good thing at least
<Arsen>when the file description holding it is released, so is the lock
<apteryx>I was worried to get stuck with stale locks
<apteryx>in case the process holding it got aborted for some reason and couldn't unlock it
<apteryx>does guile garbage-collect file descriptors of dead processes?
<apteryx>or do I have to take extra care with dynamic-wind to ensure the file descriptor gets closed when I'm done
<Arsen>I don't know if the guile GC closes ports but I am certain that a dead process cannot hold a file description open
<Arsen>so, if you fd = open ("foo", O_RDONLY); flock (fd, FL_EXCL); abort (); in C, the lock is released
<Arsen>oh, it's LOCK_EX
<apteryx>well that greatly simplifies what I was attempting to do
<apteryx>so I have to keep the fd open for the duration I need to lock the corresponding file?
<apteryx>or is just calling flock LOCK_EX once on it in the lifetime of the process enough?
<apteryx>as in (call-with-input-file file (cut flock <> (logior LOCK_EX LOCK_NB)))
<Arsen>if you close the file description, the lock is released
<Arsen>IME, it's usually best to tie lifetime and locking semantics together
<apteryx>OK; so the lock lives with the file description (descriptor?)
<Arsen>this way, the likelihood of error is lowest
<Arsen>a file descriptor is a reference to a file description
<Arsen>it counts towards a file description's refcount
<Arsen>dup generates another file descript*or* but not descript*ion*
<apteryx>I see!
<Arsen>so yes i'd keep it alive for the duration of your work and then close it later personally
<Arsen>(rather than opening twice or such)
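A hypothetical helper along the lines Arsen suggests, tying the lock's lifetime to the port; the name echoes the call-with-advisory-lock that apteryx's script uses, but this sketch is mine:

```scheme
;; Hypothetical helper: hold an exclusive advisory lock on FILE while
;; THUNK runs. Closing the port in the unwind step releases the lock,
;; even if THUNK throws.
(define (call-with-advisory-lock file thunk)
  (let ((port (open-file file "w")))
    (dynamic-wind
      (lambda () (flock port (logior LOCK_EX LOCK_NB)))
      thunk
      (lambda () (close-port port)))))

(call-with-advisory-lock "/tmp/demo.lock"
  (lambda () (display "holding the lock\n")))
```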
<apteryx>perhaps some context manager like:
<apteryx>hm, I'm missing the interesting bit
<Arsen>yes, that seems reasonable, though I believe there's already a context manager that closes a port, no?
<Arsen>ACTION has not played with files in a while... or in large quantities... in guile
<Arsen>seems so
<apteryx>with the 'lock' argument passed to the lambda proc, but you get the idea
<apteryx>seems to work
<apteryx>strange; the thunk gets to run even though the exception is raised on the line above by flock
<apteryx>the whole mcron job/script looks like this so far:
<apteryx>annotated with output (bottom):
<apteryx>the whole thunk apparently ran; while the exception should have aborted it earlier
<apteryx>perhaps because everything happens at the top level?
<apteryx>still, it shouldn't happen, if I understand correctly
<apteryx>ACTION devises simple reproducer
<RhodiumToad>what's happening that you think shouldn't happen?
<apteryx>the flock call throwing an error should prevent the thunk from running in the call-with-advisory-lock manager
<RhodiumToad>yes, and you know that the error is thrown on that call how?
<apteryx>I'm catching it and printing "btrfs-send job already running"
<apteryx>the catch is very (overly?) precise
<apteryx>in the guard
<apteryx>and there's a single call to flock in the script
<RhodiumToad>and all the output is from the same run of the code?
<apteryx>can't reproduce from this simple script though:
<apteryx>something is wrong in the larger script or when running through mcron
<apteryx>ah no, can't be mcron, I'm testing it directly with guile
<apteryx>oh, 'touch' is not defined
<apteryx>but the file already exists so it doesn't run
<apteryx>I passed the thunk as a call with the parens
<apteryx>so it's called before ^^'
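The slip described above, in miniature; `with-lock` is a stand-in name:

```scheme
(define (with-lock thunk) (thunk))               ; stand-in manager

(with-lock (lambda () (display "ok\n")))         ; correct: the thunk is passed
;; (with-lock ((lambda () (display "ok\n"))))    ; bug: the extra parens run
;;                                               ; the body before with-lock does
```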
<RhodiumToad>ah indeed
<apteryx>couldn't see it, apologies
<apteryx>looks better now
<apteryx>now to send the actual btrfs snapshot to the remote host
<apteryx>it'll be fun to see how guile-ssh handles a transfer of 8 TiB