IRC channel logs



<ozzloy>is there guix on mac os x or ms windows?
<ozzloy>maybe via cygwin?
<ozzloy>another error "which are comparable to quasiquote, unquote, and unquote-splicing, respectivel (see" on should say "respectively"
<ozzloy>lol @ vi package example
***exio is now known as exio4
<ozzloy>sneek: later tell civodul: "which are comparable to quasiquote, unquote, and unquote-splicing, respectivel (see" on should say "respectively".
<ozzloy>sneek: later tell civodul: "In this example, the resulting /gnu/store/… file will references coreutils, grep, and sed" should say "will reference" on
<sneek>Got it.
<ozzloy>sneek: help
<ozzloy>sneek: later tell civodul: "this is useful when disambiguation among several same-named packages" should say "disambiguating between"
<sneek>Got it.
<ozzloy>sneek, later tell civodul: " Thus, if a build fail, its build tree is kept under /tmp" should be "Thus, if a build fails"
<sneek>Will do.
<civodul>Hello Guix!
<sneek>Welcome back civodul, you have 4 messages.
<sneek>civodul, ozzloy says: "which are comparable to quasiquote, unquote, and unquote-splicing, respectivel (see" on should say "respectively".
<sneek>civodul, ozzloy says: "In this example, the resulting /gnu/store/… file will references coreutils, grep, and sed" should say "will reference" on
<sneek>civodul, ozzloy says: "this is useful when disambiguation among several same-named packages" should say "disambiguating between"
<sneek>civodul, ozzloy says: " Thus, if a build fail, its build tree is kept under /tmp" should be "Thus, if a build fails"
<civodul>ozzloy: thanks, i've fixed them locally and will push!
<roelj>Is there a way to fetch a tarball from an FTP server to which one has to log in?
<roelj>(in a package definition)
<roelj>Because now I get: ERROR: Throw to key `ftp-error' with args `(#<input-output: socket 6> "PASS" 530 "Not logged in.\\r")'
<civodul>it should be possible
<civodul>take a look at (guix ftp-client)
<roelj>The ftp-client does not seem to allow specifying a username and a password using ftp://<user>:<pass>@<server>/<directory>
<civodul>currently it systematically logs in as "anonymous", in 'ftp-open'
<civodul>we should add user/password parameters to 'ftp-open'
<roelj>I wish %ftp-login was configurable from a package definition
<roelj>I'll try to add them and see how that goes
<civodul>and then (guix build download) could take into account the 'uri-userinfo' part
<roelj>Hmm, I don't get what goes on there
<roelj>How does it parse a URI?
<civodul>there are two things to do: (1) add user/password parameters to 'ftp-open', and (2) adjust the URI parsing code in (guix build download) so that it honors ftp://user:pass@server forms
<civodul>actual URI parsing is done by 'string->uri' from (web uri)
<civodul>scheme@(guile-user)> (uri-userinfo (string->uri "ftp://foo:bar@baz/"))
<civodul>$3 = "foo:bar"
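[editor's note] The two steps civodul lists (adding user/password parameters to 'ftp-open', and honoring the userinfo part in (guix build download)) could start from a sketch like the following; 'userinfo->credentials' is a made-up helper name, not part of Guix:

```scheme
;; Sketch only: split the "user:pass" userinfo of an FTP URL into
;; credentials, falling back to the anonymous login that 'ftp-open'
;; hard-codes at this point.
(use-modules (web uri))

(define (userinfo->credentials userinfo)
  (if userinfo
      (let ((colon (string-index userinfo #\:)))
        (if colon
            (values (substring userinfo 0 colon)
                    (substring userinfo (+ colon 1)))
            (values userinfo "")))
      (values "anonymous" "guest@example.com")))

;; (userinfo->credentials
;;  (uri-userinfo (string->uri "ftp://foo:bar@baz/")))
;; returns two values: "foo" and "bar"
```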
<kensington>i'm having some trouble with elogind - loginctl is not showing any sessions at all
<kensington>i guess i'm missing some pam setup, but not sure what
<civodul>kensington: are you using %desktop-services?
<kensington>civodul: ah, this is not inside guix, so i suppose not
<rekado_>roelj: I ran into the same problem once, so I just rehosted the tarballs.
<civodul>kensington: i assumed you're using GuixSD; is it the case?
<kensington>civodul: nothing guix at all i'm afraid; just here per elogind's readme suggestion
<civodul>aah, ok
<civodul>kensington: you need to make sure to use pam_elogind as well
<civodul>see for an explanation
<kensington>civodul: i did add pam_elogind to my pam config where pam_systemd would normally appear, but no dice
***Digitteknohippie is now known as Digit
<janneke>it says in the manual that starting guix-publish-service is a one-liner
<ng0>upstream commit 60a8e2167d92afb0376d248cdea95ce3d5b228cf fixed the license in perlpsyc after feedback, however i don't know how to label this change: I will exclude contrib as the author recommended to me, and psycion is now gpl2+/artistic, while the rest is gpl2/artistic.
<janneke>is running `guix archive --generate-key' manually a prerequisite, or should that have happened after a guix system init?
<ng0>license gpl2 artistic gpl2 ?
<ng0>and another commit.. let's see what's fixed now
<ng0>and now contrib is also clear.
<ng0>i think with public domain + gpl2 + artistic + gpl2 i can work
<rekado_>ng0: is this artistic or artistic 2?
<rekado_>the original artistic license is unclear and doesn't count as a free software license.
<kensington>i'm pretty sure pam_elogind is not getting loaded at all because i can't find any of the relevant debug stuff in the syslog
<rekado_>there's the clarified artistic license as well, which contains only minor changes to artistic1 that make it a free software license.
<rekado_>ng0: usually we just list the licenses and add a comment to explain what the list means.
<civodul>janneke: yeah, generating the key pair is a prerequisite; the manual lies ;-)
<ng0>uh... let me ask lynX again.
<civodul>rekado_: BTW, did you write a summary of the Guix workshop you held, or something like that?
<rekado_>I've just written course notes
<rekado_>but they might be a little too specific to our installation here.
<ng0>i *guess* perl following license.
<ng0>so whatever perl is using now
<ng0>but i'll get a reply soon
<rekado_>civodul: what audience do you have in mind?
<ng0>4 line pastes incoming. those show that it's the same as perl:
<ng0>Copyright (c) 1998-2005 Arne Gödeke and Carlo v. Loesch.
<ng0>All rights reserved.
<ng0>This program is free software; you can redistribute it and/or modify it
<ng0>under the same terms as Perl itself.
<ng0>rekado_: ^
<janneke>civodul: tnx
<civodul>rekado_: me? :-) or other interested Guix
<ng0>am I promoting the use of non-free software when I state that one binary needs another binary which is shareware from 1996, and that, until it is ported to another API, it is up to the user to decide?
<ng0>specifically, it works, but you need to decide if you want functionality, which you will discover once you try to run it.
<ng0>I won't link to the website either or the new source
<ng0>the website disappeared from the web though, so it's hard to get it unless you search it
<ecraven>well, it's non-free and you are promoting it... :p
<ng0>not really... ";; psycmp3 so far relies on rxaudio, a binary shareware from ~1996" is the only quote. one out of 12 binaries, the perl software which needs this itself is free software.
<ng0>i don't tell anyone who reads the source to go ahead and get the binary
<ecraven>is rxaudio necessary or optional?
<ng0>all dependencies are optional at build time, only required afterwards, and the binaries will complain themselves if it's not in $PATH
<ecraven>so they are required when actually running, but not for building?
<ng0>which is why I package Curses for example
<ng0>the perl libs can be used without the optional dependencies
<ng0>just the binaries are requiring them
<ecraven>so in fact you cannot run that thing without getting rxaudio?
<ng0>now you put it like this, yes you can't, but when you run it it only complains about MP3::List, not rxaudio
<ecraven>which is kind of misleading :)
<ecraven>as mp3::list is there, it's just missing one of its dependencies (namely rxaudio), right?
<ng0>so I leave psycmp3 out of the binaries to install.
<ecraven>ah, I think I'm starting to understand, the package that contains mp3::list doesn't need rxaudio in general, but only for some parts, and the thing you talk about needs one of these parts?
<ng0>i guess?
<ng0>one moment
<ecraven>hm.. I'd suggest maybe installing the binary, but adding a note somewhere that you need rxaudio to actually use psycmp3?
<ng0>yeah which leads me exactly to the question, can I do that? whoever will use it (a very very small percentage of people I guess, as psycmp3 isn't the focus of the package) will see it in the source code
<ecraven>ng0: I'd tend toward adding a note, but I'd wait for someone more knowledgeable than me to answer you :)
<ng0>software is free (gpl2/perl package license), only quotes itself that it needs MP3::List, but to actually function you need to either port to a free API or get rxaudio
<ng0>which is not stated by the software itself
<ng0>but let me get an answer from the developer
<civodul>rekado_: FYI: java-swt-4.5: URI not reachable: 404 ("Not Found")
<civodul>and it's not on either
<ng0>ecraven / anyone: can you clone git:// and read the bin/psycmp3 header yourself and decide?
<jlicht>ng0: Having some issues cloning that repo; access denied or repo not exported
<ng0>also another question.. as the dependencies are optional, do I have to define them in the package nevertheless or can they be installed afterwards?
<ng0>hm one sec
<ng0>origin git:// (fetch)
<ng0>ah, one git. forgotten
<ng0>lynX said at some point in the chats that i should just ignore psycmp3. but i'd like to know if it can be integrated somehow so that if at some point the "API rewritten" commit message pops up I just have to fix a note.
<ng0>my motivation for this is to just have a native psyc client instead of telnet.
<ng0>which i don't need psycmp3 for
<ng0>this is my new note, should it apply:
<ng0>;; psycmp3 so far relies on but does not promote the use of rxaudio,
<ng0>;; a hard to find binary with shareware license from 1996.
<rekado_>civodul: hmm, do they not keep archives?
<rekado_>seems that the latest version is 4.5.2
<rekado_>I'll fix this some time this week.
<rekado_>civodul: I will take the chance to address your comments on the swt patch that I got only after pushing to master.
<rekado_>Is anyone here working on a more recent version of IPython?
<rekado_>ng0: how can it rely on but not promote non-free software?
<ng0>idk, i'm not good with licenses. if it's problematic i just exclude it.
<rekado_>I have a couple of old patches for IPython 4.0.0, which is probably also outdated by now.
<rekado_>not as old as our current 3.2.1, though.
<civodul>rekado_: sure, np!
<civodul>(did i comment on swt?)
<rekado_>civodul: yes, you commented on the patch I submitted to use different source tarballs for swt dependent on architecture. But at that time I had already pushed it after it had been reviewed by someone else.
<jlicht>ng0: I'd say this would promote non-free sw as it is, but that's just my 2¢
<ng0>so 1 unsure, 1 yes it does and myself I am unsure as well, author said ignore psycmp3.
<ng0>that equals to me ignoring it and leaving it out for now
<rekado_>when the software cannot be used without non-free software I would consider this "promoting the use of non-free software"
<rekado_>if it can be removed and the software would still be useful then it should be removed
<rekado_>we do this for "shogun", a library recommending the use of a non-free library.
<ng0>yes, it can.
<rekado_>we patch out all support for the non-free library there
<rekado_>ng0: so, it can be removed and the software would still work without the use of non-free software?
<ng0>the lib/perl5/ part of it can stay I guess as it's only the binary which does require it.
<rekado_>ACTION goes afk for a while
<ng0>it's a package with 12 perl binaries and its optional contrib, lib, etc
<ng0>psycmp3 being only one of them
<ng0>;; XXX: psycmp3 needs rxaudio. Integrate once it uses different API
<civodul>rekado_: ah yes, i remember now :-)
<kensington>i have elogind listing sessions correctly now - is loginctl lock/unlock-session usually working correctly?
<kensington>as my user access is denied, and as root it doesn't appear to do anything
<ng0>psycmp3 might be moved to mpg123 in the future, so that's compatible by license I guess.
<ng0>i'll still delete it for now, just saying.
<thomasd>Hi Guix
<thomasd>Any recommendations for a new Schemer, besides the Guile manual?
<efraim>I started with mostly reading the source code for guix
<efraim>but there's also teach yourself scheme in fixnum days
<ecraven>thomasd: #scheme has many suggestions in the /topic
<adfeno>thomasd: I'm not a programmer, but I'm enjoying reading the Pamphlet Against R.
<adfeno>I'll give a link...
<efraim>and I had another one but I closed that tab at some point
<efraim>the little schemer is nice too
<ecraven>adfeno: that's a good read
<efraim>s/the little schemer/The Little Schemer/
<ecraven>thomasd: if you can, use Emacs with geiser :)
<adfeno>↑ (It's under CC BY-SA 4.0).
<adfeno>s/'s/'s licensed/
<adfeno>You're welcome. :D
<thomasd>I have a feeling it should be possible to "step through" a package build interactively using geiser, would help debugging.
<adfeno>We also have to thank the copyright holders and authors of the book for choosing a license that allows at least sharing the non-functional data (in this case, the book).
<adfeno>I have seen lots of non-functional data that is great, but doesn't allow at least sharing copies of the original work unlimitedly.
<rekado_>what? against R?
<adfeno>ACTION is reminded of a book about GIMP, and another called "The Book of Linux" in Brazilian Portuguese, which document functional data (and so are also functional), but don't give the 4 essential freedoms to society.
<adfeno>rekado_: Yes... I'm also worried by that...
<adfeno>GNU R is great too! :D
<rekado_>R's power does not come from the language itself but from CRAN and Bioconductor.
<rekado_>there's nothing quite like it.
<rekado_>even Python folks have to admit that despite the existence of numpy, pandas, and other packages R is the go-to tool for stats and bioinformatics.
<adfeno>Note: I'm in no way familiar with GNU R. But I tend to support GNU projects and packages strongly.
<rekado_>I wish we had more bindings to scientific libraries for Guile.
<jlicht>I am trying to get the (json) module of guile-json loaded on the build stratum of code, using a construction similar to how Gnutls is loaded in (guix download)
<jlicht>although for some reason, trying to actually load the module at run time seems to fail because it is loaded before I can set the %load-path
<civodul>jlicht: could you paste the code you have?
<jlicht>civodul: The relevant pieces:
<jlicht>because I wasn't using gexps (yet), I very much assumed I also needed to add guile-json to the inputs
<civodul>looks ok to me
<civodul>except you should use (assoc-ref %build-inputs "guile-json") instead of (derivation->output-path (package-derivation store (json-package)))
<civodul>jlicht: could you post a log of what fails?
<jlicht>civodul: I will, after trying your suggestion. Or is this just a cosmetic/stylistic improvement?
<civodul>it's just an improvement
<civodul>i don't think it'll fix anything here
<adfeno>Ooops... Closed the tab by mistake.
<ecraven>does guix support containers with lxc? or only kvm?
<davexunit>ecraven: could you elaborate a bit more? lxc and kvm are both regular GNU/Linux software that could be packaged with guix.
<davexunit>guix has direct integration with qemu+kvm using 'guix system vm'
<davexunit>and guix can also make containers with 'guix system container' or 'guix environment --container' but it doesn't use lxc
<jlicht>civodul: (annotations) for output /w attempted loading of json module
<jlicht>and without
<ecraven>davexunit: I'd like to run multiple instances of the same php application on one server, but segregated from each other
<ecraven>from my limited knowledge, that should work nicely with lxc (which only runs a dedicated nginx and php-fpm in each container). with kvm, I'd need to run n entire systems (including n kernels)
<civodul>jlicht: can you check whether /gnu/store/5jczyn0gm9djasvmkbrip7a8irs774qf-node-object-ready-1.0.1-guile-builder sets %load-path appropriately?
***joachifm_ is now known as joachifm
<civodul>ecraven: currently the code for these Shepherd services simply forks+execs; you could change that to use 'call-with-container'
<civodul>the nice thing is that we don't need special support from the Shepherd or anything, we can just use our container library
<jlicht>civodul: ah! It does, but loading the module happens 'before' this. Shouldn't the eval-when take care of these timing issues?
<ecraven>I'll have to look into lxc, I don't know how to do these things manually so far...
<civodul>jlicht: aaah, bah, i see
<jlicht>civodul: it seems my load-path shenanigans are put in a second (begin ...) form, whereas the use-modules takes place in a first (begin ...) form
<civodul>this is terrible, but can you use (module-use-interfaces! (current-module) (resolve-interface '(json))) instead of that (use-modules ...) form?
<efraim>hey so remember all that work I did packaging qt-5.6.0 and then 5.6.1 came out?
<efraim>5.7.0 just came out
<civodul>jlicht: that's because use-modules forms are treated specially in build-expression->derivation :-/
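[editor's note] A minimal builder-fragment sketch of the workaround civodul describes (the "guile-json" input label and the site directory are assumptions):

```scheme
;; Sketch: extend %load-path at run time, then pull in (json) via
;; 'module-use-interfaces!' so the import is not hoisted the way a
;; 'use-modules' form would be by build-expression->derivation.
(let ((json (assoc-ref %build-inputs "guile-json")))  ; assumed label
  (set! %load-path
        (cons (string-append json "/share/guile/site/2.0")
              %load-path))
  (module-use-interfaces! (current-module)
                          (list (resolve-interface '(json)))))
```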
<civodul>efraim: hey :-)
<efraim>civodul: hi!
<jlicht>efraim: is this a good things, or are you more like fml?
<efraim>it's not a bad thing, but i/we should package the rest of qt so we don't have half on qt-5.5.1 and the rest on 5.7; at some point it's probably going to stop working nicely together
<efraim>some of the things want webkitwidgets
<efraim>but if we start at the leaves and work backward for converting packages it works better than starting at the base and working out, fewer breakages
<jlicht>civodul: just spitballing here, but could I perhaps use a lower level loading mechanism of guile as a workaround?
<civodul>jlicht: yes, that's what module-use-interfaces! above is :-)
<jlicht>civodul: derp... Seemed to have missed that. Time to increase the font size on my screen XD. It seems to work though :D
<rekado_>I just updated to master and see this error:
<rekado_>ERROR: no binding `fdatasync' in module (guix build syscalls)
<rekado_>ACTION runs make clean-go
<civodul>maybe you should reboot as well ;-)
<civodul>fdatasync was added recently, not sure why we'd get this error
<rekado_>I should be more careful not to switch branches while running "make"...
<jlicht>civodul: almost there. Right now, I can (module-use-interfaces! ...) in the builder proc, but I actually want to make use of the (json) module in (guix build node-build-system)
<rekado_>it's tragic to see how traditional package management always seems to resurrect this unhealthy sysadmin<->user dynamic
<rekado_>we have a shared server with central installation of R and bioconductor; central upgrades are always unhappy
<civodul>jlicht: that should be fine, no?
<rekado_>I would love to find a workaround that allows people to mix system R with Guix R.
<civodul>rekado_: the problem being that people need R packages not available in Guix, right?
<optikalmouse>I'm going to take a stab at converting an NPM package to guix.
<optikalmouse>is there a way for me to install node at an older version? I see 6.0.0, do I need to create a new package at 4.x or whatever version or can I just do guix install node@4.x ?
<rekado_>civodul: yes. There are also packages that are in development and it's not feasible to package them.
<rekado_>R people use the devtools package for that.
<rekado_>to quickly build a package from a remote repository
<rekado_>I guess it would be enough to create a profile with the needed compilers and libs
<rekado_>then they could use their usual workflow to build and test packages.
<rekado_>right now they need to use the system R whenever they want to use install.packages(...) or devtools.
<jlicht>optikalmouse: I am working on an npm importer (and node build system, and everything that comes with it).
<rekado_>and this keeps them from switching to Guix.
<jlicht>if you do want to make use of an older node, you should package it as a separate package indeed, although there are some other packages doing something similar (several gcc related packages)
<optikalmouse>jlicht: I was just thinking I would convert a small npm package and then write a package.json2guix-package.scm importer
<optikalmouse>so the naming convention...node-4.x.scm would be fine?
<jlicht>optikalmouse: literally what I am trying to do, right now ;-)
<civodul>rekado_: you could have a repo with recipes for private packages, no? (in theory at least)
<optikalmouse>jlicht: how far along are you? :O
<rekado_>civodul: we have this already, but that's not enough.
<rekado_>our group develops R packages and they are not ready to use a Guix-based workflow.
<jlicht>optikalmouse: I can push the stuff that was working to a public repo, give me a minute.
<rekado_>(i.e. "guix environment" etc)
<rekado_>they also want to be able to quickly build stuff they find online. In R it's just a matter of doing devtools("") and the package is built.
<civodul>oh, i see
<rekado_>for many of these packages it's not worth taking the time to build an expression
<rekado_>(and I would be the bottleneck)
<civodul>interesting security questions :-)
<davexunit>optikalmouse: yes, you need to create a new package variant for the node version you want.
<civodul>rekado_: we could have a tool that generates a package object and builds it right away
<davexunit>otherwise guix couldn't possibly know about it
<civodul>rekado_: i don't know enough about R, but that should be possible
<rekado_>my boss asked me if we have a wrapper around R's install.packages(...) that would automatically Guixify things in the background.
<rekado_>for now my goals are simpler.
<rekado_>I just need to provide an environment in which Guix R's install.packages(...) works.
<rekado_>i.e. I need a working GCC that links with the libc in the store to replace the system compiler toolchain.
<rekado_>not just GCC but also gfortran.
<rekado_>I think our compilers are still somewhat broken.
<optikalmouse>davexunit: can I hot-load build systems into guix or do I need to compile guix with all the build systems that I want to use?
<rekado_>people would obviously lose the advantages of Guix when it comes to R packages, but at least they would have the freedom to use R as they did before.
<rekado_>(right now they simply cannot load packages that are built with the system compiler)
<davexunit>optikalmouse: I don't know what "hot-load" means
<davexunit>you can modify the guile load path as you'd like to run whatever code you want
<davexunit>if that's what you mean
<civodul>rekado_: ok, i see
<rekado_>davexunit: I have a question about "guix environment --container". I want to load an environment from a file with "-l guix.scm", but I also need to add coreutils to the environment. Should "guix environment --container --ad-hoc coreutils -l guix.scm" work?
<optikalmouse>davexunit: I mean can I load build systems after guix has been built, hot-reload is the terminology for other lesser programming languages ;)
<davexunit>rekado_: that would be put the package in guix.scm in the environment, if you want the deps then you need to change it a bit
<davexunit>guix environment --container -l guix.scm --ad-hoc coreutils
<davexunit>--ad-hoc is positional
<rekado_>davexunit: ah, okay
<davexunit>so that you can compose things
<jlicht>civodul: How would one 'inject' the (json) module into (guix build node-build-system) with `module-use-interfaces!'?
<civodul>jlicht: you don't inject it, it's enough if the 'builder' script puts it in the load path before (guix build node-build-system) is loaded
<civodul>so to load (guix build node-build-system), the builder should use module-use-interfaces! instead of use-modules, for the reasons discussed above
<civodul>well, unless i'm missing some other issue :-)
<jlicht>civodul: >.< I thought you meant loading the json module like that. LMGBTY
<ng0>make: Nothing to be done for 'test'. <- where does perl-build-system look for tests?
<ng0>the source was not so helpful
<ng0>folder t/ ?
<bavier>ng0: typically perl-build-system just runs "make test"
<ng0>ah.. so since I have none, i'll replace that phase
<civodul>jlicht: what?
<optikalmouse>jlicht: welp, the node.js/js ecosystem is messed up. for a simple terminal color display lib it has a dependency chain 3 layers deep, this is going to be fun :'( :'(
<davexunit>optikalmouse: see by our pal paroneayea
<davexunit>jlicht certainly didn't pick an easy project ;)
<optikalmouse>btw I love the reader view in firefox, proper page widths ;)
<jlicht>I'd like to amend my GSoC goals to include that I'd at least like to keep a sliver of sanity after this Summer :-)
<ng0>sanity is overrated :)
<jlicht>civodul: well, I'm waiting for some stuff to compile on my stuffy laptop, but I meant I misunderstood your earlier (module-use-interfaces! ..) advice to refer to loading the json module
<jlicht>ng0: definitely for JavaScript devs ;-)
<optikalmouse>jlicht: for building a package, I would just need to set native-inputs or inputs to 'node yeah? and then propagated-inputs would be the other npm packages needed?
<davexunit>optikalmouse: roughly yes, assuming there was a build system for Node, which is what jlicht is building.
<optikalmouse>is there a way to have optional deps or minimum deps in the inputs? like i have node 4.x in one project and 6.0 in another, do I need to write separate packages that build against each target or can I say (inputs `(("node" ,(or node-6.0.0 node-4.x)))))))
<jlicht>optikalmouse: this is an example (imported) npm package:
<davexunit>optikalmouse: you would need to make different package objects
<jlicht>optikalmouse: guix by definition has only specified dependencies (e.g., an exact version)
<jlicht>^what davexunit said
<davexunit>fortunately, it's very easy to make package objects. you could even write a function to generate a package using the version of node that you want.
<davexunit>we do this in guix itself
<davexunit>for things like generating a gcc toolchain with different versions of gcc
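[editor's note] As a sketch of such a generator (the 'node' variable, URL pattern, and hash argument are assumptions for illustration, not the actual Guix definition):

```scheme
;; Sketch: build a package object for a specific Node version by
;; inheriting from an existing 'node' package.
(define (node-at-version version hash)
  (package
    (inherit node)
    (version version)
    (source (origin
              (inherit (package-source node))
              (uri (string-append ""
                                  version "/node-v" version ".tar.xz"))
              (sha256 (base32 hash))))))

;; (define-public node-4.x (node-at-version "4.4.5" "…base32 hash…"))
```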
<jlicht>just don't forget to pass your packaged node to the build system, because by default it just uses the `node' package definition
<davexunit>jlicht: oh is optikalmouse using your code?
<davexunit>I would imagine there is a #:node argument to specify the node to use?
<jlicht>davexunit: exactly ;-)
<davexunit>similar to the other language build systems
<optikalmouse>davexunit: so in my package.scm I can just define multiple packages? (define-package ...) (define-package ...)?
<optikalmouse>I've been crippled by bad habits in bad languages, I see that now ;p
<davexunit>optikalmouse: there's no 'define-package' form, but sure you can define as many packages as you'd like. they are just Scheme objects
<davexunit>see any of the modules in gnu/packages/
<optikalmouse>oops i meant define-public
<davexunit>we define tons of variables with packages
<civodul>Snap again:
<davexunit>I got 40 upvotes on this discussion about "maintainers matter"
<davexunit>rekado joined in, too :)
<davexunit>civodul: oh btw, I made this GuixSD screenshot recently. I think it's quite nice.
<ng0>can i skip the included tests? they don't do very much, and I did not notice they existed until I started fixing them. they make the code twice as long.
<pksadiq>does guixSD have /etc/os-release ?
<davexunit>pksadiq: I don't think so.
<davexunit>what software reads this file?
<pksadiq>I think guixSD requires this, as its common in any recent distribution.
<efraim>i like that picture
<davexunit>pksadiq: why? I'm asking because I don't know what the significance is.
<pksadiq>davexunit: shell scripts (and other scripts) can rely on this file to extract details about the OS
<davexunit>pksadiq: that sounds extremely brittle.
<ng0>it's a test after all, not a check.. but needs to run before the installation phase, so I need to define what's already in the install phase once more.
<davexunit>ng0: test and check are synonyms here
<pksadiq> It's pretty standard :) (though it's brought up by systemd people)
<ng0>ACTION expands the already very long perl-net-psyc package
<optikalmouse>jlicht: davexunit:
<davexunit>pksadiq: I mean, maybe we could add such a thing, but in general we're not crazy about this kind of stuff.
<ng0>lots of fun. I should've poked lynX to write a
<optikalmouse>haven't tested it yet, but I figure that's what it should kinda sorta look like?
<davexunit>/etc/os-release smells a lot like /usr/bin/env
<davexunit>I don't know what all the implications of adding this file are, so I can't say "yes we should add it" or "no we shouldn't add it"
<davexunit>one could say that systemd is standard now, but GuixSD doesn't use it.
<pksadiq> /etc/os-release is now used by Debian and Derivatives, Fedora and derivatives, and probably several others.
<pksadiq>I have seen /etc/os-release on systems without systemd too.
<davexunit>I'm not necessarily opposed to adding it. I just think we should consider what the pros/cons are.
<davexunit>from reading this man pag it seems harmless so far
<jlicht>optikalmouse: That could work, indeed.
<pksadiq>davexunit: A person writing scripts can rely on that file to get information about the OS.
<efraim>i parse /etc/os-release for my conky script
<optikalmouse>jlicht: node-build-system will pick up whichever version of node is installed yeah? will it be able to use multiple versions of node similar to nvm?
<ng0>I'll rather expand this old version and contribute it upstream:
<davexunit>pksadiq: I think someone should take a stab are adding this to our operating-system build procedures.
<ng0>keep the perl.scm shorter
<jlicht>optikalmouse: almost; node-build-system by default resolves to the `node' package that is named `node' in guix.
<jlicht>if you have your own node (e.g. `mynode'), you can pass it to the build system via the #:node keyword argument
<davexunit>I don't really see any negative side-effects if we added it.
<pksadiq>davexunit: see a post on how can we get the information about the OS: Won't /etc/os-release help better?
<davexunit>pksadiq: sure, it seems like a helpful thing.
<davexunit>if for nothing else than putting the guixsd logo in efraim's conky script ;)
<davexunit>pksadiq: thanks for the suggestion!
<davexunit>now we just need someone to send us a patch!
<efraim>right now I parse the debian and the sid out of /etc/os-release, and I have the debian swirl from some random font
<efraim>but yeah, when I do finally switch my laptop over i'll need to embed the guix logo somehow
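[editor's note] For reference, os-release is a plain line-oriented KEY=VALUE file, so a script can parse it with something like this sketch ('read-os-release' is a made-up name, and GuixSD does not ship the file at this point):

```scheme
;; Sketch: parse an os-release-style KEY=VALUE file into an alist.
(use-modules (ice-9 rdelim))

(define (read-os-release port)
  (let loop ((alist '()))
    (let ((line (read-line port)))
      (if (eof-object? line)
          (reverse alist)
          (let ((idx (string-index line #\=)))
            (loop (if idx
                      (cons (cons (substring line 0 idx)
                                  (substring line (+ idx 1)))
                            alist)
                      alist)))))))

;; (assoc-ref (call-with-input-string "ID=guixsd\nNAME=GuixSD\n"
;;                                    read-os-release)
;;            "ID")  ; would yield "guixsd"
```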
<davexunit>optikalmouse: cool node package! that native-search-path shouldn't be there, though.
<davexunit>that belongs on the node package itself
<jlicht>davexunit: that would be a mistake in the importer then ;-)
<davexunit>jlicht: :)
<davexunit>it would also be good if the module name didn't need to be specified
<jlicht>davexunit: that is why I was fighting build-expression->derivation today to load the (json) module
<jlicht>then the builder can extract the module name (and other info) from package.json files
<davexunit>ah I see
<davexunit>I'm a little surprised though because back in the day I just used 'npm' directly
<davexunit>and that worked out well.
<jlicht>davexunit: Let module A have a dependency on module B.
<jlicht>when installing A, guix first installs B (and puts it in NODE_PATH).
<jlicht>but npm still tries to download B as specified in A's package.json when calling `npm install <pkg-dir>'
<jlicht>civodul: somehow, it seems that node-build-system is still being included in a different way :/
<davexunit>I recall being able to tell npm not to talk to the network
<jlicht>davexunit: Without diving into bundled dependencies and/or offline npm registries?
<davexunit>I don't remember exactly, but I had packaged a few things using npm to install locally and it worked out okay.
<optikalmouse>^ I would prefer zero reliance on npm other than grabbing existing packages to convert them to guix. there's no need for it, it's not like pip and python or perl and cpan where you absolutely need them to get anything done. npm is a binary I wish I could remove from all my machines -_-
<jlicht>afaik, npm is just a node script. A shoddy node script, but a readable node script nonetheless :-)
<rekado_>just confirmed that our R defaults to plotting with Xlib because the runtime check for Cairo fails.
<davexunit>optikalmouse: npm comes with node
<rekado_>it's weird because Cairo is there.
<davexunit>and it is potentially useful to us because it knows how to do several tasks
<rekado_>it actually checks whether Pango is there.
<rekado_>so I changed the recipe to use pango instead of cairo and it works fine.
<jlicht>davexunit: I assume you 'installed' node packages by making use of NODE_PATH, no?
<davexunit>I think so. I'd have to look at the code I wrote that I sent you a while ago
<optikalmouse>npm is this: <-- whole whack of stuff not really needed when you've got a real package manager to use?
<davexunit>jlicht: looks like I actually removed the usage of 'npm' for installation purposes
<davexunit>but I did use 'npm test'
<davexunit>optikalmouse: it knows how to work with package.json files
<davexunit>so it's potentially useful
<davexunit>the Ruby build system uses 'gem', Ruby's npm equivalent
<jlicht>davexunit: that is my current approach as well (use it for tests, not for installation)
<jlicht>it also seems your packages did not have any dependencies (on other node packages at least)
<jlicht>which is when the fun starts
<optikalmouse>davexunit: that's true but rubygems is built better than npm ;) there's no need to tie node packages for guix to a bad default
<paroneayea>hey yo
<jlicht>hi paroneayea
<paroneayea>hey jlicht :)
<paroneayea>jlicht: how goes the hacks?
<jlicht>does it make sense to move a build-system to use gexp's? Or am I spouting nonsense?
<jlicht>instead of a final call to build-expression->derivation, one to gexp->derivation
<paroneayea>jlicht: well it *sounds* like a nice idea, without testing it ;)
<paroneayea>jlicht: if it works it's not nonsense? :)
<paroneayea>try it!
<paroneayea>I'm getting a lot of errors like
<paroneayea>guix substitute: error: corrupt input while restoring '/gnu/store/vl1hjsxfkw54sirq1bslvvj2pzvzx5vs-python-babel-2.3.2/lib/python3.4/site-packages/Babel-2.3.2-py3.4.egg/babel/locale-data/kn.dat' from #{read pipe}#
<ng0>do we support Build.PL and Makefile.PL? I'm thinking of whether to learn the old (Makefile.PL) or the new (Build.PL)
<paroneayea>guix environment: error: corrupt input while restoring archive from #<closed: file 0>
<paroneayea>I think it might have to do with connections to substitute servers
<jlicht>ng0: I thought both when I last looked at it
<ng0>i think only Makefile.PL looking at the source
<ng0>guix/build-system/perl.scm only mentions Makefile.PL
<ng0>then again, civodul might know better.
<jlicht>Docs: 'preference is given to the former if both Build.PL and Makefile.PL exist in the package distribution.'
<bavier>ng0: it supports both
<bavier>I added the support ;)
<ng0>now only to learn how to do this so I can make this really unimportant package much smaller
<ng0>>.< yay
<jlicht>davexunit: it truly seems that there is no sane way of telling npm that it should just install a package, not its dependencies
<jlicht>so back to getting package.json parsing in guix! :-)
<davexunit>jlicht: heh, sounds good!
<bavier>davexunit: I just read a very nice article linked on HN, then thought "interesting, I wonder what others think", clicked on comments, and yours was the top :)
<davexunit>bavier: ;)
<davexunit>I'm an HN ninja
<davexunit>I just gave an impromptu talk about Guix at work
<davexunit>went well!
<bavier>it's nice to have someone posting guix links on HN
<efraim>davexunit: awesome
<ng0>i dropped the idea of writing the Build.PL .. i just need the test/ to run. so, is my assumption correct that I can install a directory and then delete it again in a phase after install?
<ng0>that would spare some lines of code
<ng0>yep. nvm the question, grep first, ask later.
<ng0>I think I want to disable tests. it's not a super huge program, but getting the tests to run the way I want does not work, as you can't execute files in the store during the phase.
<ng0>basically what I do in addition to wrapping the tests/Makefile is chmod the Makefile to #o777, do a system* (string-append test "/Makefile"), delete-file-recursively test, where only the chmod and the delete succeed.
<ng0>i don't know yet how else to execute the tests
<ng0>do a direct perl execution of them?
<optikalmouse>jlicht: I was just thinking, if I have the .tar.gz urls for npm packages and all the packages are in the right path, would I need the node-build-system to build the packages at all?
<ng0>make the Makefile obsolete with this?
<bavier>ng0: just to clarify, you're doing (system* "make" "-C" "test")?
<ng0>no.. wait I can paste it, need to do some tmux stuff
<ng0>or partly paste it.
<optikalmouse>jlicht: since the "build" is just running node index.js or node test.js in some cases?
<ng0> just the partial last phase - it has too many closing parens because it closes all of (arguments)
<ng0>makefile does stuff like: DINGS=perl -I../lib/perl5 -Wall for each file
<ng0>eh... DINGS is the variable used in place.. ${DINGS} file
<ng0>i will add what you said bavier, see what changes
<ng0>and remove the 777
<ng0>and just now lynX added a makefile back x.x
<myglc2>hello... does anyone out there have experience installing Guix as an unprivileged user?
<davexunit>that's not a well-supported use-case at this time.
<myglc2>davexunit: Thanks. That's what I expected, but I wanted to be sure
<ng0>adding to the libgcrypt-1.7.1 update: gnupg 2.1.13 just released
<ng0>I'll testbuild both combined asap.
<davexunit>myglc2: I think we could support it well in the future by using unprivileged user namespaces on Linux.
<davexunit>but some engineering effort is needed.
<efraim>gnupg 2.1.13 can be pushed to master if it works out
<ng0>Makefile:3: *** missing separator. Stop. i think i just need to patch that now.
<wingo>i feel like guix could be snappier
<wingo>if you look at an `strace true'
<wingo>you see things like
<wingo>open("/gnu/store/8m00x5x8ykmar27s9248cmhnkdb2n54a-glibc-2.22/lib/tls/x86_64/", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
<wingo>stat("/gnu/store/8m00x5x8ykmar27s9248cmhnkdb2n54a-glibc-2.22/lib/tls/x86_64", 0x7ffd666fe260) = -1 ENOENT (No such file or directory)
<wingo>open("/gnu/store/8m00x5x8ykmar27s9248cmhnkdb2n54a-glibc-2.22/lib/tls/", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
<wingo>stat("/gnu/store/8m00x5x8ykmar27s9248cmhnkdb2n54a-glibc-2.22/lib/tls", 0x7ffd666fe260) = -1 ENOENT (No such file or directory)
<wingo>open("/gnu/store/8m00x5x8ykmar27s9248cmhnkdb2n54a-glibc-2.22/lib/x86_64/", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
<wingo>stat("/gnu/store/8m00x5x8ykmar27s9248cmhnkdb2n54a-glibc-2.22/lib/x86_64", 0x7ffd666fe260) = -1 ENOENT (No such file or directory)
<wingo>open("/gnu/store/8m00x5x8ykmar27s9248cmhnkdb2n54a-glibc-2.22/lib/", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
<wingo>stat("/gnu/store/8m00x5x8ykmar27s9248cmhnkdb2n54a-glibc-2.22/lib", {st_mode=S_IFDIR|0555, st_size=4096, ...}) = 0
<wingo>for every library
<wingo>but the whole point of guix is that you know the precise libraries that a binary is linked against
<wingo>so much of that run-time searching is for naught
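The probes in wingo's trace are ld.so checking hardware-capability subdirectories under each RUNPATH entry before falling back to the plain directory. The search order for a single rpath entry can be reproduced with a small loop (the path is taken verbatim from the trace above; the subdirectory order matches the open()/stat() sequence shown):

```shell
# For every rpath entry and every library, ld.so probes these
# subdirectories in order -- hence the flood of ENOENT calls above.
rpath=/gnu/store/8m00x5x8ykmar27s9248cmhnkdb2n54a-glibc-2.22/lib
for sub in tls/x86_64 tls x86_64; do
  echo "$rpath/$sub"
done
echo "$rpath"
```

Multiply those four probes by the number of rpath entries and linked libraries and the startup overhead wingo observes follows, even though, as he says, Guix already knows exactly which store paths a binary needs.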
<efraim>i'm about to go to bed, but would we lose grafting if we switched from dynamically linking against specific libraries to static linking?
<jmd>wingo: I suppose that is doing the searching.
<wingo>efraim: afaiu grafting works by patching the rpaths in the dynamically linked objects
<wingo>so no problem
<wingo>good evening civodul :)
<efraim>guix environment --pure --ad-hoc coreutils strace -- strace true "looked longer" than `strace true' on debian
<civodul>hey, wingo!
<myglc2>davexunit: Thanks. My use case: Home servers with Guix/Debian 8.3. Linux work servers w/o sudo. Wondering when/if it might become feasible to build my SW tools at home and then run on the work servers.
<davexunit>myglc2: can't give you a timeline, it's basically whenever motivated people put in the work to make it happen.
<jlicht>sneek: later tell optikalmouse: You are correct in assuming that an npm package that is simply copied over could also just be installed from tar.gz. The general problem is that these sources often include and make use of generated and/or minified files, and as such are not the preferred source
<jlicht>sneek: botsnack
<myglc2>davexunit: Thanks. Guix offers a strange mix of freedom and control: It lets an unprivileged user do whatever they want, but only if they have sudo, or know a sysadmin that is adventuresome enough to install Guix/GuixSD for them.
<davexunit>myglc2: yeah, but I think we'll eventually remove even that requirement.
<bavier>myglc2: a short while ago I explored setting up `guix publish` on a computer I have root on that would serve substitutes to my unprivileged computer
<bavier>the daemon was set up with a store location in "/ptmp" that I have write access to on the unprivileged computer
<ifur>hi, is there a convenient way to download a directory with curl containing tar.gz or other files and checksum all of them and extract into a single folder?
<davexunit>having an unprivileged guix daemon with substitutes from hydra would require using containers to mount /gnu/store
<civodul>bavier: interesting hack
<ifur>my guile/scheme skill might be a bit limited for that one
<davexunit>so you'd always need to be in the container to do guix stuff
<bavier>anyhow, I didn't get very far because of difficulties I ran into in running multiple instances of guix-daemon on a single machine
<davexunit>but I think that would be OK
<ifur>it is however important that the files can be verified
<davexunit>because you could make a container that shared the host file system, but just bind mounted the unprivileged store to /gnu/store within the container
<civodul>ifur: download a directory?
<ng0_>gnupg-2.1.13 against libgcrypt-1.7.1 succeeded. patch incoming soon.
<ifur>civodul: as in:
<davexunit>time to go AFK for the long weekend. happy hacking all!
<ifur>civodul: file by file it's a bit much, a brute force approach would be really nice
<bavier>civodul: yeah, I'd like to explore the hack a bit more
<ifur>civodul: so these datasets are by and large associated with a specific version of the software library
<civodul>ifur: what about (list (origin ...) ...), would it be ok?
<ifur>the way to deal with these data sets isn't consistent, e.g. there used to be a large tarball for all of them, but they have moved towards a file-by-file approach using a script... which is obviously not a good solution
<ifur>civodul: notice how many there are, and that it's by version?
<ifur>think one needs to follow the versions in order to be sure to get the correct ones
<ifur>and its critically important because these datasets are used for generating events in high energy physics, as in... the pdfsets are based on theoretical and experimental data
<civodul>yes but this could be generated, mostly
<ifur>is there a standard way for this?
<ifur>because ideally an end user should be able to easily update the version without going through every single data set
<ifur>since the total size isn't massive, doing all of them in one go is likely the best approach
<civodul>you could have a script, Scheme or otherwise, that generates the list of origins
<civodul>it wouldn't help that much to do it in one go
<civodul>you still need to know the hashes of each individual file
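One way to gather those per-file hashes mechanically, in the spirit of civodul's generate-the-origins suggestion, is a small loop over the downloaded files. Guix actually wants nix-base32 sha256 (what `guix hash FILE` prints), but plain sha256sum shows the shape; the directory and file names below are invented stand-ins for the real pdfsets:

```shell
# Make a couple of stand-in data files (the real ones would be
# downloaded pdfset tarballs).
mkdir -p pdfsets
printf 'dataset A' > pdfsets/setA.tar.gz
printf 'dataset B' > pdfsets/setB.tar.gz

# Emit one "hash  filename" line per file; a follow-up script could
# rewrite each line into a Guix (origin ...) form.
for f in pdfsets/*.tar.gz; do
  sha256sum "$f"
done
```

This is the tedious-but-automatable part: the hashes still have to be recomputed whenever upstream updates a file, which is exactly ifur's maintenance concern.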
<ifur>i was thinking more along the lines of a user updating the main library versions, and then when trying to install get an error with a single incorrect hash that could be updated
<myglc2>bavier: Thanks. This is along the lines I was imagining.
<ifur>the primary users of this piece of software are theoretical physicists, so albeit clever, they may not be inclined to spend a lot of time learning scripting in order to update this absurd amount of files
<ng0>patch is out.
<bavier>myglc2: np. I'd be interested to know if the idea works for you
<ifur>with my current level of guile skill i may just generate it by some hackish shell script, but a more general solution would be the ideal really
<civodul>ifur: i think users themselves would use the recipe that lists the origins for all these files, they wouldn't write it
<ng0>I'm having problems with this being ignored: .. I have to check yet on a not so customized setup .. but locale keeps leaking in from gentoo. I will file a bug to discuss this later.
<ifur>civodul: they may or may not be updated for every version, so quickly becomes tedious
<civodul>ng0: bugs are not for discussions :-) you can try tho
<ng0>ah, yeah. help for starters, then if it's an actual bug. do the bug.
<civodul>ifur: then you need an auto-updater; it's comparable to "the files on", in a way
<ifur>a list of files from one url tied to a single hash would be perfect.... and a way to extract them all to a single folder of course
<civodul>how would you compute the hash? :-)
<civodul>what you suggest may be doable, but with a bit of work
<ifur>civodul: pipe it through gzip and append each .tar to each other and then compute it?
<civodul>yeah, sure
<civodul>we could do something, somehow :-)
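ifur's concatenate-and-hash idea can be sketched directly: hash the files in a fixed order so the combined checksum is reproducible. The file names are invented, and this is only an illustration of the idea, not something the Guix download machinery supports out of the box:

```shell
# Stand-in data files for the collection.
mkdir -p datasets
printf 'one' > datasets/a.tar.gz
printf 'two' > datasets/b.tar.gz

# Concatenate in a fixed, explicit order so the combined hash is
# deterministic, then take a single sha256 over the whole stream.
combined=$(cat datasets/a.tar.gz datasets/b.tar.gz | sha256sum | cut -d' ' -f1)
echo "$combined"
```

The catch, as civodul's question implies, is that one hash over the set gives no way to tell *which* file changed when verification fails, which is why a single upstream tarball is the simpler fix.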
<ifur>problem is, i don't know how to code it :S
<civodul>it might be easier to get the admins of that server to pack everything in one file though :-)
<ifur>but would make sense to have that capability in the download process i guess
<ifur>hmm, i'll ask :D
<ng0>can I add perl-net-psyc without tests for now? I will comment the test section, I expect lynX (dev of perlpsyc) to add the fixes in the next days.
<ng0>so when they are added I can uncomment and change the commit.
<civodul>in general there must be a justification (and a good one ;-)) as a comment next to #:tests? #f
<ng0>justification is I don't want to add another 20 loc to the already very long package
<ng0>but I can wait another n days and add just the fixed version
<ifur>civodul: i've requested a tarball :P