IRC channel logs

2016-07-17.log


<ng0>I think the solution just came to me, halfway to sleep. sorry for the many monologues today, I had a bit too much coffee and too much multitasking.
<Gamayun>Which solution did you come up with, ng0?
<ng0>I don't know yet if it will solve it. in theory it will.
<ng0>patching all the shebangs of the tests, pointing to the directory where the gpgscm binary is built
<catonano>ng0: solving what
<catonano>?
<ng0>like #!/path/to/bash /path/to/out/gpgscm
<catonano>ah gnupg
<Gamayun>Mhm, yes...
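
A rough sketch (in Guile Scheme, as a gnu-build-system phase) of the shebang patching ng0 describes. The phase name, the file pattern, and the assumption that the openpgp test scripts carry a gpgscm shebang at all come from this conversation, not from the actual gnupg recipe, so treat it as illustrative only:

    (add-before 'check 'patch-test-shebangs
      (lambda _
        ;; Point the test scripts at the gpgscm built in the source tree.
        ;; substitute* and find-files come from (guix build utils), which
        ;; the gnu-build-system already provides on the build side.
        (let ((gpgscm (string-append (getcwd) "/tests/gpgscm/gpgscm")))
          (for-each (lambda (file)
                      (substitute* file
                        (("^#!.*gpgscm.*") (string-append "#!" gpgscm))))
                    (find-files "tests/openpgp" "\\.scm$"))
          #t)))
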
<civodul>rekado: eigensoft refers to the "lapacke" output of lapack, which doesn't exist
<civodul>could someone look into it?
<civodul>ACTION -> zZz
<ng0>catonano: yep.
<ng0>good night
<lfam>jmd`: Did you figure out your build failure?
<lfam>I'm currently running `make check` on a fresh checkout, to investigate. It's currently in the test suite.
<lfam>Ah, jmd` quit. Oh well
<lfam>sneek: later tell jmd`: I just built a fresh checkout of Guix and passed the test suite. Did you figure out your build failure?
<sneek>Will do.
<lfam>sneek: no botsnack for you
<sneek>:)
<lfam>Yes, it was delicious. I'm smiling too
<taylan>our Guile isn't updated to 2.0.12 yet? unpossible! gotta sleep now, guess I can do it tomorrow if no one beats me to it.
<lfam>taylan: It should probably go on an *-updates branch. Maybe core-updates-next? core-updates *should* be done soon, barring any more complications
<Jookia>nee`: install datefudge + backport name-constraints patch to use it to fix gnutls issue
<lfam>Jookia: Can you share a link to the patch?
<Jookia>lfam: no
<lfam>Jookia: Did I misunderstand your advice to nee`? Is there not a patch for the gnutls build failure?
<Jookia>There is, but I don't have a link to it
<lfam>Okay, I'll look
<Jookia>It's actually part of a patch
<Jookia> http://paste.rel4tion.org/366
<lfam>Thanks!
<lfam>Looks like this commit: https://gitlab.com/gnutls/gnutls/commit/cc22a052f40ba800acde7d81fe0ab91b56e66921
<Jookia>Is crawl packaged for Guix yet?
<Jookia>lfam: Backporting requires removing check_for_datefudge
<lfam>Doesn't look like we have crawl yet
<Jookia>oh ok
<lfam>I haven't studied the gnutls test suite, but it's weird that only one test failed due to missing datefudge. There are ~100 uses of datefudge
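
For illustration, the kind of phase Jookia's workaround amounts to, assuming the backported gnutls patch adds a shell test that calls check_for_datefudge; the file name below is a placeholder, not an actual gnutls test:

    (add-after 'unpack 'remove-datefudge-check
      (lambda _
        ;; Placeholder path; the real test added by the backported patch
        ;; would go here.
        (substitute* "tests/some-backported-test.sh"
          (("check_for_datefudge") "true"))
        #t))
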
<MaliRemorker>Can anyone tell me how to get fluxbox up and running on GuixSD? I've seen the instructions on how to add ratpoison and friends to your system config file, but the exact analogue doesn't work for fluxbox. That is, slim doesn't seem to pick it up after installing.
<efraim>I haven't tried loading fluxbox before
<MaliRemorker>efraim: it's installable, the package exists, but it doesn't show up in slim F1 menu
<efraim>I saw it in wm.scm, I don't see how it ends up being different from openbox, which I've used for testing before
<efraim>in terms of showing up with slim
<MaliRemorker>so, what is wm.scm? :)
<MaliRemorker>(I'm a Guiler, I know what the .scm part is, but not much experience with GuixSD)
<efraim>it's where fluxbox's definition is; did you add wm to (use-package-modules ...) at the top of the config.scm?
<MaliRemorker>aaah
<MaliRemorker>I thought it was just another window manager, thanks
<efraim>:)
<MaliRemorker>right, so I'm doing `guix system reconfigure /etc/config.scm' after adding (use-package-modules wm ...) and the first thing it does is pull down all the substitutes from hydra. this takes aaaages
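
A minimal sketch of the change efraim describes, with everything else in the operating-system declaration elided:

    (use-modules (gnu))
    (use-package-modules wm)   ; brings fluxbox (gnu/packages/wm.scm) into scope

    (operating-system
      ;; ... host-name, bootloader, file-systems, users, services, etc. ...
      (packages (cons fluxbox %base-packages)))
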
<rekado>does git fetch work for you? I cannot seem to fetch the latest commits from savannah.
<rekado>works now.
<rekado>just very slow.
<pmikkelsen>hello, is there someone who usually takes care of updating haskell packages? if not, can you give me some advice on how I would do it :) I am trying to package stack, but there are a lot of packages that are too old..
<sneek>Welcome back pmikkelsen, you have 1 message.
<sneek>pmikkelsen, alezost says: dualbooting is possible. I described how I do it here: http://lists.gnu.org/archive/html/help-guix/2016-03/msg00087.html (there are other ways in that thread). As for MSWindows, to boot it with grub, you just need "chainloader +1" - see https://www.gnu.org/software/grub/manual/html_node/Chain_002dloading.html#Chain_002dloading
<rekado>pmikkelsen: we have updaters for all packages that also have importers.
<rekado>sneek later tell pmikkelsen we have updaters for all packages that also have importers. See “guix refresh -h” on how to use the updaters.
<sneek>Got it.
<efraim>sneek later tell pmikkelsen `guix refresh -t hackage' will show which haskell packages from hackage can be updated, I normally copy the information over by hand rather than anything automated. `guix import hackage foo' will download the source of foo and create a template of that version, which you can mesh with the version already in guix..
<sneek>Will do.
<efraim>sneek: botsnack
<sneek>:)
<rekado>pmikkelsen: welcome back :)
<pmikkelsen>rekado: thanks :), but what do you mean by importers? sorry if i sound stupid :)
<sneek>Welcome back pmikkelsen, you have 2 messages.
<sneek>pmikkelsen, rekado says: we have updaters for all packages that also have importers. See “guix refresh -h” on how to use the updaters.
<sneek>pmikkelsen, efraim says: `guix refresh -t hackage' will show which haskell packages from hackage can be updated, I normally copy the information over by hand rather than anything automated. `guix import hackage foo' will download the source of foo and create a template of that version, which you can mesh with the version already in guix..
<rekado>pmikkelsen: Guix has tools to create package expressions from upstream package meta data.
<rekado>we call those importers.
<pmikkelsen>rekado: aaah okay :)
<rekado>for Haskell packages we import from hackage.
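
To make the importer/updater distinction concrete: `guix import hackage foo` prints a package expression roughly of this shape, which you then paste into the relevant module and adjust by hand (as efraim describes above). The name, version, hash, and metadata here are made up, and the snippet assumes the usual module context of a Guix Haskell package file:

    (define-public ghc-foo
      (package
        (name "ghc-foo")
        (version "1.2.3")                 ; made-up version
        (source
         (origin
           (method url-fetch)
           (uri (string-append "https://hackage.haskell.org/package/foo/foo-"
                               version ".tar.gz"))
           (sha256
            (base32
             ;; Placeholder hash.
             "0000000000000000000000000000000000000000000000000000"))))
        (build-system haskell-build-system)
        (home-page "https://hackage.haskell.org/package/foo")
        (synopsis "Made-up synopsis")
        (description "Made-up description.")
        (license license:bsd-3)))
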
<pmikkelsen>efraim: okay, so I just do that and make a patch? and then send all the patches over email, or how is it done :)
<rekado>pmikkelsen: I recommend working with a git clone of the guix repository.
<rekado>then start a new branch
<rekado>on that branch you update the packages and make a commit for each successful update.
<rekado>then you turn the commits into patches with “git format-patch” and send them to the mailing list
<pmikkelsen>rekado: oh, thank you! this was what I was most confused about :) i will do that later today or later this week then :)
<rekado>pmikkelsen: great! Feel free to ask here if you need any help.
<pmikkelsen>I will, thanks! see you, gotta go
<civodul>'lo Guix!
<Jookia>o/
<janneke>hi!
<civodul>hydra.gnu.org is building again!
<civodul>but its HTTP server is kinda broken :-/
<efraim>i miss the build logs
<civodul>yeah
<civodul>and the binaries too :-)
<civodul>ACTION has preliminary gzip compression in 'guix publish'
<ng0>Hi
<ng0>is this error related to sourceforge only?
<ng0>;;; Failed to autoload make-session in (gnutls):
<ng0>;;; ERROR: missing interface for module (gnutls)
<ng0>ERROR: In procedure module-lookup: Unbound variable: make-session
<ng0>I don't install much; I build more than I pull software in, but this is new to me. It happens with lxterminal
<ng0>which happens to be on sf
<ng0>or is it related to nee's thread
<ng0>(gnutls 'name-constraints' test failure)
<lfam>ng0: It usually means that our package definition tries to fetch the source code over HTTP, but is then redirected to HTTPS. Since the package definition refers to HTTP, GnuTLS is not available in the build environment. The package definition should be updated to use HTTPS.
<lfam>It fails to download the source of lxterminal?
<ng0>yes
<ng0>at every mirror of sf
<lfam>Oh :(
<lfam>There are other problems with sourceforge right now. They changed the URLs of ~50% of our packages
<ng0>i know.. I hope this will be resolved soon
<lfam>Indeed, our sourceforge mirrors (in guix/download.scm) use HTTP instead of HTTPS
<ng0>I think I noticed this before and wanted to patch it
<ng0>I cannot patch this right now, I'm still doing the check part of gnupg-2.1.14 .. the README for the gpgscm tests is clear, but what I'll end up with might hopefully get fixed once upstream replies
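
The fix lfam describes amounts to switching the affected origin's URI from http:// to https://; a hedged sketch with a made-up package and URL:

    (source
     (origin
       (method url-fetch)
       ;; Fetch over HTTPS directly: when the URI says "http://" and the
       ;; server redirects to HTTPS, the download fails because GnuTLS is
       ;; not available in that build environment.
       (uri (string-append "https://example.org/releases/foo-"
                           version ".tar.gz"))
       (sha256 (base32 "..."))))          ; hash unchanged, elided here
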
<lfam>The annoying thing is that I asked SourceForge about all the changes, and the reply was "We haven't changed anything". So, I replied with more detail. Hopefully they write back on Monday
<ng0>oh
<lfam>Hopefully they will "escalate" my question to the engineers / operations staff
<lfam>Not that it really matters. We just need to update the URLs. But the changes are not consistent across our packages, so it would be good to see the "road map"
<ng0>yes
<lfam>When I make the sourceforge mirrors use https, I get "ERROR: In procedure connect*: Connection refused"
<lfam>Maybe they think I am doing a denial of service attack or something
<lfam>The joys of building a distribution
<ng0>the meta-perspective would be that source-based distributions are async distributed denial-of-service attacks
<lfam>I think that our level of usage is much too small to cause a denial of service. The async is very asynchronous
<ng0>the async is strong in this one
<lfam>Not to mention that we cache the source code
<ng0>yes :)
<lfam>It would be a great problem to have if SourceForge started to complain about Guix-users' traffic :)
<efraim>time for a "guix welcomes Debian to mips64el support" post? ;p https://nthykier.wordpress.com/2016/07/11/mips64el-added-to-debian-testing/
<lfam>Heh ;)
<ng0>the gnupg test phase is really holding me back from learning more about services.. idk.. not that the roadmap I work on has a fixed fast-forward goal and needs to be finished, but maybe someone else wants to do the gnupg check phase? the other 3 applications are updated and tested against each other, I only lack a building gnupg-2.1.14 to send them in.
<ng0>I think I'll give it 2 more days
<lfam>I will take a look after I fix or get sick of the SourceForge URL problem
<lfam>No critical fixes in gnupg-2.1.14, right?
<lfam>That was my understanding from the release announcement: https://lists.gnupg.org/pipermail/gnupg-announce/2016q3/000393.html
<ng0>well the only major thing is what I'm working on at the moment, the gpgscm thing.
<ng0>which is documented and also documented as "this is not ideal but here is how you can do it"
<lfam>Right, I read the gnupg-devel announcement of that yesterday.
<lfam>In their archives
<ng0>I expect it to be solved Tuesday morning, else I'll send in gnupg as an unfinished patch
<rekado>I would be very happy if someone who uses geiser to hack Guix would write a little blog post about this.
<lfam>Me too, ideally an introductory blog post, so I could learn to use it ;)
<rekado>I’m always struggling with geiser when I want to test a few changes in some module.
<efraim>from my quick look at it I would consider first compiling gpgscm and then doing the rest of gnupg-2.1.14
<lfam>efraim, ng0: Can't gpgscm be compiled with gnupg in the build phase, and then used in the check phase? Or does gnupg use gpgscm in the build phase?
<ng0>one sec
<ng0>was afk
<efraim>if it builds in the build phase, then I'd throw in a 'patch-gpgscm-shebangs before 'test
<efraim>from the release announcement it's just for the tests
<efraim>I didn't stick with it long enough to actually get it to build, I stuck it in my core-updates-next folder
<lfam>efraim: That was going to be my approach if nobody else made it work
<efraim>`guix refresh -l gnupg' says 28 dependants, that can be tossed into core-updates
<efraim>it's not like it builds reliably anyway ;p
<ng0>alright, I'm back
<ng0>gpgscm lives in the gnupg source, specifically tests/gpgscm/, and I think either in that dir or in tests/openpgp/ the README covers what has to be done. I asked upstream to move it (only gpgscm) outside of gnupg so that it can be installed as an altered tinyscheme, which it is based on
<lfam>efraim: I usually update gnupg on master
<lfam>I think we should update things on master whenever we can get away with it ;)
<ng0>I did not consider packaging it alone, only as a fallback solution
<ng0>so far I'm trying to make it work in the gnupg package
<lfam>ng0: I saw on gnupg-devel that they considered putting it in libgpgerror, since it will be common to all their packages, like libgpgerror
<ng0>:)
<ng0>great
<lfam>Or lib-gpgerror.
<lfam>They merely _considered_ it. I don't know if they will do it or not
<civodul>hey lfam
<lfam>Hi civodul
<civodul>lfam: gpgscm is going to be common to all the GnuPG-related packages?
<ng0>okay, I think I will create a minimal gnupg package named gpgscm to provide gpgscm; that will be easier for the path
<ng0>for the test suite starting with 2.1.14, yes
<civodul>fun stuff, as long as they don't end up maintaining another Scheme ;-)
<lfam>Here is the message with that idea. I don't know if they are going to do it or not. http://marc.info/?l=gnupg-devel&m=145391159009967&w=2
<ng0>it is tinyscheme with some added functionality, maybe.. I presume something which they can move their test suite to
<ng0>I don't know enough, just from the commit messages starting last January
<lfam>gnupg also did some bug fixes in gpgscm. I don't know if tinyscheme is aware of the changes.
<lfam>The bug fixes should probably go back upstream
<ng0>i think my email asked them about this
<lfam>Did they reply yet?
<ng0>I don't know, I did not look at it
<ng0>it's on gnupg-talk@
<lfam>Ah, okay :)
<lfam>ng0: I don't see that mailing list here: https://www.gnupg.org/documentation/mailing-lists.en.html
<lfam>gnupg-users?
<ng0>right
<lfam>Yes, found it
<lfam>I've sent a patch to address the gnutls problem
<ng0>I would like to have a mailing list which collects all the relevant announce* mailing lists.. Can you subscribe a mailman to lists?
<catonano>rekado: I'd love an introduction to the couple Geiser/Guix too
<ng0>When I build a GnuPG minus everything except gpgscm, I would have to alter the includes in the C files to include system libraries which previously were in place or in the source directory. As this package is not intended to be installed into user profiles, would this change in setup behavior cause problems, theoretically? Is my approach to doing this too complicated? I already have an almost working gpgscm package now
<efraim>it would be a native-input so it shouldn't interfere with any packages
<efraim>plus you could alter the install phase if you wanted to only install the bits you wanted
<ng0>I'm only concerned about the libs in common/
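
A heavily hedged sketch of the fallback ng0 and efraim are discussing: a package inheriting from gnupg that skips the test suite and installs only the gpgscm binary, meant to be used as a native-input. The phases and paths are guesses based on this conversation, not a tested recipe:

    (define-public gpgscm
      (package
        (inherit gnupg)
        (name "gpgscm")
        (arguments
         `(#:tests? #f                    ; gpgscm exists to *run* tests elsewhere
           #:phases (modify-phases %standard-phases
                      (replace 'install
                        (lambda* (#:key outputs #:allow-other-keys)
                          ;; Keep only the interpreter; drop everything else.
                          (install-file "tests/gpgscm/gpgscm"
                                        (string-append (assoc-ref outputs "out")
                                                       "/bin"))
                          #t)))))))
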
<rekado>ng0: I also have a patch for py2-pyqt-4 here.
<rekado>my package for simpleDSO depends on it, too.
<lfam>The SourceForge mirrors are definitely being inconsistent. URLs flap between success and failure from minute to minute
<ng0>if yours is proven to work, mine just needs some more time
<rekado>I was holding it back because I’m thinking about writing a replacement for simpleDSO
<ng0>ah
<ng0>I don't know if mine differs from yours. I think I posted it to the list at some point
<ng0>earlier question was solved here :) https://gcc.gnu.org/onlinedocs/cpp/Include-Syntax.html
<lfam>There's a lot of variety in the new SourceForge URLs. I don't think they can be changed programmatically.
<ng0>how many packages are affected in numbers, not percent?
<lfam>ng0: ~100
<alcasa>tried using substitutes on guix, but about half the packages were still built locally. is it possible to use guix solely with binary packages?
<lfam>I've done about 10 so far
<alcasa>* without using build offloading
<ng0>lfam: oh
<lfam>alcasa: Not as far as I know. The system is designed to transparently substitute binaries when they are available, but at its core it's about building from source. However, our goal is to provide substitutes for the current package tree, and at least as far back as the latest Guix release.
<lfam>It's possible that you are building so much because your version of the package tree is too old. You can update it with `guix pull`. The other possibility is that you tried to build something we recently updated, and we don't have substitutes for it yet. Or, there could be some networking problem or a problem with our servers.
<lfam>And, the central point of our build farm died a few days ago. It is coming back online but for the last few days, there have been fewer substitutes available than normal
<alcasa>lfam: thanks for the answer. when I call guix pull initially it tries to manually build GCC toolchains (which take a lot of time on smaller machines). I made sure that substitutes were available. is it expected behavior that guix pull will install build tools?
<lfam>alcasa: It's possible but infrequent. I think it means that the newer version of Guix also requires newer versions of those build tools. I run `guix pull` several times per week and it only involves building the Guix dependencies every couple months
<lfam>Did you just install 0.10.0?
<alcasa>lfam: yes, I installed 10.0 from the AUR repository. I am on Arch Linux. but good to know that infrequent builds are expected. maybe I should really look into build offloading
<lfam>10.0 ;)
<alcasa>just switch to the Chrome numbering system to make development seem faster :p
<lfam>You will have to build sometimes. It's not like Debian or Arch where it never happens. Maybe with time we will have the resources to make it very rare
<lfam>But, we do updates of leaf packages on the master branch, which is where `guix pull` comes from. You might try to install the updated package before our build farm gets to it
<lfam>Core package updates, which would require building a significant portion of the package tree, are pushed to a staging branch which gets built before it is merged into master. So, we try to not leave ourselves building the entire world on our personal machines.
<lfam>It would be impractical
<alcasa>thanks for the extensive answer. building the toolchains just seems to take so damn long. I had even compiled the kernel in much less time
<lfam>It really is very computationally expensive
<lfam>To be honest, I stopped using my little ARM single board computer when I got into Guix. I realized that
<lfam>Oops, too early
<lfam>It gave me another perspective on the concepts of positive and negative liberty. I think that very underpowered computers do not, on their own, actually give much freedom to the user, since they can't build the base of the system. Mine was unstable when loaded for long periods of time. A slow and stable machine might be fine
<lfam>Many trade offs to weigh :)
<lfam>The frustrating part is that some of the ARM boards can be booted and run with entirely free software, which is unfortunately very rare with the Intel-compatible stuff
<alcasa>lfam: you are not independent when using ARM boards per se. they came out of embedded systems. they just happened to have become really powerful in the last couple of years.
<alcasa>most arm boards are proprietary hell
<lfam>Yes, but there a special few
<lfam>Sometimes only due to the reverse engineering efforts of the community
<alcasa>without hardware manufacturers more committed to free hardware nothing really will change
<pacujo>Hello
<pacujo>I'm new to guix and trying to map it to our existing build processes.
<pacujo>The source code is under version control, and anybody can build it (say, with build.sh).
<pacujo>Also, "official" build servers produce a new build whenever a new commit is pushed to the repository.
<pacujo>Occasionally, we want to release/publish a build officially. We have a command to do that.
<pacujo>Now the question is, how would we have to adapt our process to guix?
<lfam>pacujo: I'd start by creating a new package definition that uses the gnu-build-system. Presumably, you'd replace the default build step with something that runs `build.sh`
<pacujo>Where would the package definition go? Among the sources like build.sh?
<lfam>I've never created a Guix package definition that is not meant to be added to the official Guix repo. But, I recently downloaded and built this fun game using the Guix package definition it provides: https://davexunit.itch.io/lisparuga
<lfam>That person is in this channel, btw
<lfam>Check out section 3.2 Invoking Guix Package of the Guix manual
<lfam>The 'guix.scm' thing it mentions is what lisparuga does
<pacujo>I'll take a look, thanks.
<lfam>Poke around and then come back with more questions :)
<rekado>in the projects that I’m working on I also follow the “guix.scm” convention.
<rekado>This is not the same as creating a package, though.
<rekado>it’s used for defining a development environment, primarily.
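
Putting lfam's and rekado's suggestions together, a hedged sketch of a top-level 'guix.scm' that wraps build.sh with the gnu-build-system. Every name, path, and the license below are placeholders for the project's real ones, and the install step would need the same treatment if build.sh does not provide a "make install" target:

    (use-modules (guix packages)
                 (guix gexp)
                 (guix build-system gnu)
                 ((guix licenses) #:prefix license:))

    (package
      (name "my-project")                                 ; placeholder
      (version "0.0-git")
      ;; Build from the checkout that contains this file.
      (source (local-file (dirname (current-filename)) "my-project-checkout"
                          #:recursive? #t))
      (build-system gnu-build-system)
      (arguments
       `(#:tests? #f
         #:phases (modify-phases %standard-phases
                    (delete 'configure)                   ; no ./configure script
                    (replace 'build                       ; run the in-house script
                      (lambda _ (zero? (system* "sh" "build.sh")))))))
      (synopsis "In-house library built by build.sh")
      (description "Placeholder definition for the in-house build.")
      (home-page "https://example.org/")
      (license license:gpl3+))

With that file at the top of the repository, `guix build -f guix.scm` builds the package and `guix environment -l guix.scm` opens a shell with its dependencies, which is the convention pacujo runs into below.
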
<lfam>pacujo: Listen to rekado :)
<pacujo>I'm looking at https://git.dthompson.us/lisp-game-jam-2016-spring.git/tree
<rekado>pacujo: lfam is right about using default build systems like the GNU build system. This makes packaging a lot easier for everyone.
<pacujo>The README file in no way seems to refer to guix.scm.
<lfam>pacujo: You should read the 'guix.scm' file itself :)
<pacujo>Let me get this straight: if I cloned the repository and followed the instructions in README, I would get build errors (because I don't have the prerequisite packages yet).
<pacujo>So, I will need to run guix environment -l guix.scm first.
<lfam>I believe that's what I did, although I don't remember exactly
<pacujo>That would answer a question I would have later.
<pacujo>The first question is, having run ./configure and make, how do I create a package?
<pacujo>(Actually, maybe instead of make, I would run guix build -f guix.scm.)
<lfam>I don't understand the question. Can you rephrase it?
<pacujo>In our current build system, a package is a special tarball.
<pacujo>In traditional RedHat, it would be a .rpm file.
<lfam>Ah, in Guix, we tend to refer to the build recipe itself as the package.
<lfam>Notice the (package) object in that guix.scm file
<lfam>(define sly (package ...
<pacujo>So is there something analogous to an .rpm?
<lfam>I'm not familiar with RPM, although I have used dpkg / apt
<pacujo>Ok, a .deb.
<lfam>You want to distribute the binary that results from building the package?
<lfam>In that case, you'd use `guix archive --export`, perhaps with --recursive
<lfam>And, you would need to distribute the signing key of the build machine
<pacujo>Yes, we want to sprinkle some holy water on a particular library build and archive it.
<lfam>I personally think you will be better served by archiving the build recipes, but `guix archive --export` will let you archive the build artifact
<lfam>If you preserve the recipes, then, later on, you can reproduce and inspect the steps you took to create the binary.
<pacujo>The build recipes need to be archived as well (we do that for documentation purposes).
<pacujo>What we document is that git repository and the tag; the build command is implicitly clear.
<lfam>Yes, that's what I'd recommend
<pacujo>Ok, thanks again, lfam.
<lfam>Although, I'm a relative novice :) For real expert advice, go to help-guix@gnu.org
<lfam>There are some people using Guix in production in institutional settings. They will have more complete advice
<pacujo>Ok, so the package format is a .nar file.
<pacujo>Sorry, *archive* format
<lfam>I have to go. Good luck!
<civodul>pacujo: the "archive format" in Guix is unimportant, unlike rpm, deb, etc.
<civodul>what matters most are the package recipes
<pacujo>Ok, moving on to the "store".
<pacujo>We have a store. Essentially, golden artifacts are copied to it using scp, and they are retrieved via HTTPS.
<pacujo>Not sure if our "store" is analogous to guix' "store."
<pacujo>If we make a build that depends on those golden artifacts, a local cache gets checked first. If it isn't in the cache, the artifacts are retrieved from the central store.