IRC channel logs

2016-08-23.log

<retroj>i have a lighting device plugged into /dev/ttyUSB0, and ola reports: open(/dev/ttyUSB0): Permission denied
<retroj>what should i do about this?
<retroj>i'm currently testing in a pre-inst-env. i don't know whether that makes a difference
<retroj>ah, that device is in the dialout group.
<retroj>funny anachronism :-)
***fkz is now known as Guest7586
<retroj>woot, i have ola working
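
    The usual fix for this kind of "Permission denied" on /dev/ttyUSB0 is to add the
    user to the "dialout" group. On GuixSD that can be declared in the system
    configuration; a minimal sketch (the account name and other fields are
    placeholders, only supplementary-groups matters here):

        (user-account
          (name "retroj")
          (group "users")
          (supplementary-groups '("wheel" "dialout")))
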
***harlequn79 is now known as harlequn78
<davexunit>ACTION thinks about first-class environments
<lfam>Does anyone have experience with the '-mtune' option to GCC? I notice that ImageMagick is setting it to the host machine's processor for i686 and x86_64: <http://hydra.gnu.org/build/1441619/log#line-2626>
<lfam>Not on armhf or mips64el, however
<lfam>Here is the GCC documentation of '-mtune': <https://gcc.gnu.org/onlinedocs/gcc-4.9.4/gcc/i386-and-x86-64-Options.html#i386-and-x86-64-Options>
<lfam>Should we set it to 'generic' on the affected architectures?
<lfam>Or, is this not a problem at all?
<lfam>Well, I guess it's a problem in that different machines will *definitely* create different outputs
<mark_weaver>lfam: yes, that's a problem
<mark_weaver>specifically, it's a problem because it makes builds non-deterministic
<lfam>Right, my locally compiled imagemagick is tuned for 'ivybridge', whereas the latest Hydra build is for 'haswell'
<mark_weaver>we should find a way to inhibit that
<lfam>Conditionally set it to 'generic' for x86_64 and i686 builds?
<davexunit>ugh, processor specific tuning. it's the ATLAS problem.
<mark_weaver>I would try to just avoid passing -mtune at all
<mark_weaver>although I suppose if -mtune=generic is more or less equivalent and easier to accomplish, that would also be fine
<mark_weaver>lfam: can you email bug-guix@gnu.org about it so it's not forgotten?
<lfam>Sure
<mark_weaver>thanks!
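
    ImageMagick picks the -mtune value through its --with-gcc-arch configure
    machinery, so one way to inhibit it from the package definition is to pass the
    corresponding disable flag (lfam later mentions a "--without-gcc-arch patch").
    A hedged sketch, not necessarily the patch that was applied; preserving any
    other configure flags is left out for brevity:

        (arguments
         '(#:configure-flags '("--without-gcc-arch")))
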
<jmd>mark_weaver: On the mailing list you said you were running GuixSD on the Lemote Yeelong. How did you manage that? I didn't think there was an installer.
<mark_weaver>sneek: later tell jmd: I didn't use an installer. I ran 'guix system init' from Guix running on top of another system, long ago, and later removed all remnants of the old system.
<sneek>Got it.
<mark_weaver>sneek: later tell jmd: in my case, the old system was a hand-built system based on Cross [GNU/]Linux from Scratch.
<sneek>Will do.
<mark_weaver>sneek: later tell jmd: creation of an installer is blocked on getting 'guix system vm' working on mips64el, which is blocked on getting qemu to work on mips64el. we may also need a different kernel config for whatever mips machine qemu emulates.
<sneek>Got it.
<mark_weaver>sneek: later tell jmd: although perhaps it would be worthwhile to create an installer without qemu, using 'guix system init', with some of the steps being done manually.
<sneek>Will do.
***boegel|quassel is now known as boegel
<retroj>i'm developing a program that links to a certain library (which is in guix). how do i set up my development environment to be able to build my program?
<retroj>in other words, in other distros, one would install a -dev package and be good to go. what is the guix way?
<Steap>retroj: you do *not* install the library :)
<davexunit>just install the package.
<Steap>retroj: the best way would be to use "guix environment"
<davexunit>yes, that would definitely be the best way, if you can handle it.
<davexunit>but you can also go the less awesome route of just installing gcc-toolchain and whatever other packages into your profile and setting the env vars that guix tells you to set.
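
    To make that concrete, a hedged example ("the-library" and "the-program" are
    placeholders for the actual package names):

        # A throwaway shell with a compiler toolchain and the library:
        guix environment --ad-hoc gcc-toolchain pkg-config the-library
        # Or, if a package for the program itself already exists, get all of
        # its declared build inputs instead:
        guix environment the-program
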
<ng0>Hi, PyBitmessage is done now, just in time for 0.6.1 release. The patch has just been sent out
<ng0>*patches
<ng0>I will go through my work-in-progress folders in the next days/weeks to rebase what I have there and let interested people pick those applications (like I did with awesome-3.5.9) so I can focus on packaging for gnunet & psyc* related things full-time again
<ng0>for the pbpst related patches, I had a chat with the dev yesterday and will apply the latest fixes, see how it behaves now and send out rebased patches.
<rekado_>ng0: cool! I'll take a look at them some time this week.
<rekado_>ng0: do you have a list of all your patches that still need review?
<rekado_>I haven't been able to process much of Guix-related emails in the past weeks.
<ng0>I can go through my tags and send a list including gnu.org links and/or subjects, not today but later
<rekado_>ng0: thanks. Please just send it to me at your convenience. I'll take some time to review them.
<ng0>one set I know of: bavier is reviewing the 13 perl patches that need to go in before surfraw can be applied; those are time consuming I guess, and I have patience :)
<ng0>okay, I will CC you. thanks rekado_
<rekado_>one problem of my email setup is that I'm sending and receiving guix-devel and bug-guix email on two accounts. mu4e does a poor job of merging and displaying threads, so I get all mail twice and it appears out of order when I view them.
<rekado_>bit messy.
<rekado_>need to add a rule to just bin all guix-devel mail on my work mailbox, unless I'm in Cc.
<rekado_>does anyone here have a software recommendation for sync'ing *and* filtering email? I'm using offlineimap for sync but I'd really like to run a couple of filters locally upon sync to automatically move mail around or tag it (e.g. when my name is mentioned).
<retroj>is a build log stored for failed builds?
<rekado_>currently, I can only do this through the web interface of my mail provider (don't have a local server for mail yet)
<ng0>I run a wrapper around getmail which involves GNU parallel, notmuch tag, rm
<ng0>is it: (version (string-append "0.0.1" "." revision (string-take commit 7))) or: (version (string-append "0.0.1" "-" revision (string-take commit 7))) (just some examples for a case)
<ng0>sorry
<ng0>revision "-"
<ng0>and "."
<rekado_>2.0.11-3.cabba9e
<rekado_>this is an example from the manual (use "i" for index, then search for version)
<rekado_>2.0.11 is the version, 3 is the Guix-internal revision, cabba9e is a substring of the commit
<rekado_>in your example it would be (string-append "0.0.1-" revision "." (string-take commit 7))
<ng0>ah. okay, thanks
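
    A runnable Guile illustration of that convention (the commit hash is a
    placeholder):

        (let ((revision "1")
              (commit "cabba9e0123456789abcdef0123456789abcdef0"))
          (string-append "0.0.1-" revision "." (string-take commit 7)))
        ;; => "0.0.1-1.cabba9e"
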
<ng0>do i need to export something to let $application find CA certs at runtime?
<ng0>nss-something is in inputs
<rekado_>yes, probably. Search the manual index for X.509.
<ng0>okei
<rekado_>there are different env variables for this kind of thing.
<rekado_>depends on the application.
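
    For example, the manual's X.509 section describes variables along these lines,
    assuming nss-certs (or an equivalent certificate package) is installed in the
    profile; as rekado_ says, which ones an application honors varies:

        export SSL_CERT_DIR="$HOME/.guix-profile/etc/ssl/certs"
        export SSL_CERT_FILE="$HOME/.guix-profile/etc/ssl/certs/ca-certificates.crt"
        export GIT_SSL_CAINFO="$SSL_CERT_FILE"
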
<ng0>what, if anything, can I suggest to the developer so that the "make simple" build phase they created allows systems like Guix and Nix to patch make.sh, which is generated during this phase, before it is executed?
<ng0>otherwise I know how to override it already, but I'd like to fix it
<ng0>could it be better to have the generation part in one phase, and the execution of make.sh in a phase afterwards, so that nix and guix can hook in between those phases?
<malthe>anyone created a guixsd image on google compute engine?
<ng0>I mean this is guile, there has to be something which could enable instructions to interactively patch the file... if file is detected, run patch-shebang,...
<ng0>could also suggest to move @./make.sh to a second phase.. easier.
<ng0>I'll go with just patching it for now
<rekado_>ng0: normal build systems don't execute some generated script.
<rekado_>usually, there is a configuration step before building.
<ng0>this is tup.
<ng0>you will understand once you see it
<ng0>but we're fixing it
<rekado_>ah, okay, then "normal build system" does not apply :)
<ng0>this is an attempt to let 'tup generate' and full tup coexist
<rekado_>I don't understand what you suggest above with "if file is detected"
<rekado_>splitting generation of the file and executing it into two phases seems best
<ng0>never mind. this is a problem in the build script.. the order was changed, we're fixing it
<rekado_>ok
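
    On the Guix side, the split rekado_ and ng0 describe could look roughly like
    this once the generation and execution of make.sh are separate steps; the
    phase name 'generate-script is hypothetical, only patch-shebang is a real
    (guix build utils) procedure:

        (arguments
         '(#:phases
           (modify-phases %standard-phases
             ;; 'generate-script stands in for whatever phase runs
             ;; "tup generate make.sh"; patch the result before it runs.
             (add-after 'generate-script 'patch-generated-script
               (lambda _
                 (patch-shebang "make.sh")
                 #t)))))
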
<retroj>hi, i'm having some trouble with tests with the ola package. http://retroj.net/scratch/ola-guix/ contains ola.scm (package definition), config.log, and test-suite.log, which indicates a missing python module, google.protobuf. any thoughts?
<retroj>config.log shows that ./configure checked for the existence of the python module
<steveeJ>is GuixSD using a fork of hydra or is the original compatible with guix?
<ng0>so now rust on guix is something I need in order to succeed with one part of the projects I focus on primarily... what's the status update on this?
<rekado_>steveeJ: GuixSD downloads substitutes from hydra or from a server started with "guix publish". The protocol has not changed as far as I know.
<rekado_>steveeJ: we are in the process of replacing hydra with cuirass.
<rekado_>steveeJ: you cannot use the Nix hydra servers with Guix because they don't provide matching binary substitutes for Guix package builds.
<steveeJ>rekado_: I'm looking at using either nix or guix for a project and it includes having a build-farm
<steveeJ>rekado_: I see, so it really is either/or, and I couldn't feed one hydra with guix and nix at the same time
<rekado_>I don't have any experience running hydra myself, so I don't know what's involved.
<rekado_>I just know that there's no overlap in the software dependency graphs of Nix and Guix, because even the bootstrap binaries differ, so substitutes for Nix will differ from substitutes used with Guix.
<rekado_>ACTION goes afk
<steveeJ>rekado_: thanks for the information
<ng0>Looks like I need some kind of hydra setup sooner or later... always manually checking if something still builds with the latest version is annoying
<davexunit>hopefully CI will be easier to set up when we can use Cuirass instead of Hydra
<ng0>my debugging currently also involves uploading build-log+build-env+patches to give developers more information
<ng0>that's also done manually
<OrangeShark>Is Cuirass getting written in Guile?
<ng0>it's funny how I wanted to learn all these languages and learned basics and never got to write more than a script and now I learn through debugging..
<davexunit>OrangeShark: yes
<OrangeShark>awesome
<davexunit>Hydra is going to become increasingly unusable for us as the Nix project rewrites it
<davexunit>I already don't think we can use the latest upstream version
<davexunit>ludo made it seem that way, anyway, when I was curious about running my own.
<efraim>when fps set up hydra it took him almost 3 days
<ng0>:O
<efraim>as far as building it, perl-gd(?) is the package blocking it
<efraim>but if you marked tests? #f it should build, and then it's just configuring it
<davexunit>yeah it's this nasty perl thing that no one here has any interest in maintaining
<davexunit>the Nix project is rewriting it in another language, too
<davexunit>so we really need our own written in Guile that we know how to hack on
<efraim>same basic issue with the daemon, no one wants to deal with the c++
<davexunit>right
<davexunit>we've got some basic components in place to replace the daemon
<efraim>well, i'm back to offline, --sources=transitive is great, internet should get fixed tomorrow
<davexunit>later efraim
<ng0>why is perl-gs blocking?
<ng0>*gd
<ng0>does getting rust on guix depend on the success of https://github.com/oriansj/stage0 ? that's what I understand of past discussions
<davexunit>ng0: probably not
<davexunit>we package other self-hosted compilers right now, albeit in an unideal way.
<ng0>I
<davexunit>so we could package rust similarly, I suspect, and add it to the list of compilers that we need to bootstrap properly in the future
<ng0>I'm interested in it because I want to stay ahead of releasing, and include a guix.scm for a project before it gets to release state.. prototype2016 moved to a rust build system.. easy for nix, currently impossible for guix
<sankey>how can i determine the reason a path in the store is still "alive"
<sankey>when i follow its --referrers to a profile, that profile is not referenced from any user
<sankey>i even checked under /var/guix/profiles/per-user/ and there's no symlink which points to that profile
<sankey>maybe an open file descriptor to the profile can cause this?
<habs>Hi guys -- I'm unable to update my system because I get "error: failed to unpack source code" whenever I do "guix pull". My network and tar packages seem to work fine though. Which parts of this strace log I made would be most relevant to the situation so I can fix it? http://lists.gnu.org/archive/html/bug-guix/2016-08/txtrNZlFiFYKn.txt
<malthe>I get a test FAIL (home-page: Connection refused) - https://gist.github.com/malthe/6b038ca8aecf9998377e7a4ccf9935b6
<malthe>what could be wrong?
<retroj2>the problem i'm having with the ola package (see logs here: http://retroj.net/scratch/ola-guix/ ), i would guess, has to do with PYTHONPATH. the package depends on python-protobuf, which does provide the module google.protobuf, but then python tests during check phase fail to find the module. is there something else that the package needs to do to make python modules available in its build environment?
<alezost>retroj2: try to grep for PYTHONPATH in the "gnu/packages/*.scm", perhaps you'll find something
<lfam>bavier: How can I check if ImageMagick is built to support SSE2?
<lfam>Change of subject: the next release of Git (2.10.0) will no longer display the short PGP key id
<lfam>I'm not sure if it's the long ID or the fingerprint, but some improvement :)
<retroj2>alezost: ok, thanks
<lfam>This is the ImageMagick configure log with my --without-gcc-arch patch. No mention of SSE2 or SIMD: http://paste.lisp.org/+6Y0T
<retroj2>i'm not seeing anything that clearly looks like a parallel situation in a grep for PYTHONPATH
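
    One pattern that might apply here is setting PYTHONPATH explicitly before the
    tests run (a sketch only; the input name and the site-packages path depend on
    the actual package and Python version):

        (add-before 'check 'set-pythonpath
          (lambda* (#:key inputs #:allow-other-keys)
            (setenv "PYTHONPATH"
                    (string-append (assoc-ref inputs "python-protobuf")
                                   "/lib/python2.7/site-packages:"
                                   (or (getenv "PYTHONPATH") "")))
            #t))
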
<sankey>how come there are no runtime dependencies listed for python-magic: http://hydra.gnu.org/build/1363961#tabs-runtime-deps
<sankey>but in the package definition, the "file" package is an input: http://git.savannah.gnu.org/cgit/guix.git/tree/gnu/packages/python.scm#n8699
<sankey>how do i make it so that file is a runtime dependency?
<lfam>sankey: Sounds like a bug. Can you check if the 'hard-code-path-to-libmagic' build phase is actually working?
<lfam>We haven't noticed since AFAICT there are no python-magic users in our tree
<sankey>lfam: can you explain "there are no python-magic users in our tree"
<lfam>diffoscope can potentially use it, but it can also use `file` directly, so I prefer that.
<lfam>Sorry for the jargon :) I mean that I don't think there are any other packages using python-magic in the Guix package collection
<lfam>So, we wouldn't have noticed those other packages not working right
<sankey>ah
<sankey>mhm
<lfam>I didn't look that closely, just a quick `grep`. I could be wrong
<sankey>the custom build phase hard-code-path-to-libmagic works because on my system it complains that /gnu/store/3wxmzclxa0yjpjjagjw55nrn33g3s833-file-5.25/lib/libmagic.so doesn't exist
<sankey>until i manually run "guix build file"
<sankey>(but every time i guix gc, it deletes file again :P )
<lfam>Hm, I'd say that counts as "not working"
<sankey>oh
<sankey>does libmagic.so exist elsewhere?
<lfam>The python-magic package should refer to the libmagic.so in the file package. That reference should prevent the garbage collector from deleting the file package
<lfam>I'm looking at it now. There is also the package python-file, which does something similar
<lfam>Okay, python-file works as expected
<davexunit>bummer, gnome isn't recognizing the external monitor I plugged into my laptop :(
<lfam>sankey: I think the problem is that the egg gets compressed, so the Guix reference scanner can't look into the binary and find store references. Can you try this patch? http://paste.lisp.org/+6Y0W
<lfam>With that patch, `guix gc --references $(./pre-inst-env guix build python-magic)` shows the reference to the file package, so it should be protected against garbage collection
<sankey>lfam: thanks for looking into it! i will try later
<lfam>sankey: Okay, I'm going to go ahead and push the patch. It appears to solve the issue on my end
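
    For reference, the hard-code-path-to-libmagic phase discussed above follows
    the usual substitute* pattern, roughly like this (a sketch, not necessarily
    the exact code in gnu/packages/python.scm):

        (add-before 'build 'hard-code-path-to-libmagic
          (lambda* (#:key inputs #:allow-other-keys)
            (let ((file (assoc-ref inputs "file")))
              (substitute* "magic.py"
                (("ctypes\\.util\\.find_library\\('magic'\\)")
                 (string-append "'" file "/lib/libmagic.so'")))
              #t)))
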
<bavier1>lfam: for sse2 in imagemagick, it might need the -msse2 compiler flag
<bavier1>I think that's what other packages use
<lfam>bavier1: Okay, I'll add a new patch to the series
<bavier1>of course, only for x86_64 and i686
<lfam>bavier1: Something like the package definition of setbfree?
<bavier1>lfam: right
<lfam>Okay, will do
<bavier1>though, in that package, I'm surprised even -O3 is left out from other systems
<bavier1>but maybe it led to test failures, idk
<lfam>I lack the hardware to check
<slim404>exit
<retroj2>ola provides a C++ library, a daemon, some utils, and some bindings for various other programming languages. thinking about the extra bindings, would a good way to set that up be to provide them as separate outputs of the package?
<bavier1>retroj2: I would consider it if it would trim down the closure size of a typical installation
<retroj2>i just don't want a dependency to pull in python or java if the person doesn't want that
<bavier1>retroj2: right, that'd be a perfect candidate for separate outputs
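
    A hedged sketch of what that could look like for ola (the output names are
    hypothetical, and the install phases would have to route each set of bindings
    to the matching output):

        (outputs '("out"       ; C++ library, daemon and utilities
                   "python"    ; Python bindings
                   "java"))    ; Java bindings
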
<bavier1>so, with the last llvm upgrade we did, we were wondering how many versions of llvm it makes sense to keep around. I was thinking about packaging a cool project, one that's linked on llvm.org even, and it requires llvm 3.4
<efraim>we have what, 3.{4..8}?
<efraim>and 3.9 is either out or should be out soon
<sneek>So noted.
<bavier1>3.{5..8}, and 3.9 should be releasing soon
<efraim>qt-4 has been deprecated for years and projects still use it, llvm-3.4 should be fine
<bavier1>yeah, I guess 3.4 is only just more than 2 years old
<jmd>Why is constructing a profile such a slow operation?
<bavier1>jmd: it's io intensive
<efraim>well that's embarrassing, cmake fails to build on core-updates, and i'm the one that bumped it
<efraim>i wanted to test adding libvdpau to mesa
<jmd>bavier1: io? You mean to the builders?
<efraim>disk IO i assume
<jmd>I can't think why it should be particularly disk intensive.
<bavier1>jmd: the union-build procedure has to scan through files in the store and symlink them all into the profile
<bavier1>it's very expensive on my spinning disk with failing sectors, but completes in almost no time on my machine with an SSD
<retroj2>when a package has multiple outputs like i described, when more than one is installed, does each have its own directory in the store, or do they share a single build? when one depends on the other (like python bindings depending on the c++ library), does that involve multiple copies of the library being built?
<bavier1>retroj2: the outputs will be produced in a single build, but they'll be installed to separate store directories
<retroj2>thank you
<jmd>bavier1: Why does it have to scan? I thought the idea of the database was that it knows where to look.
<bavier1>jmd: the database doesn't know about all files in a store dir
<jmd>well it knows the package directories, doesn't it?
<mark_weaver>lfam, bavier1: I don't have time to read the whole log, but I saw mention of imagemagick and SSE. On i686, we shouldn't assume support for SSE. On x86_64, SSE2 is part of the base specification (always present), and GCC should know this without being told.
<mark_weaver>in fact, this is why double-rounding doesn't occur on x86_64, because SSE/SSE2 instructions and registers are always used for even scalar floating point operations, and the use of the registers is part of the ABI.
<bavier1>mark_weaver: oh good, thanks for clarifying
<bavier1>I should really be able to keep such things straight
<mark_weaver>no need to feel bad about it, I'm sure you have plenty of useful knowledge that I lack :)
<bavier1>:)
<mark_weaver>jmd: profile generation is as fast as I could make it without the database knowing the entire filesystem structure of every package.
<mark_weaver>jmd: the fundamental problem is that 'stat' is a synchronous operation. there's no way to ask the kernel to perform stats of 100 files and return the results all in one go.
<mark_weaver>(at least not without making 100 threads)
<mark_weaver>each 'stat' has to do a disk seek before returning.
<mark_weaver>I guess I could investigate the possibility of using multiple threads to work around that limitation, although I fear what it would do to the readability of that code.
<mark_weaver>anyway, I have to go afk. happy hacking!
<efraim>i'm bumping mesa in core-updates from 12.0.0 to 12.0.1
<davexunit>we ought to be able to eat docker's lunch for this use-case: http://blog.dubizzle.com/boilerroom/2016/08/18/setting-development-environment-docker-compose/
<davexunit>I've been thinking a lot about making 'guix environment' a more "first-class" thing, with a data structure for describing environments, much like you'd describe a package or operating system
<davexunit>I'd like to make containers an optional feature, rather than mandatory, for its use, though.
<davexunit>my current thinking is that the user can specify the packages they want in their environment as well as shepherd services that will be started by a shepherd instance running under their user account.
<davexunit>this way, in a single command you could create a development environment that, for example, spawned mysql, nginx, and redis daemons that are needed for a web application to work
<davexunit>this is even better than putting every single dependency in its own container
<efraim>wow
<davexunit>I was hoping to be able to use all of the existing GuixSD services, but that won't be possible because it would require root privileges.
<davexunit>and making containers
<efraim>so, how would this be different than a VM with a shared /path/to/my/source ?
<davexunit>efraim: no virtualization necessary
<efraim>cool
<davexunit>(unless you want it)
<davexunit>so much more lightweight
<davexunit>rather than running mysql as a system service, you can run an instance specifically for the application you are developing.
<davexunit>I think that's convenient. it's usually too annoying to do this on other distros so you just use the system service or resort to Docker
<davexunit>we have all the tools to make this happen. it just needs a decent user interface. I work on several large web applications that would benefit from this.
<davexunit>the other thing I really want to implement is some sort of 'containerized-application' procedure, that wraps an arbitrary binary in a script that starts it within a container.
<lfam>bavier1, mark_weaver: I saw your message about not needing to specify that ImageMagick should build with SSE on x86 architectures. So, I will not produce a patch for that :)
<davexunit>so you could make, say, an icecat wrapper that allows file system access to only ~/.mozilla and ~/Downloads.
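
    Purely as an illustration of the idea davexunit sketches above (none of these
    forms exist in Guix as of this log; the record name and service constructors
    are made up):

        ;; Hypothetical first-class environment declaration.
        (environment
          (packages (list gcc-toolchain mysql redis nginx))
          (services (list (mysql-service)
                          (redis-service)
                          (nginx-service))))
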
<efraim>lfam: did you want to pull master (and your graft) into core updates, ungraft and update mit-krb5?
<bavier1>davexunit: that'd be really cool
<bavier1>I'm imagining an environment declaration much like the system declaration
<davexunit>bavier1: yeah, that's the idea.
<bavier1>the two overlap I guess; it sounds close to something like 'guix system container'
<davexunit>yeah
<davexunit>the difference being that I don't want to run a whole OS
<davexunit>luckily the code will overlap, too. I can use a lot of what has already been written.
<bavier1>yup
<lfam>I'm about to extract a whole bunch of bug fix patches from the libtiff CVS repo. It's my first time using CVS, so I'd like some confirmation that I'm doing it right.
<lfam>For example, for this bug: http://bugzilla.maptools.org/show_bug.cgi?id=2543
<lfam>I use this command `cvs diff -u -r1.37 -r1.38 tools/tiffcrop.c`
<lfam>Is it normal to need to specify the filename like that?
<lfam>Hm, that command creates the patches at the incorrect "patch level". They would require `patch -p0` whereas the existing libtiff patches use `patch -p1`
<lfam>It would be nice if the patches could be recreated directly from the CVS repo, rather than requiring editing
<lfam>To make it easier to audit
<bavier1>is it possible for a guixbuilder to get root perms in the build environment?
<davexunit>bavier1: no