<davexunit>okay, so the container tests fail due to a case that I'm not sure how to handle regarding user namespaces <davexunit>I detect whether or not the user's id is 0, the root user, and if so I create a user namespace mapping of 65536 uids. if the user is unprivileged, only their uid can be mapped to the parent namespace, so I only map a range of 1. <mark_weaver>davexunit: that reminds me: while working on network-manager, I noticed that its test suite makes use of containers and user namespaces, and I found some code that might have some useful hints for you. <davexunit>I've been using a tool called 'pflask' as a reference mostly <mark_weaver>davexunit: in the NetworkManager sources, look at 'unshare_user' in src/platform/tests/test-common.c <mark_weaver>this bit caught my attention as something that might not be widely known: <mark_weaver>"Since Linux 3.19 we have to disable setgroups() in order to map users. Just proceed if the file is not there." <davexunit>mark_weaver: I actually already have that implemented. :) <davexunit>the test suite issue has to do with nested user namespaces. my logic doesn't hold up for this case. <davexunit>I try to create a >1 uid range, because the user is uid 0 in the container, but it fails because I don't have such permissions <davexunit>presumably because the *real* uid, going all the way up the parent chain to the root user namespace, is not 0. <davexunit>I didn't expect this to be a problem because the container has its own special proc file system. <mark_weaver>actually, I had to disable the container tests because they failed within the build container. at some point I'd like your input on that, but first I should get the package mostly working and posted. <davexunit>ah, but in this test... I'm not creating new pid or mount namespaces <davexunit>I should detect this and only map a single uid. <davexunit>getting user namespaces right will take a lot of fiddling, it seems. I still don't fully understand them.
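The setgroups/uid_map dance discussed above can be sketched like this — an illustrative Python sketch only, since the actual implementation being discussed is Guile code; the pid, ids, and helper name are hypothetical:

```python
def userns_setup_writes(pid, inside_uid, outside_uid, count):
    """Return the ordered /proc writes that set up a uid/gid mapping.

    Since Linux 3.19, "deny" must be written to /proc/<pid>/setgroups
    before an unprivileged process may write gid_map.  A privileged user
    can map a large range (e.g. count=65536); an unprivileged user may
    only map its own single uid (count=1).
    """
    line = f"{inside_uid} {outside_uid} {count}\n"
    return [
        (f"/proc/{pid}/setgroups", "deny"),   # must come before gid_map
        (f"/proc/{pid}/gid_map", line),
        (f"/proc/{pid}/uid_map", line),
    ]
```

For the unprivileged case in the conversation, the mapping would be a single line such as `0 1000 1`, making the caller's real uid appear as root inside the container.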
<davexunit>mark_weaver: I'd be happy to help dig into that sometime. they probably didn't plan for their test container to be a nested one. <mark_weaver>davexunit: I don't know if it matters, but the code in network manager's tests actually writes deny to /proc/self/setgroups, not /proc/setgroups <davexunit>though it is actually the parent process that sets up the namespace in my case. <mark_weaver>it also writes to /proc/self/uid_map and /proc/self/gid_map. I guess you probably already know this... <davexunit>I have user namespaces working enough that I can create multi-user GuixSD containers when I am root. <yenda>I'm not sure how to run the tests once I'm into the env in the failed build dir <mark_weaver>probably run "make check" from the top-level directory, although it looks like you also need to set the HOME environment variable to something that can be written to (it's done in the pre-check phase) <yenda>I did set HOME but make check does nothing and make / make test returns sh: build/temp.linux-x86_64-2.7/multiarch: Permission denied <yenda>should I just chown the whole dir ? <yenda>It was possessed by the daemon <yenda>well mark_weaver the tests are all passing within this env <yenda>(except one because it can't write to dir) <yenda>and network tests don't fail even when building because the test suite skipped the tests "network resource not enabled" <yenda>so there is a problem with the os module when building with guix <davexunit>okay, so it looks like I either need to calculate based on the /proc/<pid>/{uid,gid}_map files if the current namespace can accommodate a range of 65536... <davexunit>or, I just add additional arguments to call-with-container that specify the number of uid/gids to map. <davexunit>defaulting to 65536 for uid/gid 0, or 1 otherwise. <davexunit>I ended up adding a #:host-uids keyword arg that defaults to 1 <davexunit>'guix system container' uses 65536 and everything else uses 1.
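The range check described above — deciding whether the current namespace can accommodate 65536 uids before trying to map them — could be computed from /proc/<pid>/uid_map like this; a hypothetical Python sketch, not the actual #:host-uids logic, which lives in Guile:

```python
def mapped_ids(uid_map_text):
    """Total number of ids mapped into this namespace.

    Each /proc/<pid>/uid_map line reads: <inside-id> <outside-id> <count>.
    """
    return sum(int(line.split()[2])
               for line in uid_map_text.splitlines() if line.strip())

def host_uids(uid_map_text, requested=65536):
    """Map the requested range only if the namespace can hold it, else 1."""
    return requested if mapped_ids(uid_map_text) >= requested else 1
```

In the root user namespace uid_map reads `0 0 4294967295`, so a 65536-id range fits; a nested unprivileged namespace typically maps a single uid, so only a range of 1 is possible — which matches the failure described above.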
<davexunit>and now tests/containers.scm passes in a container. need to verify that it works in the guix-devel package build as well. <davexunit>mark_weaver: container tests pass for guix-devel now. sending patches to list. <davexunit>wait... the build uses an old snapshot... it should have failed, then. <paroneayea>I should test my patch with the vm port forwarding stuff again <paroneayea>I also need to figure out how to add an nginx service. <davexunit>it's not too hard... just haven't gotten around to it <davexunit>The only service I've written so far is postgresql-service <paroneayea>*small sweatdrop falls from paroneayea's brow when he notices every server on his network backed up recently but his laptop, because he forgot to add the new ssh key after reinstalling* <paroneayea>I really need to set this up to email me backup status... <davexunit>paroneayea: oh yeah, is there a patch I need to apply to wip-deploy? <paroneayea>davexunit: nope, it's local, I was struggling to test when I was at oscon because pulling substitutes was unreliable on the oscon network <davexunit>I thought you sent me something a while ago... <paroneayea>davexunit: gonna see if I can get it here though <davexunit>you had to make some change to get it to work on the latest master <paroneayea>davexunit: oh yeah that's right... I forget if you applied it or not <davexunit>ACTION wishes he could figure out how to get redis to pass its test suite <davexunit>2 big packages that I've been sitting on for ages <davexunit>I guess I'll try rebasing myself and see where the conflict shows up <paroneayea>time to dist-upgrade a server and hope it doesn't fail :( <davexunit>for me it's: time to run chef-client on a bunch of production servers and hope it doesn't fail <davexunit>sucks having to think of rollback strategies in case things go awry. <paroneayea>at least with chef you can "theoretically" rebuild the server.
<davexunit>in fact, I prefer to build servers from scratch every time instead of update existing ones because it's more predictable, even though it takes longer <davexunit>our chef recipes do too much compiling from source <davexunit>going to push the rebased branch once I've confirmed that it works <paroneayea>davexunit: of course, if you find a problem on your server, then you try to rebuild everything, and it turns out everything's not rebuilding right now due to the state of changed system packages <paroneayea>assuming it was the distro's fault and not yours <davexunit>I feel like my coworkers don't really understand my constant, probably annoying, reproducibility concerns. <davexunit>I wrote a script to build hacky deb/rpm packages for one of our web applications and I just wanted guix the whole time. <davexunit>there are certain parts of the build that could be much faster if I had access to something like the store <davexunit>but I have no cache so the whole thing is one big blob. <davexunit>and I have to build it from a docker container to have a more reliable build. <davexunit>to make sure I'm building with libraries that are ABI compatible with the servers the package will be deployed on. <davexunit>I want to wedge guix in somehow but we'll probably be too invested in docker by the time it's ready. <paroneayea>davexunit: hopefully everything is as reproducible as you can get it if you're doing docker things <davexunit>docker is "good enough" to make guix look too risky. <paroneayea>I think we're going to see the equivalent backlash to docker that we saw with mongodb and friends <davexunit>I just don't want the concept of containers taken down with it. <paroneayea>but they will become tooling that people don't notice as much <davexunit>yeah, like how you can use guix or nix without even knowing that they're using containers. 
<paroneayea>you won't be structuring your whole company around the word "container" <davexunit>I recently saw someone in boston who I presume works for a startup called "atomic app", as that was on their t-shirt. <davexunit>and underneath the name it said something like "because containers need to be shipped" <davexunit>the container metaphor is all wrong. they're not like shipping containers, they're an environment for isolating processes. <davexunit>docker and the rest of 'em are all focused on the disk image aspect <davexunit>just ship around a disk image and run it anywhere <paroneayea>container is fine, maybe adding "shipping" to it and a million dollars' worth of marketing <davexunit>oh wait, "atomic app" is project atomic, the red hat thing, I think <mark_weaver>bah, there's a fix for CVE-2015-4760 for an old version of icu4c, but nowhere can I find an equivalent fix for the version of icu4c that we have, and applying the fix to the new code is not entirely obvious :-( <mark_weaver>the original fix was for the old copy of icu included in some java distribution <davexunit>mark_weaver: is there a particular mailing list that you follow to stay up-to-date on this stuff? <mark_weaver>mainly debian-security-announce@lists.debian.org and gwene.org.seclists.oss-sec for now <davexunit>we should have a news feed eventually to say when we've patched security issues <mark_weaver>debian and derivatives have icu-52.1, but the relevant code has changed quite a bit since then. <mark_weaver>it's kind of surprising how much of my time is spent dealing with security updates. <mark_weaver>it gives me a new appreciation of the magnitude of the problems with the way most people develop software <joshuasgrant>Okay, I threw caution to the wind and all boxes but one are running GuixSD now. Maybe over the next few days turned weeks, I can actually start contributing every once in a while again.
:^) <davexunit>joshuasgrant: congrats on your lovely new OS <mark_weaver>I also have a new appreciation for how nice it was to have someone else providing security updates for me. <paroneayea>joshuasgrant: whooo! I'm not there yet myself but am glad to see others dogfood so much :) <davexunit>mark_weaver: we should recruit people that are interested in doing security-related things for guix. <davexunit>so you don't always have to be "employee of the month" because there's only one employee. ;) <mark_weaver>davexunit: yeah, that would be nice. it's a bit overwhelming <paroneayea>ACTION puts this month's shiny award on mark_weaver's desk! <joshuasgrant>paroneayea: Really, besides having a semi-capable tiling wm ... until we have GNOME proper in GuixSD, I'm pretty happy with the config I've put together thus far. Well, at least on the desktop end; Server end, there's a number of things I still actively want a good bit (including Mediagoblin ;^) ). <mark_weaver>yes, we need more work on the server side, for sure. <paroneayea>joshuasgrant: unfortunately our js libs might make packaging mediagoblin hard <davexunit>mediagoblin is probably not too out of reach these days <davexunit>with all the work that has gone into python packaging <paroneayea>joshuasgrant: I will probably make a guix-heresies repo or something <paroneayea>I think the python packages are probably possible, but will be hard... <davexunit>yeah, I still don't know what to do about npm. <davexunit>paroneayea: I point people to your blog post on the subject sometimes and people shrug it off. <paroneayea>davexunit: it does do a good job of hiding a lot of its problems <davexunit>using a systems package manager? what is this, the 90s? <paroneayea>davexunit: recursively including deps is a nightmare design from a design point of view <davexunit>the 2 important things to accelerate the packaging of any newly added programming language to guix: the build system and the importer.
<paroneayea>but it somehow gives you both static linking and dynamic linking properties together <davexunit>I would very much appreciate if someone was interested in untangling this and writing a node-build-system. <paroneayea>allows devs to link to versions they want, while giving "end-user ease of use" of static linking, combined with the reproducibility problems of static linking... <paroneayea>davexunit: would it need to "flatten" the imports? <paroneayea>davexunit: there were loops at like the second level of the most common packaging tools <paroneayea>davexunit: the packaging tools themselves appeared to have loops :( <paroneayea>davexunit: I should look again to verify, but I'm pretty sure there's something of the type <paroneayea>davexunit: for all the problems with python's deps, and there are many <paroneayea>at least you can't do cyclic dependencies like that. <davexunit>not looking forward to unraveling npm, but it needs to be done at some point. <paroneayea>davexunit: so my plan for mediagoblin right now: <davexunit>if you build it they will come, and if the build system and import scripts are there, it can enable newcomers to package all sorts of things with ease. <paroneayea>davexunit: get it "working" in an external package repo from guix proper, that does the nix style "eff it I'm out" approach of just putting the minified .js as the "package" for jquery and etc <paroneayea>so at least there is a route for installing mediagoblin and everything *but* the js stuff I can get packaged in guix. <davexunit>I don't think we're too far away with the python deps <mark_weaver>I just read paroneayea's article linked above. Wow. I had no idea how bad it was. <mark_weaver>jQuery is used in hydra, which is now a Guix package. it would be good to clean that up at some point. <davexunit>hydra likely bundles the minified version of jquery. 
<davexunit>unless it also bundles the unminified source <mark_weaver>I've been so naive, I just thought that the non-minified jquery.js was really the source code. <mark_weaver>I had no idea that there were hundreds of other dependencies in the *real* source code. <davexunit>to build it and run the tests and all that, yeah. <davexunit>we need to get to the bottom of it at some point. <davexunit>but I know javascript well, so I think with some help we can find some sanity. <mark_weaver>if we don't do it soon, the problem may become so large that it's impractical to fix it. <mark_weaver>I've never really worked in the Javascript world, apart from writing my own little tiny programs. <mark_weaver>and I haven't done anything in Python in about 20 years <mark_weaver>now I know that I don't want to touch it with a ten foot pole until it is fixed. <davexunit>we need to find the roots of the dependency graph for node packages <davexunit>node packages have a 'package.json' file that accurately lists the dependencies for the package, so that metadata can help us automate packaging work. <davexunit>it's unfortunate that as far as paroneayea and I can tell, Nix does nothing to address this. <davexunit>and they just download pre-built binaries, essentially. <mark_weaver>and I can certainly understand why they would do that, given the magnitude of the problem. <paroneayea>my thoughts were, "nix has surely got this problem solved for us!" <davexunit>I'd like to raise more awareness of the issues, but the few times I've brought it up I haven't been able to convince anyone that it is a problem. <mark_weaver>does the use of nodejs result in some kind of uniformity that could be exploited to somehow automate (or semi-automate) the work of untangling this mess? <mark_weaver>I'm so ignorant of this world that it might be a dumb question. <paroneayea>though there are also tools on top of it, like bower and grunt <davexunit>so it's a package that would be part of the node build system.
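Since every npm package carries a package.json listing its dependencies, the semi-automation mark_weaver asks about is plausible: build the dependency graph from that metadata, start packaging at the leaves, and flag the cycles mentioned earlier. A hypothetical Python sketch with made-up registry data:

```python
def leaves(graph):
    """Packages with no dependencies: the natural place to start packaging."""
    return sorted(name for name, deps in graph.items() if not deps)

def has_cycle(graph):
    """Detect dependency cycles via depth-first search with coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {name: WHITE for name in graph}

    def visit(node):
        color[node] = GRAY
        for dep in graph.get(node, ()):
            if color.get(dep) == GRAY:          # back edge: a cycle
                return True
            if color.get(dep) == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in graph if color[n] == WHITE)

# Hypothetical slice of the npm graph, as read from package.json files.
graph = {"grunt": ["minimatch", "glob"], "glob": ["minimatch"], "minimatch": []}
```

An importer could walk real package.json data this way, emitting package definitions leaf-first and reporting any cycle as something a human must break by hand.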
<davexunit>I think getting grunt packaged would be a big step forward. <davexunit>at least 2 of its dependencies have no additional dependencies, according to this website. <mark_weaver>makes sense, and 20 dependencies seems manageable, although I don't know how many deps those 20 have. <davexunit>though I fear that they are hiding what people now consider "development dependencies", which often include build tools and things needed to run the test suite. <mark_weaver>for now, we could just say "fuck it" to the test suites. that part could be dealt with later, incrementally. <mark_weaver>if we could just bring some sanity to the building process, that would be huge. <paroneayea>it's great that I live in a world where I have this filepath <paroneayea>/home/cwebber/programs/jquery/node_modules/grunt-jscs-checker/node_modules/jscs/node_modules/vow-fs/node_modules/glob/node_modules/minimatch/node_modules/brace-expansion/node_modules/balanced-match/test/balanced.js <davexunit>mark_weaver: it's the approach I've begun to take with regards to ruby <mark_weaver>things are accelerating so rapidly, and there's so much pressure to use whatever tech will allow rapid integration, that it hasn't allowed proper design or consideration of the long term consequences :-( <davexunit>and those are needed to build from source. :( <davexunit>gulp is *another* task runner, an alternative to grunt. <davexunit>so grunt depends on a library that requires a task runner developed *after* grunt in order to build. <mark_weaver>honestly, I was already feeling overwhelmed with the complexity of our OSes in the C world. <paroneayea>imagine you have a warehouse full of shipping containers <paroneayea>and each one of them has a machine inside it, made out of intricate parts <davexunit>mark_weaver: this is the tide we fight against. 
<paroneayea>since each machine is "too complex to build", most users have decided to trust a manufacturer to put together each machine, but many machines are made of other machines too <davexunit>docker takes the idea of static linking and brings it to a higher abstraction layer: the entire distro. <paroneayea>now imagine that one essential piece, let's say it's called openssl, has a serious bug in it which may cause these machines to explode, and your boss tells you to go fix all the machines, take them all apart if you have to <paroneayea>now imagine you have to do this once a week, or once a day :) <paroneayea>and now we know why 2/3 of docker images have medium to high security vulnerabilities! <davexunit>I am still trying to figure out the security story for docker. <davexunit>paroneayea: did you know that Docker Compose bundles a vulnerable version of OpenSSL? <paroneayea>davexunit: haha I think I saw you link to that but didn't realize it was Docker Compose <paroneayea>docker isn't going to be good enough, because there's no way for it to become so... unless it somehow forces reproducibility into the whole system anyway <paroneayea>and since apt-get update && apt-get dist-upgrade are sensibly considered "bad form" because you have to manually monitor an upgrade given various prompts and etc <paroneayea>there is no way to do it without ending up at a distro-level solution of things... <paroneayea>and that's why snappy starts to look a lot like nix I think... <davexunit>paroneayea: I add switches to prevent prompts from opening <davexunit>except with all the actually good parts of nix removed. <paroneayea>davexunit: but sometimes the prompts are important <davexunit>paroneayea: to complain more, people *think* docker gives them reproducible builds. <mark_weaver>well, that might not be quite right, now that I think about it more <paroneayea>mark_weaver: let's fix our packaging solutions via machine learning and genetic programming! 
<mark_weaver>in biology it's not practical to nest systems within systems within systems to the kind of nesting levels that we now see in the JS world. <mark_weaver>paroneayea: yeah right, that might well come to pass <mark_weaver>imagine applying security updates by probabilistic algorithms that search through the source code for patterns similar to the code that needs to be patched, looking for all occurrences. <paroneayea>mark_weaver: hm, given humans with organs with helpful bacteria with mitochondria... <paroneayea>mark_weaver: well it might not be so bad if things are reproducible <mark_weaver>paroneayea: well, yes, that's what made me think of it, but the nesting depth seems even greater in the JS world than in biology at this point <paroneayea>mark_weaver: btw I had some interesting conversations with Richard Fontana while at OSCON where I was trying to ask about a hypothetical system that had some procedurally generated genetic programming system "extended" via logic programming of the kind shown with 'evalo' in the minikanren talk I linked not too long ago: would copyleft still apply as a defense? <mark_weaver>but yeah, this kind of deep nesting is not necessarily a problem as long as we aren't simply making copies of the code everywhere. if the dependencies are explicitly given and we avoid duplication <paroneayea>and richard thought that in an example where the algorithm had collaborative feedback shaping the direction of the program, it might not be far off from users procedurally generating artwork by moving around knobs in blender or etc <paroneayea>I submitted a talk to a conference recently titled "Free Software Futurism" that's all about some of these potentially-near-future-but-presently-scifi ideas that may affect free software, both problems and opportunities <mark_weaver>I guess at some point mixing will become so widespread that it will become impractical to apply copyright law in any reasonable way.
<paroneayea>mark_weaver: assuming no other legal tools for restricting things come into place, could be good <paroneayea>but assuming copyleft is our only defense, and other legal lockdown mechanisms exist <mark_weaver>sure, I would welcome the end of copyright law applied to functional code, certainly. <mark_weaver>yeah, somehow I think it's not going to be clean. things are getting messy, the rule of law is dissolving. <paroneayea>well we might run out of enough energy sources for computing to continue at its present state before it becomes an issue anyway ;) <paroneayea>that's one way to defeat proprietary software.. defeat all software! <mark_weaver>well, I don't see that happening, but efficiency will certainly become more important, and that has already started. <mark_weaver>and security will also become more important, I think. <mark_weaver>but in all areas, the changes in the world are accelerating rapidly. it occurred to me the other day that what is sometimes called "the singularity" might just be another example of what is called "punctuated equilibrium" in evolution. it has happened many times. <paroneayea>and just as in evolution, sometimes old patterns come back with a vengeance... :) <mark_weaver>at some point, we will find equilibrium again for a long time, but before that happens, the world will not be recognizable to us at all, if we even still exist. <mark_weaver>humans cannot continue in their current form. our biological bodies as they are now are increasingly vulnerable to security compromise as we learn more about our biology and brains <mark_weaver>many things are now possible that were never possible before, and they make it untenable to continue life as it has been. <mark_weaver>and the world will continue to change as those possibilities are exploited, until it all runs its course. <davexunit>this conversation got deep. fascinating stuff.
<mark_weaver>I believe, perhaps naively, that we may have some influence on how this ends, if the things we choose to work on don't have fatal flaws. <mark_weaver>well, if the things we choose to work on can be more fit than their competitors in the long term. but it takes a long view to ensure that. <davexunit>paroneayea: I recommend running 'guix environment --ad-hoc node -E "npm faq"' <davexunit>and reading the answer to "Why can't npm just put everything in one place, like other package managers?" <mark_weaver>yeah, in the area of integration of computing systems based on C and this kind of machine architecture, I have the most hope in Guix as a sane integration strategy <mark_weaver>at some point, I think it probably makes sense to phase out the C bits and move to something more abstract that allows for a more radical redesign of the lower-level architecture, but that can (and IMO must) be done incrementally. attempts to start from scratch are doomed to fail, IMO. <mark_weaver>so yeah, I think Guix is a good beginning to clean up this mess. <mark_weaver>'Stack is the new term for "I have no idea what I'm actually using".' <davexunit>it's true. I frequently have no clue what I'm using. <mark_weaver>"Ever tried to security update a container? Essentially, the Docker approach boils down to downloading an unsigned binary, running it, and hoping it doesn't contain any backdoor into your company's network." <davexunit>I want to move to a system where I'm building *all* of my docker images from scratch without using the images on dockerhub. <mark_weaver>I like to think that if there was a viable way out of this mess, people would take it. <davexunit>a lot of people think that docker has greatly improved things <davexunit>whereas we see it as papering over serious problems <mark_weaver>well, it has greatly improved their ability to get complex systems up and running, right?
<mark_weaver>it has improved their ability to get their jobs done in a way that their boss accepts <davexunit>because it allows you to forget about the fact that the systems package manager you are using has no way to have 2 versions of python installed, or that it has no rollback capabilities, etc. <davexunit>so you just make a new disk image for each application you run <mark_weaver>I think that people tend to be blind to problems when there's no solution in sight. <mark_weaver>for example, I am continuing to use modern computers even though I'm pretty sure they are all owned by the NSA, because I'm addicted to them, and because I see no viable alternative in the near future. <mark_weaver>but if there was a computer that did what I needed it to do, and that I had confidence was not owned by the NSA, I would switch to it in a heartbeat. <davexunit>I think people don't know there's a better way to manage their systems, which is not surprising because nix and guix are relatively new things. <mark_weaver>I suspect that if we can come up with a much better way of doing things that still allows people to get their jobs done, then people will be able to acknowledge how horrible today's way of doing things is. <davexunit>that's what I'm hoping for. a lot of my effort is put into tools that will make development more pleasant. <davexunit>with the selling point being that the very same tools can be used to manage development environments *and* production environments. <davexunit>paroneayea: my current thoughts about the nested node_modules directory. <davexunit>we most likely need to preserve the nesting so the system still works on guix. <mark_weaver>also, I suspect that at some point the current way of doing things will become untenable, most likely for security reasons, and that will force people to look for other approaches. if we can solve the biggest problems by then, our approach could become much more popular.
<mark_weaver>davexunit: those nested directories could become symlinks <davexunit>paroneayea: but, we can avoid rampant duplication by building those nested directories manually and symlinking the store items <davexunit>mark_weaver: yeah, I think we have the right long term vision. <mark_weaver>davexunit: well, not really, I could tell where you were going with it :) <davexunit>I'm going to fall asleep before I finish, but I'm drafting a node-build-system. <mark_weaver>thank you for all your work, davexunit! you're pouring a lot of awesomeness into guix, and building a better future I think :) <davexunit>without further build complications, it seems that one just needs to run 'npm pack' in the source tree and then 'npm install foo.tar.gz' in the store directory. <davexunit>mark_weaver: and thanks for everything you've done! <davexunit>I wouldn't be the avid schemer I am now if you hadn't helped me out when I knew nothing of either Scheme or Guile. <mark_weaver>it makes me proud to have played some part in persuading you to become part of our little community :) <davexunit>I've never had a better time hacking than writing Guile. <sprang>should I submit small patches for typos in the docs via the mailing list? <sprang>also, I'm trying to get a sense for how the "big picture" stuff is tracked... I've read the ROADMAP and TODO files <sprang>seems like most of the activity involves writing new packages vs hacking on the tool itself, but I just started paying attention recently :) <davexunit>but yeah, lots of other people do packaging. <davexunit>but ludovic, our maintainer, mark_weaver here, and others do a bunch of work on improving the core framework <sprang>I guess I was wondering if there is a tracked list of desired features... is the bug tracker used for that?
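The idea above — keeping npm's nested node_modules layout while symlinking store items so each dependency exists on disk only once — could be sketched like this. Paths and the helper name are hypothetical; the real node-build-system would do this in Guile:

```python
import os

def link_dependencies(package_dir, deps):
    """Create node_modules/<name> symlinks pointing at store directories.

    `deps` maps each package name to the store path of its built output,
    so a dependency shared by many packages is stored only once.
    """
    modules = os.path.join(package_dir, "node_modules")
    os.makedirs(modules, exist_ok=True)
    for name, store_path in deps.items():
        link = os.path.join(modules, name)
        if not os.path.lexists(link):   # idempotent: keep existing links
            os.symlink(store_path, link)
```

Node's module resolution only looks at the node_modules directory layout, so from the package's point of view nothing changes, while the duplication paroneayea's jquery path illustrates collapses into shared store entries.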
<davexunit>the roadmap and todo should probably be the place <davexunit>and the mailing list documents things to do before releases <sprang>ok, I'll keep working on immersing myself <mark_weaver>that's probably the right place to keep track of these things though. <paroneayea>mark_weaver: I'm going to bed. But I enjoyed talking tonight! <paroneayea>mark_weaver: btw, whether I make it in next for libreplanet or the fsf 30th thing, I hope we can meet up. <mark_weaver>paroneayea: yes, definitely, it would be great to meet you! <paroneayea>also, a curiosity that will have to wait till tomorrow: <paroneayea>/gnu/store/3bfhzm7y0lyml1fiw691mlydgl4wd1pf-grub-2.00/bin/grub-editenv: error: cannot write to `/fs/boot/grub/grubenv.new': No space left on device. <paroneayea>while testing `guix deploy spawn /home/cwebber/sandbox/guixops/deployment.scm` <paroneayea>guess I'll have to investigate that tomorrow. pretty sure it's happening in-vm. <mark_weaver>ah, I was mistaken about the icu patch. the existing patch for icu-52.1 applies to icu-55.1 without modifications. <phant0mas>mark_weaver: when I was rebasing wip-hurd on the then latest core-updates I faced an issue with cross-gcc where I had to change the header inputs <phant0mas>Cross-gcc needs linux-headers on linux systems and gnumach, hurd, hurd-minimal on hurd systems <phant0mas>and my work on commencement showed me that cross-gcc is not the only one with the issue <phant0mas>I will rename the kernel-headers in hurd.scm to hurd-kernel-headers <phant0mas>and now that I think about it I should maybe create a kernel-headers macro in base.scm so we will never have to worry about who needs what <zacts>is full disk encryption available yet for guix? <zacts>it's the only real feature preventing me from using guix as my main distro <yenda>There are more than 500 packages affected by my upgrade of python2, guix refresh --list-dependent lists them but is there a command to build them all ? or is it unnecessary ?
<davexunit>yenda: for such large upgrades, we should make a special branch for the upgrade to be done in and ask hydra to build it. <davexunit>once everything builds successfully, we can apply the patch to master. <cehteh>such could be automated even .. have a 'next' branch and let hydra build and merge everything <davexunit>I wouldn't want anyone but a human making commits <wgreenhouse>zacts: it seems to be, yes. the manual now covers luks setup <cehteh>well i aim for some automatic system to assist the human <yenda>davexunit: so I should just send the patch mentioning it should be pulled in a branch ? <cehteh>human commits to 'queue' .. build system builds and tests, and when successful merges that to let's say 'prepared' .. and to make you happy a human may need to merge 'prepared' to 'master' <davexunit>yenda: sure, yeah. mention that it triggers lots of rebuilds. <cehteh>while i am not really sure if the last step is really useful, it is all initiated by a human commit anyway <yenda>also python3 inherits from python2, were the patches inherited too ? Because the new version of python2 doesn't need them anymore <davexunit>the source field is different in python2, so no, the patches aren't inherited <davexunit>since patches are part of the <origin> object <yenda>so I can safely delete all traces of those patches ? <davexunit>yeah, if you are sure they are no longer necessary <yenda>grep only finds them in gnu-system.am, Makefile and Makefile.in <davexunit>the latter 2 are automatically generated files <davexunit>so remove them from gnu-system.am and delete the patches themselves from gnu/packages/patches <paroneayea>davexunit: I ran into an odd problem I didn't have before... I wonder if you have ideas <paroneayea>$(guix system vm build-aux/hydra/demo-os.scm) # <- this still works fine, builds the vm, loads it <paroneayea>guix deploy spawn /home/cwebber/sandbox/guixops/deployment.scm # <- modified version of your deployment.scm...
*used to* work, but after running latest rebased wip-deploy, I get: <paroneayea>df -h only shows /gnu/, but /gnu/ is bind-mounted to /home/ so it's the same size. <paroneayea>er, bind-mounted to /home/gnu which is on /home/ <davexunit>this is about the temporary file systems created by qemu, I think. <paroneayea>davexunit: I can install new packages fine, my normal disk is not out of space <davexunit>I guess there's some discrepancy between the size of the images that 'guix system' and 'guix deploy' make <paroneayea>davexunit: maybe I should see if the older wip-deploy stuff really did run... <davexunit>argh. so, I wrote a node build system and immediately noticed that the tarballs uploaded to npmjs.com don't have test suites... <davexunit>oh, good news potentially. I'm checking out the top packages on npm and they all seem to come with test suites. <mark_weaver>yenda, davexunit: at least one of those patches is still needed for our python (3) package, as I recall. <mark_weaver>zacts: iirc, we don't yet support an encrypted root partition. our initrd-equivalent needs modifications to support that. <mark_weaver>davexunit: please don't let the lack of test suites deter you from solving the more important problem <mark_weaver>I agree that it's bad to not have tests, but if we could at least build things from sources in a sane way, that would be such a huge improvement over the current way of doing things that it's worth making that step if we can. <yenda>mark_weaver: I couldn't find any mention of the patch anywhere else <mark_weaver>yenda: okay, in that case it should be removed, as davexunit said. <mark_weaver>phant0mas: okay, if ludo suggested it, I'll go along. hurd-kernel-headers sounds like a good name. thanks! <davexunit>mark_weaver: okay, but I'm also concerned that what is uploaded to npm is not the CCS <davexunit>however, it's possible to obtain the CCS in other ways.
<mark_weaver>davexunit: you're right, it is important to get the CCS <mark_weaver>although I'm not sure the test suite needs to be part of the CCS <mark_weaver>the CCS definitely needs to include all of the sources (preferred form for modification) needed to build it. <davexunit>I haven't confirmed, but I am worried that this isn't always the case on npm <davexunit>someone may have a build step that produces an artifact that is uploaded in the release <yenda>mark_weaver: I already did that in the patch I submitted. <mark_weaver>the GNU GPLv3 says "The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities." <mark_weaver>and there are more important details that follow that quote, but nothing about tests. <mark_weaver>I'll review it sometime today and get hydra building it after icu4c-update is merged. <mark_weaver>(icu4c-update needs to have the full capacity of hydra now because it is an important security update) <mark_weaver>davexunit: interesting. so is our 'nodejs' package another example of "fuck it, I'm out"? are we just copying the generated code? <mark_weaver>anyway, we can handle multiple versions of the same package, so I don't see that as a show-stopper. <davexunit>it's the sheer number of dependencies that is mind boggling. <mark_weaver>davexunit: that's why I inquired about the possibility of automating or semi-automating it. <mark_weaver>and if that's feasible by sacrificing the test suites, then I think we should do it. <davexunit>yeah, a good enough npm importer should be able to assist. <davexunit>and yes, npm + all of its dependencies are bundled with node <mark_weaver>we could then incrementally deal with switching the packages over from npm to some source repo that included the tests later, if we have the energy.
<mark_weaver>we should not let the perfect get in the way of implementing the good. <mark_weaver>this job is so big that we need to deal with it in manageable pieces. <mark_weaver>so, for example, let's not worry about the fact that our nodejs includes bundled libraries, yet. <mark_weaver>because it sounds like npm will be an essential tool for getting anywhere on this. <mark_weaver>we already have things like Qt that include lots of bundled libraries. <mark_weaver>in fact, it might be better to do this from the top-down. <mark_weaver>e.g., instead of refusing to package something until we can package all of its dependencies, maybe it makes more sense to accept packages with bundled dependencies, and then work on unbundling them over time, as we have energy to do so. <mark_weaver>our icecat package started out using most of its own bundled libraries. <mark_weaver>over time, as I've had energy, I've been working to have it use more and more of the system libraries. <mark_weaver>but that's better than living without a modern web browser in the meantime, because otherwise GuixSD would not really be usable for me or most people, and we'd be dead in the water. <paroneayea>davexunit: it would be good to have an npm importer that generates the full recursive set of packages to the extent it can <davexunit>I should apply the same strategy to the rubygems.org situation <paroneayea>"Towards a Foundation for Extending microKanren Constraints" <- would love to see that one. <paroneayea>argh, why am I hitting these "no space left on device" issues <davexunit>mark_weaver: so, an important library for javascript is 'underscore', which provides missing functional things like 'fold'. the repo includes the CCS, a file called 'underscore.js', but there's also a minified version checked in. <davexunit>should our guix package remove the minified file? <davexunit>since we cannot yet generate it independently.
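[Editor's note: the "full recursive set" importer paroneayea describes might look roughly like the sketch below. Every helper procedure here (`fetch-npm-metadata`, `emit-package-definition`, `npm-dependencies`) is invented for illustration; this is not the actual importer code.]

```scheme
;; Hypothetical sketch of recursive importing: walk the dependency
;; graph depth-first, emitting a package definition for each npm
;; package not yet seen.  All helper names are assumptions.
(use-modules (srfi srfi-1))             ; for `fold'

(define (import-recursively name seen)
  (if (member name seen)
      seen                              ; already imported; skip
      (let ((meta (fetch-npm-metadata name)))   ; assumed helper
        (emit-package-definition meta)          ; assumed helper
        (fold import-recursively
              (cons name seen)
              (npm-dependencies meta)))))       ; assumed helper
```

Starting it with `(import-recursively "underscore" '())` would, under these assumptions, emit definitions for underscore and its transitive dependencies exactly once each.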
<paroneayea>seeing if I have the same problem if I load this via just "guix system vm" <paroneayea>davexunit: "guix system vm" is working fine here with the same configuration <davexunit>paroneayea: did 'guix system vm' build a new vm? <davexunit>'guix deploy' is probably screwing up the amount of resources the vm should have. <paroneayea>davexunit: want me to email you the configuration I was using and running into trouble with? <paroneayea>davexunit: it's a modified version of the recipe you posted to the list. <davexunit>got underscore working via an early version of the npm importer :) <davexunit>guix environment --ad-hoc node node-underscore -E "node -e '_ = require(\\"underscore\\"); console.log(_.map([1,2,3], function(x) { return x*x; }));'" <rekado->after I explained to a scientist user at the institute what features Guix provides his response was: "so, why would anything still be packaged without Guix?!" <paroneayea>rekado-: I explained Guix's ideas to some of the people I contract with and they were like "what??? so what are the downsides" <paroneayea>to which my answers were pretty much that it's alpha, and also that doing it right means life is tough when people do crazy things like in npm land <paroneayea>but hey, maybe even that can be solved (good luck w/ yer current hacking davexunit !!) <davexunit>I showed guix to our network security admin this week, including my container stuff, and he thought it was awesome <mark_weaver>davexunit: that's awesome that you were able to import underscore! woohoo! <mark_weaver>regarding the minified file: what would be needed to generate the minified file ourselves? <mark_weaver>paroneayea: in some ways, I'm not sure that we are deserving of the "alpha" label. it's true that we are missing a lot of important packages, and that some things don't work right, but on the other hand, once you know those limitations, I find that the system is rock solid.
<paroneayea>mark_weaver: I agree, I think "beta" might be better, to the extent those labels go :) <mark_weaver>so we should definitely warn people that there are issues, but "alpha" somehow conveys the wrong impression to people, I think. <paroneayea>the main "alpha" thing is not stability, but the number of packages people need ***davi_ is now known as Guest55083
<davexunit>mark_weaver: we'd need a tool called 'uglify-js' <davexunit>which requires a lot of additional node packages <mark_weaver>as we did with nodejs, can we just import that one with all of its bundled libraries for now? <davexunit>mark_weaver: we can import it and just not use the minified file <davexunit>the minified file is actually for use in web browsers. afaict it is not loaded when I run require('underscore') at the node repl <davexunit>I was wondering if I should actively remove such files <davexunit>I can easily write a phase that deletes files with a ".min.js" file extension <mark_weaver>well, it sounds like web developers will want the minified files, so it would be good to have them, but we should generate them ourselves using uglify-js. <davexunit>but in the meantime, underscore's release tarballs have a pre-built minified file in addition to the source file. <davexunit>should I consider it benign or remove it in a build phase? <mark_weaver>and for now, to allow progress to be made, our uglify-js package could just use its bundled dependencies, like our nodejs package. <mark_weaver>davexunit: I don't think we should include pre-built minified files <davexunit>mark_weaver: okay, thanks. I had the same thought. <mark_weaver>so I guess my inclination would be to remove the minified file in a snippet, and then generate it during the build. <davexunit>I was hoping to just use a build phase to delete everything with a ".min.js" file extension. <mark_weaver>well, I don't feel strongly about it, but a snippet sounds like the right approach to me. <davexunit>so I'd like it to be as automatic as possible <mark_weaver>at some point it could be part of the importer or origin method, but for now let's just make it a build phase. <mark_weaver>I could imagine writing our own minifier in guile at some point, if it helps with the circularities.
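[Editor's note: the two options weighed above (a source snippet vs. a build phase) might look roughly like this. A hedged sketch only: it assumes `find-files` and `delete-file` from (guix build utils) are in scope, and the phase name is invented.]

```scheme
;; Option 1: a snippet in the package's <origin>, so the pre-built
;; minified files are stripped from the source before it even enters
;; the store (this is what makes them invisible to `guix build -S').
(origin
  ;; ... method, uri, sha256 ...
  (snippet '(begin
              (for-each delete-file (find-files "." "\\.min\\.js$"))
              #t)))

;; Option 2: a build phase (phase name made up for illustration),
;; which deletes the files at build time instead.
(modify-phases %standard-phases
  (add-after 'unpack 'delete-minified-js
    (lambda _
      (for-each delete-file (find-files "." "\\.min\\.js$"))
      #t)))
```

Either way, the minified file would then be regenerated during the build with uglify-js once that is packaged.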
<mark_weaver>(if uglify-js depends on a lot of stuff, and all of our js packages depend on uglify-js, that's a nasty circularity) <davexunit>it would require altering people's build systems, so while I think such a tool would be useful, it might be difficult to integrate. <sprang>what does CCS stand for? (from earlier discussion) <paroneayea>bootstrapping compilers? how about bootstrapping bootstrap.js <sprang>ah, thanks, couldn't work it out :) <sprang>right, just don't think I've noticed the acronym before <davexunit>not sure, but the GPL gives it meaning in legalese. <davexunit>it's a handy definition to cite when people try to tell you that minified js is source code. <davexunit>because it's a text file and not a native executable <mark_weaver>GPLv3 defines "corresponding source", but not "complete corresponding source". <mark_weaver>GPLv2 mentions the "complete corresponding machine-readable source code" <mark_weaver>but I've definitely heard the term used quite a bit. <sprang>I found a few minor typos in the docs, should I submit them as one patch, or one for each instance? <davexunit>sprang: one patch should be sufficient if they are small edits <mark_weaver>I guess that at some point during the GPLv3 process they decided to drop the word "complete" from the term. <paroneayea>I've heard the term plenty, but hadn't thought about GPL being the origin <paroneayea>though it makes sense given, you know, compliance. <paroneayea>I doubt the term would have originated from the proprietary world or expat-using crowd :) <dmarinoj>Has anyone experienced the error "failed to parse derivation: expected string `Derive(['"? <rekado->dmarinoj: maybe the derivation is an empty file. <dmarinoj>It happens when I try to install packages (guix package -i foo) or run guix pull. Do you have any suggestions on diagnosing it? <dmarinoj>It allows me to run guix pull not as root though...
<paroneayea>I'm very confused by racket's packaging situation <paroneayea>and whether or not the racket that comes with guix comes with the "batteries included" standard library or not <paroneayea>or if that gets pulled down over the network as programs are loaded