<davexunit>okay, so the container tests fail due to a case that I'm not sure how to handle regarding user namespaces
<davexunit>I detect whether the user's id is 0 (the root user), and if so I create a user namespace mapping of 65536 uids. if the user is unprivileged, only their own uid can be mapped to the parent namespace, so I only map a range of 1.
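The branching described above can be sketched roughly as follows (a hypothetical helper for illustration, not the actual Guix code; the 65536-uid range and the single-uid fallback are taken from the description above):

```python
def uid_map_line(outside_uid, inside_uid=0):
    """Build the line written to /proc/<pid>/uid_map for a new user
    namespace.  Root can map a whole range of 65536 uids into the
    namespace; an unprivileged user may only map their own uid, so
    the range shrinks to 1."""
    if outside_uid == 0:
        return f"{inside_uid} 0 65536"
    return f"{inside_uid} {outside_uid} 1"

# as root:     uid_map_line(0)    -> "0 0 65536"
# as uid 1000: uid_map_line(1000) -> "0 1000 1"
```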
<mark_weaver>davexunit: that reminds me: while working on network-manager, I noticed that its test suite makes use of containers and user namespaces, and I found some code that might have some useful hints for you.
<davexunit>presumably because the *real* uid, going all the way parent to the root user namespace, is not 0.
<davexunit>I didn't expect this to be a problem because the container has its own special proc file system.
<mark_weaver>actually, I had to disable the container tests because they failed within the build container. at some point I'd like your input on that, but first I should get the package mostly working and posted.
<yenda>I'm not sure how to run the tests once I'm into the env in the failed build dir
<mark_weaver>probably run "make check" from the top-level directory, although it looks like you also need to set the HOME environment variable to something that can be written to (it's done in the pre-check phase)
<yenda>I did set HOME but make check does nothing and make / make test returns sh: build/temp.linux-x86_64-2.7/multiarch: Permission denied
<davexunit>going to push the rebased branch once I've confirmed that it works
<paroneayea>davexunit: of course, if you find a problem on your server, then you try to rebuild everything, and it turns out everything's not rebuilding right now due to the state of changed system packages
<mark_weaver>bah, there's a fix for CVE-2015-4760 for an old version of icu4c, but nowhere can I find an equivalent fix for the version of icu4c that we have, and applying the fix to the new code is not entirely obvious :-(
<joshuasgrant>Okay, I threw caution to the wind and all boxes but one are running GuixSD now. Maybe over the next few days turned weeks, I can actually start contributing every once in a while again. :^)
<davexunit>joshuasgrant: congrats on your lovely new OS
<joshuasgrant>paroneayea: Really, besides having a semi-capable tiling wm ... until we have GNOME proper in GuixSD, I'm pretty happy with the config I've put together thus far. Well, at least on the desktop end; on the server end, there's a number of things I still actively want a good bit (including Mediagoblin ;^) ).
<davexunit>not looking forward to unraveling npm, but it needs to be done at some point.
<paroneayea>davexunit: so my plan for mediagoblin right now:
<davexunit>if you build it they will come, and if the build system and import scripts are there, it can enable newcomers to package all sorts of things with ease.
<paroneayea>davexunit: get it "working" in an external package repo from guix proper, that does the nix style "eff it I'm out" approach of just putting the minified .js as the "package" for jquery and etc
<mark_weaver>things are accelerating so rapidly, and there's so much pressure to use whatever tech will allow rapid integration, that it hasn't allowed proper design or consideration of the long term consequences :-(
<davexunit>and those are needed to build from source. :(
<davexunit>gulp is *another* task runner, an alternative to grunt.
<davexunit>so grunt depends on a library that requires a task runner developed *after* grunt in order to build.
<paroneayea>since each machine is "too complex to build", most users have decided to trust a manufacturer to put together each machine, but many machines are made of other machines too
<davexunit>docker takes the idea of static linking and brings it to a higher abstraction layer: the entire distro.
<paroneayea>now imagine that one essential piece, let's say it's called openssl, has a serious bug in it which may cause these machines to explode, and your boss tells you to go fix all the machines, take them all apart if you have to
<paroneayea>now imagine you have to do this once a week, or once a day :)
<paroneayea>and now we know why 2/3 of docker images have medium to high security vulnerabilities!
<davexunit>I am still trying to figure out the security story for docker.
<davexunit>paroneayea: did you know that Docker Compose bundles a vulnerable version of OpenSSL?
<paroneayea>davexunit: haha I think I saw you link to that but didn't realize it was Docker Compose
<mark_weaver>well, that might not be quite right, now that I think about it more
<paroneayea>mark_weaver: let's fix our packaging solutions via machine learning and genetic programming!
<mark_weaver>in biology it's not practical to nest systems within systems within systems to the kind of nesting levels that we now see in the JS world.
<mark_weaver>paroneayea: yeah right, that might well come to pass
<mark_weaver>imagine applying security updates by probabilistic algorithms that search through the source code for patterns similar to the code that needs to be patched, looking for all occurrences.
<paroneayea>mark_weaver: hm, given humans with organs with helpful bacteria with mitochondria...
<paroneayea>mark_weaver: well it might not be so bad if things are reproducible
<mark_weaver>paroneayea: well, yes, that's what made me think of it, but the nesting depth now seems even greater in the JS world than in biology at this point
<paroneayea>mark_weaver: btw I had some interesting conversations with Richard Fontana while at OSCON, where I was trying to ask: for a hypothetical system that had some procedurally generated genetic programming system "extended" via logic programming, in the style shown with 'evalo' in the minikanren talk I linked not too long ago, would copyleft still apply as a defense?
<mark_weaver>but yeah, this kind of deep nesting is not necessarily a problem as long as we aren't simply making copies of the code everywhere, i.e. if the dependencies are explicitly given and we avoid duplication
<paroneayea>and richard thought that in an example where the algorithm had collaborative feedback shaping the direction of the program, it might not be far off from users procedurally generating artwork by moving around knobs in blender or etc
<paroneayea>I submitted a talk to a conference recently titled "Free Software Futurism" that's all about some of these potentially-near-future-but-presently-scifi ideas that may affect free software, both problems and opportunities
<mark_weaver>but in all areas, the changes in the world are accelerating rapidly. it occurred to me the other day that what is sometimes called "the singularity" might just be another example of what is called "punctuated equilibrium" in evolution. it has happened many times.
<mark_weaver>yeah, in the area of integration of computing systems based on C and this kind of machine architecture, I have the most hope in Guix as a sane integration strategy
<mark_weaver>at some point, I think it probably makes sense to phase out the C bits and move to something more abstract that allows for a more radical redesign of the lower-level architecture, but that can (and IMO must) be done incrementally. attempts to start from scratch are doomed to fail, IMO.
<mark_weaver>so yeah, I think Guix is a good beginning to clean up this mess.
<mark_weaver>"Ever tried to security update a container? Essentially, the Docker approach boils down to downloading an unsigned binary, running it, and hoping it doesn't contain any backdoor into your company's network."
<mark_weaver>it has improved their ability to get their jobs done in a way that their boss accepts
<davexunit>because it allows you to forget about the fact that the system's package manager you are using has no way to have 2 versions of python installed, or that it has no rollback capabilities, etc.
<davexunit>so you just make a new disk image for each application you run
<mark_weaver>I think that people tend to be blind to problems when there's no solution in sight.
<mark_weaver>for example, I am continuing to use modern computers even though I'm pretty sure they are all owned by the NSA, because I'm addicted to them, and because I see no viable alternative in the near future.
<mark_weaver>but if there were a computer that did what I needed it to do, and that I had confidence was not owned by the NSA, I would switch to it in a heartbeat.
<davexunit>I think people don't know there's a better way to manage their systems, which is not surprising because nix and guix are relatively new things.
<mark_weaver>I suspect that if we can come up with a much better way of doing things that still allows people to get their jobs done, then people will be able to acknowledge how horrible today's way of doing things is.
<davexunit>that's what I'm hoping for. a lot of my effort is put into tools that will make development more pleasant.
<davexunit>with the selling point being that the very same tools can be used to manage development environments *and* production environments.
<davexunit>paroneayea: my current thoughts about the nested node_modules directory.
<davexunit>the nesting most likely needs to be preserved so the system works on guix.
<mark_weaver>also, I suspect that at some point the current way of doing things will become untenable, most likely for security reasons, and that will force people to look for other approaches. if we can solve the biggest problems by then, our approach could become much more popular.
<mark_weaver>davexunit: those nested directories could become symlinks
<davexunit>paroneayea: but, we can avoid rampant duplication by building those nested directories manually and symlinking the store items
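One way to sketch the symlink idea from the last two messages (hypothetical store paths and helper, not the actual importer code): build the nested node_modules layout npm expects as a tree of symlinks into the store, so each dependency exists on disk only once:

```python
import os

def link_node_modules(package_dir, deps):
    """Populate package_dir/node_modules with symlinks into the store.

    `deps` maps package names to store paths, e.g.
    {"underscore": "/gnu/store/...-node-underscore-1.8.3"} (paths here
    are illustrative).  Each linked package can carry its own
    node_modules built the same way, so the nesting is preserved
    without duplicating any code.
    """
    modules = os.path.join(package_dir, "node_modules")
    os.makedirs(modules, exist_ok=True)
    for name, store_path in deps.items():
        link = os.path.join(modules, name)
        if not os.path.lexists(link):
            os.symlink(store_path, link)
```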
<mark_weaver>yenda, davexunit: at least one of those patches is still needed for our python (3) package, as I recall.
<mark_weaver>zacts: iirc, we don't yet support an encrypted root partition. our initrd-equivalent needs modifications to support that.
<mark_weaver>davexunit: please don't let the lack of test suites deter you from solving the more important problem
<mark_weaver>I agree that it's bad to not have tests, but if we could at least build things from sources in a sane way, that would be such a huge improvement over the current way of doing things that it's worth making that step if we can.
<yenda>mark_weaver: I couldn't find any mention of the patch anywhere else
<mark_weaver>yenda: okay, in that case it should be removed, as davexunit said.
<davexunit>I haven't confirmed, but I am worried that this isn't always the case on npm
<davexunit>someone may have a build step that produces an artifact that is uploaded in the release
<yenda>mark_weaver: I already did that in the patch I submitted.
<mark_weaver>the GNU GPLv3 says "The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities."
<mark_weaver>and there are more important details that follow that quote, but nothing about tests.
<mark_weaver>this job is so big that we need to deal with it in manageable pieces.
<mark_weaver>so, for example, let's not worry about the fact that our nodejs includes bundled libraries, yet.
<mark_weaver>because it sounds like npm will be an essential tool for getting anywhere on this.
<mark_weaver>we already have things like Qt that include lots of bundled libraries.
<mark_weaver>in fact, it might be better to do this from the top-down.
<mark_weaver>e.g., instead of refusing to package something until we can package all of its dependencies, maybe it makes more sense to accept packages with bundled dependencies, and then work on unbundling them over time, as we have energy to do so.
<mark_weaver>our icecat package started out using most of its own bundled libraries.
<mark_weaver>over time, as I've had energy, I've been working to have it use more and more of the system libraries.
<mark_weaver>davexunit: that's awesome that you were able to import underscore! woohoo!
<mark_weaver>regarding the minified file: what would be needed to generate the minified file ourselves?
<mark_weaver>paroneayea: in some ways, I'm not sure that we are deserving of the "alpha" label. it's true that we are missing a lot of important packages, and that some things don't work right, but on the other hand once you know those limitations, I find that the system is rock solid.
<paroneayea>mark_weaver: I agree, I think "beta" might be better, to the extent those labels go :)
<mark_weaver>so we should definitely warn people that there are issues, but "alpha" somehow conveys the wrong impression to people, I think.
<paroneayea>the main "alpha" thing is not stability, but number of packages people need