IRC channel logs

2015-06-16.log


<mark_weaver>ewemoa: ah, thanks for finding that!
<ewemoa>:)
<ewemoa>hmm, motivation for a pdfgrep package...
<civodul>:-)
<civodul>Rastus_Vernon: this is a bug in our gnu-package? predicate
*civodul -> zZz
<ewemoa>gonz_: fwiw, only lightly tested, but for clojure, this is working for me so far: https://pastee.org/8r48e
<ewemoa>haven't figured out how to build leiningen from source though
<gonz_>ewemoa: And where are custom recipes to be put?
<mark_weaver>gonz_: what do you mean by "custom"?
<mark_weaver>well, I guess you mean recipes that aren't in official guix.
<mark_weaver>you can put that file from ewemoa in <DIR>/gnu/packages/clojure.scm
<mark_weaver>and then: export GUIX_PACKAGE_PATH=<DIR>
<mark_weaver>where <DIR> is a directory of your choosing
<mark_weaver>note that the module, in this case (gnu packages clojure), must be defined in <DIR>/gnu/packages/clojure.scm where <DIR> is in GUIX_PACKAGE_PATH.
<mark_weaver>the module name has to match its location
<mark_weaver>in the filesystem
<mark_weaver>see section 6.5 (Package Module) of the guix manual
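The layout mark_weaver describes can be sketched as follows; a temporary directory stands in for <DIR>, and the recipe body from ewemoa's paste would go in the module file:

```shell
# Sketch of the GUIX_PACKAGE_PATH layout described above.
# A temporary directory stands in for <DIR>.
set -e
DIR=$(mktemp -d)
mkdir -p "$DIR/gnu/packages"
# The file path must match the module name, (gnu packages clojure):
cat > "$DIR/gnu/packages/clojure.scm" <<'EOF'
(define-module (gnu packages clojure))
;; package definition from the paste goes here
EOF
export GUIX_PACKAGE_PATH="$DIR"
echo "module file: $GUIX_PACKAGE_PATH/gnu/packages/clojure.scm"
```

After this, `guix build clojure` (or `guix package -i clojure`) would pick the module up.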
<gonz_>Thanks, found it
<gonz_>If I want to move the store to another partition because it may end up getting very large, what's the best course of action?
<cehteh>i'd try rsync -aHSP ...
<mark_weaver>gonz_: "cp -a" is another good option, and should be faster
<mark_weaver>but after you are done moving things around, it will still need to be at /gnu/store
<cehteh>does cp -a preserve hardlinks?
<cehteh>eh .. does the store use hardlinks?
<davexunit>the store uses hard links
<cehteh>thought so
<cehteh>also .. copying may take some time, and you know you always do a reset or whatever to break it while it's in progress; rsync may turn out to be faster and more reliable then
<mark_weaver>cehteh: yes, cp -a preserves hard links
<mark_weaver>rsync has the nice feature that it can be run after a partial copy, that's true. but it is slower.
<mark_weaver>at least in my experience, I've found it to be quite a bit slower.
<cehteh>yes maybe, at least for local copies, but i always prefer it because of its reliability
<mark_weaver>reliability? are you saying that 'cp -a' is unreliable?
<cehteh>in case you reset or killed the process you may have to copy all over again, or end up with partially copied files
<cehteh>also you can start rsync while working on the system modifying the tree
<mark_weaver>rsync can always be used as a second pass, but cp -a is faster as a first pass, and I see no disadvantage to using it as a first pass.
<cehteh>and later stop all things which mutate the tree, run rsync again (quite fast this time, only transfering changes, maybe needs --delete) to do a final sync
<cehteh>and rsync has a nicer progress indicator :D
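The two-pass approach being debated can be sketched in a scratch directory; /gnu/store is replaced by temporary paths so the demo is safe to run, and it also answers the hard-link question above:

```shell
# Scratch-directory sketch of "cp -a first, rsync second".
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/store"
echo data > "$tmp/store/item"
ln "$tmp/store/item" "$tmp/store/item-link"    # hard link, as in the store
# Pass 1: fast bulk copy; -a preserves attributes and hard links.
cp -a "$tmp/store" "$tmp/new-store"
# Something lands in the store during the copy...
echo late > "$tmp/store/added-later"
# Pass 2 (daemon stopped on a real system): rsync transfers only the
# changes; -H preserves hard links, --delete removes strays.
if command -v rsync >/dev/null; then
    rsync -aHS --delete "$tmp/store/" "$tmp/new-store/"
else
    cp -a "$tmp/store/." "$tmp/new-store/"     # fallback for the demo
fi
# Both names still share one inode, so the hard link survived.
stat -c %i "$tmp/new-store/item" "$tmp/new-store/item-link"
```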
<mark_weaver>the only mutations allowed in /gnu/store are adding new things to it, and that's only done by guix-daemon.
<mark_weaver>which only happens when you build or add new packages.
<cehteh>well of course you can use cp .. i am just talking about my habits
<mark_weaver>I use rsync a lot too, fwiw.
<mark_weaver>the progress indicator of rsync doesn't show what percentage has been done so far, and can't give a time estimate, so I'm not sure it's much of a benefit here.
<mark_weaver>the progress indicator only shows percentage for individual files.
<mark_weaver>so it's nice when copying a few very large files.
<cehteh>it's more than cp has, at least
<mark_weaver>at the cost of substantially longer copy time, but whatever, I'm done talking about this :)
<mark_weaver>so, just to quantify the difference in speed, I tried copying /gnu/store/9s5pj17bjavnzig42wi0zhsjc08qcwxm-texlive-texmf-2014 with both cp -a and rsync -aHSP
<mark_weaver>cp -a took 57 seconds
<mark_weaver>rsync took 6 minutes and 35 seconds
<mark_weaver>:-P
<mark_weaver>so, about 7 times faster
<davexunit>guix-web has a new home on my git server: https://git.dthompson.us/guix-web.git
<davexunit>and I've finally adapted some of ludo's patches to it from his FOSDEM demo.
<davexunit>so it can install one package at a time again.
<davexunit>and I fixed some other front-end bugs.
<davexunit>I'm curious how I could go about making it easy to build a manifest transaction in the web interface
<mark_weaver>davexunit: I guess the place to look is guix/scripts/packages.scm
<davexunit>mark_weaver: I'm more curious from a UI perspective. I know how to build manifest objects, but how to make a user interface that is simple to use for this purpose is slightly tricky.
<davexunit>in guix.el, things are keyboard driven, so you can mark rows of a table with a particular action and then apply the transaction
<davexunit>but that doesn't translate well to a graphical web interface
<mark_weaver>well, I guess the important actions are install and remove. I suppose you could just have dedicated buttons for those on every package.
<davexunit>yeah, but it wouldn't make sense to have a remove button for a package that isn't installed, or an install button for a package that is already installed.
<mark_weaver>well, more precisely, for packages that are already in the profile, you need "upgrade" and "remove"
<davexunit>though determining that can be expensive.
<mark_weaver>and for packages that aren't in the profile, just "install"
<davexunit>but yeah, some kind of context sensitive button will do it.
<mark_weaver>yeah, I guess so
<davexunit>and then maybe a dedicated place on the page that accumulates the transaction details
<davexunit>so the user doesn't have to page through the huge list of packages to see what they decided to install/remove/upgrade
<mark_weaver>I guess that determining whether a package of that name is already in the profile isn't expensive
<mark_weaver>but determining whether a package is upgradeable is more expensive, yeah.
<davexunit>yeah, need to compute the hash of the inputs
<davexunit>which I believe involves package->derivation
<mark_weaver>yeah
<davexunit>I could do it lazily, on a page by page basis. my web UI paginates the package list.
<mark_weaver>for all packages in the profile, yes
<mark_weaver>at least you only need to compute the derivations for packages that are already installed in the profile, which is typically less than 200 or so.
<mark_weaver>much less than the total number of packages in guix
<davexunit>but wouldn't I need to compute the derivations of the other packages, to test if they are the same hash?
<mark_weaver>yeah, doing it lazily by page sounds good
<davexunit>if they are the same, I can provide a remove button
<mark_weaver>you need to provide the remove button even if the hash is different, I think.
<mark_weaver>you should be able to remove packages even if the version in your profile isn't the current version, right?
<davexunit>yeah, I suppose you're right.
<davexunit>yeah
<davexunit>this complicates the UI, but maybe it should fold up packages with the same name into a group
<davexunit>so the remove button would be associated with the group
<davexunit>since it doesn't matter which specific package we're talking about
<mark_weaver>of course, most of the time people will want to upgrade all (or almost all) packages in a single transaction.
<mark_weaver>especially when dealing with libraries that might be used to build software manually, it is important to keep the libraries in sync.
<davexunit>quick buttons can be provided for marking all profile packages for upgrade.
<mark_weaver>yeah
<davexunit>I think I have some direction now. guix-web will be somewhat usable with this addition.
<mark_weaver>btw, another issue: you should be able to remove a package in your profile even if it no longer exists in guix at all.
<davexunit>ah yes
<mark_weaver>I guess that's an edge case, but it would be good to get it right :)
<davexunit>absolutely.
<mark_weaver>occasionally we rename packages, for example.
<davexunit>thanks for hashing this out with me. I've gotta head to bed now.
<mark_weaver>okay, good night!
<davexunit>bye!
<ewemoa>gonz_: it's not lxrandr, but here's a tutorial for packaging arandr: https://pastee.org/2fa3h
<gonz_>mark_weaver: About copying the store; the reason I want to copy it to somewhere else is because I have a root partition of 20 GBs that seems to be filling up.
<gonz_>ewemoa: Thanks. :)
<ewemoa>gonz_: didn't know about httpie, thanks for mentioning it :)
<gonz_>ewemoa: Yeah, it really is great. As soon as you have some API that needs checking it's perfect. Or just some testing on your own stuff, because it has such simple and good features that make everything more accessible and visible.
<gonz_>Now, for a short blackout, as I say goodbye to my EC2 instance.
***gonz_ is now known as Guest64450
***gonz___ is now known as gonz__
***gonz__ is now known as gonz_
<civodul>Hello Guix!
<efraim>hi
<civodul>i like the very concise "What does Homebrew Do?" section at http://brew.sh/
<civodul>i wonder if we could have something similar on the web site
<civodul>and do we need a 'guix edit' command, maybe not
<civodul>uh
<rekado_>I think the website should more prominently feature Guix the package manager.
<rekado_>the package manager is mentioned but the link goes to the manual, which is somewhat less attractive and a little too dense for casual visitors.
<mthl>civodul: you mean integrate "Binary installation" chapter in the website?
<efraim>in my ./pre-inst-env I keep on getting the error could not find bootstrap binary 'guile-2.0.9.tar.xz' for system 'x86_64-linux'
<civodul>rekado_: yes, we probably need an additional page or something
<civodul>mthl: no, i was referring to the usage examples at brew.sh
<civodul>it's concise and immediately clear what it does
*civodul has a working 'guix edit', probably worth adding it
<mthl>IMO it would make sense to have such a thing if the installation process were simpler.
<davexunit>wassat?
<mthl>civodul: quite fast implementation :)
*davexunit thinks installation is quite simple right now
<davexunit>a guided installer for noobs would be cool, of course, but we're not at a good point for that yet.
<mthl>It is, but it has many more steps than Homebrew (see log)
<civodul>mthl: right, but it'll always have more steps than Brew i guess
<civodul>because we have a daemon to run, etc.
<civodul>but to me the interesting part at http://brew.sh/ is what's below "What Does Homebrew Do?"
<davexunit>oh, this is installing the package manager on a host system
<civodul>right
<davexunit>I'm sorry but 'ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"' is *not* an acceptable means of installation
<civodul>indeed, definitely not
<efraim>the user creation could be a shell script, and as for starting the daemon that could be a systemd.service or something
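efraim's user-creation script might look like the following, based on the binary-installation steps in the Guix manual (group and user names as documented there; this is an install-time fragment that must run as root):

```shell
# Sketch of an install-time script: create the build users and start
# the daemon (run as root; names follow the Guix manual).
groupadd --system guixbuild
for i in $(seq -w 1 10); do
  useradd -g guixbuild -G guixbuild            \
          -d /var/empty -s "$(which nologin)"  \
          -c "Guix build user $i" --system     \
          "guixbuilder$i"
done
# Start the daemon; a .service file would wrap exactly this command:
guix-daemon --build-users-group=guixbuild &
```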
<davexunit>I don't think easy is worth that monstrosity.
<mthl>:)
<civodul>efraim: yeah, we could maybe provide a SysV init or .service file
<efraim>i bet they saw the hatred for curl|bash so they made it ruby $(curl)
<civodul>:-)
<davexunit>if other distros would package our software, it would help, but we don't abide by the FHS which makes it hard.
<davexunit>so we want an easy binary installer, but we also don't want to say "just run this magic script as root!"
<mthl>the purpose of such a page is that people can type the commands along and get a simple overview of what Guix does. But if on top of that we have a link to the manual for installation, it misses the point.
<efraim>you just need to think more creatively, one is a post-install script, then we need a post-remove script to delete the store and the users
<efraim>wrap it in a .deb and call it a day :)
<davexunit>civodul: but yeah, I think we should have something nice and concise like homebrew does
<davexunit>civodul: and what does 'guix edit' do? I read the chat log but it wasn't clear
<davexunit>ohhh I missed a snippet on the homebrew site
<davexunit>rekado_: I pushed some new commits to the guix-web repo. things are a bit more in shape now.
<rekado_>davexunit: nice!
<rekado_>I'm still wondering how the very same guix-web could be used by multiple users.
<davexunit>rekado_: we need PAM auth
<mthl>maybe 'guix package --edit' would be better than 'guix edit'
<davexunit>'guix web' could be run as root and users would login to manage their profile
<davexunit>but I haven't figured out how to do that part
<efraim>it could be a package and individualized that way
<rekado_>I'd rather not run it as root. I was hoping that spawning processes as other users could be done by an unprivileged process that gained user privileges via PAM ... somehow.
<davexunit>or that
<davexunit>whatever worked
<rekado_>(I really want the Hurd; dynamically adding privileges to processes is a useful concept.)
<davexunit>the Hurd has many features that would make it better than Linux if only it had some hackers.
<civodul>davexunit: 'guix edit' is like, ahem, 'brew edit' ;-)
<civodul>mthl: i think 'guix edit' is fine
<civodul>if we are to seduce brew users anyway ;-)
<rekado_>while installing blast+ I got this warning: "GC Warning: Repeated allocation of very large block (appr. size 33554432): May lead to memory leak and poor performance."
<rekado_>this happens during the validate-runpath phase
<rekado_>blast+ has immensely large outputs.
<civodul>rekado_: (guix build gremlin) naively loads ELF files in memory
<civodul>this is usually not a problem, but this one must be pathological or something
<civodul>is it C++, and is it stripped?
<rekado_>it's C++.
<civodul>unstripped C++ is huuuge
<rekado_>stripping happens in the phase before that.
<civodul>and are you sure it's actually stripping?
<civodul>because even for libQtCore & co. we don't get that warning, AFAIK
<rekado_>the outputs after stripping are a little nicer: 1.3 GB for $out/bin and 1.6 GB for $out/lib
<civodul>whhaaat?!
<civodul>1.6G of .so?
<rekado_>these are all .a files
<civodul>oh
<rekado_>and they are dupes, e.g. libaccess.a and libaccess-static.a
<rekado_>exact same size.
<civodul>these are not subject to the validate-runpath check anyway
<civodul>so there must be some DSOs in there that are very big
<civodul>or the executables
<rekado_>it took me a long time to beat this build system into doing what it should. Seems that there's still some work to be done.
<rekado_>it's annoying that building blast takes a third of my day in the office.
<rekado_>been working on it for two weeks.
<rekado_>I hate custom build systems.
<rekado_>and I've never seen something more complicated than the ncbi build system.
<rekado_>so, bins are 1.3G, libs are 1.6G, includes are 32M --- I'd like to split the outputs into "out" for bin, "lib" for libs, and "include" for the headers.
<rekado_>Is that okay?
<civodul>sure
<civodul>maybe "doc" as well?
<civodul>(if there's a lot of generated HTML)
<davexunit>wow that's a massive package
<civodul>yes, that's crazy
<rekado_>heh, not a single documentation file to be found.
<mthl>:)
<rekado_>it's possible that I could reduce the size a little with a few more configure flags. I'll try building it again.
<civodul>maybe you could also remove the .a files
<mark_weaver>civodul: building all of core-updates sounds good to me.
<mark_weaver>in fact, I was going to say that if that's not done soon, we should have a separate 'libtiff-update' branch and jobset.
<civodul>yeah indeed
<mark_weaver>I've been aggressively testing it on armhf, and it looks good.
<civodul>excellent!
<civodul>thanks also for all the CVE patches
<mark_weaver>np!
<mark_weaver>I'd like my system to be secure, so I'm strongly motivated to do it :)
<civodul>i realize we're still building gcc-4.7 in the core set, which is not so useful
<civodul>good :-)
<mark_weaver>yeah, gcc-4.7 can be removed from core
<civodul>done
*civodul mthl I can already see the SEDs
<civodul>uh, ECHAN
<mark_weaver>I'm going to need some help on security updates at some point, though. In particular, someone who cares about qt will have to take care of applying security updates to its bundled libraries.
<mark_weaver>and at this point, I know there are quite a few unpatched CVEs in there.
<mark_weaver>or better yet, to avoid the bundled stuff
<davexunit>yeah, we need that bundled stuff out of there.
<davexunit>too much maintenance burden.
<mark_weaver>yeah
<mark_weaver>mplayer is another one. it has a bundled copy of ffmpeg, and there have been several security fixes to ffmpeg since the ancient release of mplayer we have.
<mark_weaver>I use mplayer because vlc doesn't work for me for playing videos. that should be looked into as well :)
<mark_weaver>or maybe just dump mplayer in favor of mplayer2, which uses the system ffmpeg.
<mark_weaver>bah, bundling stuff is evil!
<davexunit>I recommend never looking at the chromium source distribution
<mark_weaver>hahah
<mark_weaver>and qt bundles chromium, so it's a recursive nightmare.
<davexunit>... dear god.
<mark_weaver>the bundled chromium in qt is one of the things that I know needs security updates, incidentally.
<mark_weaver>but if we package it separately, we'll have to deal with its FSDG issues.
<mark_weaver>maybe we do anyway, actually...
<rekado_>yay, down to 19MB for bin, 203MB for lib, and 32MB for include.
<rekado_>the magic configure flags were: "--with-dll", "--without-static".
<rekado_>that's much more acceptable than several GB each.
<civodul>rekado_: much better, indeed :-)
<civodul>mark_weaver: yeah Qt is terrible
<civodul>plus the bundled libs might be patched
<mark_weaver>civodul: I'm astonished that acec3be fixes #20824. how does removing a rule for doc/guix-daemon.1 affect whether guix-daemon is built? is this some crazy automake magic?
<mark_weaver>oh, I see.
<mark_weaver>automake adds a rule doc/guix-daemon.1: guix-daemon
<mark_weaver>correction, we add that rule explicitly in doc.am
<civodul>right
<civodul>the trick is to remove guix-daemon.1 from dist_man1_MANS
<mark_weaver>*nod*
<mark_weaver>thanks :)
<civodul>yw!
<civodul>i hadn't tested that config in a while
<mark_weaver>I use guix configured with --no-daemon to build nginx on hydra
<mark_weaver>well, it should already be built, so maybe there's a reasonable short cut.
<rekado_>I'm confused about something: I get a gnutls error when building MISO. The host machine does not have the guile bindings for gnutls installed. However, this only seems to be a problem when a redirection from http to https is followed.
<mark_weaver>but using the pre-built guix in the store doesn't work because it has a different localstatedir
<mark_weaver>maybe it could be hacked around, dunno.
<rekado_>when I update the recipe to point to the https URL directly I do not get the error.
<mark_weaver>rekado_: that's expected
<mark_weaver>we include gnutls as an input to the derivation that downloads the code based on whether the URI starts with https:
<rekado_>ah, I see.
<mark_weaver>s/based on whether/only if/
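The rule mark_weaver just stated can be sketched as a toy predicate (only an illustration; the real check lives in Guix's download code):

```shell
# Toy illustration: GnuTLS becomes an input of the download derivation
# only if the origin URI itself starts with "https:".  An http URI that
# merely *redirects* to https gets no GnuTLS, hence rekado_'s error.
needs_gnutls() {
  case "$1" in
    https://*) echo yes ;;
    *)         echo no  ;;
  esac
}
needs_gnutls "https://pypi.python.org/packages/..."   # yes
needs_gnutls "http://pypi.python.org/packages/..."    # no
```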
<rekado_>should the URL be updated? It's going to pypi.python.org, which always redirects to https.
<alirio>civodul: about #20814, I tried that commit, same results. the gettext failures are deterministic, retrying doesn't solve. not sure if I will use substitutes or disable the tests
<alirio>cloog has (arguments '(#:configure-flags '("--with-isl=system"))), I wonder how it's trying to compile the bundled one
<civodul>indeed, weird
<civodul>it's surprising that you can consistently reproduce the gettext test failure and that we've never seen it before
<civodul>might have to do with ASLR
<civodul>the daemon disables it nowadays, BTW
<iyzsong>I just bumped Qt to a new version :-)
<mark_weaver>rekado_: yes, any http origin URI in Guix that ends up being redirected to https should be changed to https.
<davexunit>we should be super lispy and add "Made with λ by the GNU Guix hackers" to the footer instead of "Made with ♥ by humans" :)
<daviid>i'd drop the made with love and just say "by the GNU Guix hackers using Guile", my 2c
<davexunit>daviid: but what about made with lambda? I think it's cute. anyway, it's not important, I was just having some fun.
<daviid>ah that did not print well here. but i'd drop it anyway in favor of what i wrote above
<alezost>yay for λ!!!
*mark_weaver ♥ λ :)
<davexunit>god gave us lambda and saw that lambda was good.
<davexunit>this is how the Docker website tells me to install their software, as root: wget -qO- https://get.docker.com/ | sh
<DusXMT>davexunit: kinda reminds me of those "magic Ubuntu one-liners"
<civodul>util-linux broke on i686 in core-updates :-/
<zacts>salut
<zacts>bonjour
<civodul>:-)
<zacts>I'm going to be learning francais again for school
<zacts>I knew some in secondary school, and lived with a French cook for a while.
<zacts>I haves to learns ancient greek and francais...
<zacts>perhaps there is an #fsf related irc channel for francophones or whatever
<civodul>nice, maybe you can practice with some of us ;-)
<davexunit>time to run a docker container for the first time
<davexunit>let's see what guix must beat. ;)
<civodul>:-)
<davexunit>"debian:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security."
<davexunit>hehe
<civodul>ah ah, fun
<davexunit>there's security disclaimers all over this thing
<civodul>heh, they probably have a bunch of lawyers being paid full time
<civodul>they have to spend their millions now ;-)
<davexunit>the containers launch damn quick
<davexunit>what are they doing...
<civodul>BTW, the build process of zero-output derivations is not run
<paron_remote> https://microca.st/clacke/note/vm93I4TYToSqVURTOJEpxw hehe
<civodul>this is unfortunate because that would be another way to do eval-in-container
<davexunit>oh boo
<davexunit>paron_remote: 403
<paron_remote>oh
<paron_remote>the post was:
<paron_remote>> Ha! Apparently I spread an article on the initial release of guix to my friends three years ago with the laconic comment “because the existing ten or more package managers just aren't enough”.
<paron_remote>> Yet here I am, subscribed to three guix mailing lists and terribly excited to be reading them all and looking forward to some day maybe finding the time to contribute.
<paron_remote>> Basically it's mostly cwebber's fault. ;-)
<paron_remote>note! I had that initial reaction too, though I don't think I said it publicly, but you can see where my opinions are now :)
<paron_remote>we're slowly winning 'em over :)
<civodul>paron_remote: heh, of course, everyone had the same reaction
<civodul>even i found it crazy ;-)
<davexunit>civodul: do you know much about these "union mount" file systems? https://docs.docker.com/terms/layer/
<civodul>of course, learned about it in Hurd-land in the 2000s ;-)
<paron_remote>civodul: :)
<davexunit>hahahaha
<civodul>our install image uses the FUSE-based unionfs
<davexunit>I've been wondering how exactly I should be dealing with writable mounts for guix containers
<davexunit>civodul: ah, cool.
<davexunit>so much to learn.
<civodul>unionfs-fuse is pretty easy to use
<civodul>you give it a list of directories to union
<civodul>and you can specify which ones are writable and which ones aren't
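An illustrative unionfs-fuse invocation (the directory names here are made up, and it requires FUSE and an existing mount point):

```shell
# Hypothetical example: union a writable upper directory over a
# read-only lower one.  With -o cow, writes touching the read-only
# branch are copied up into the =RW branch.
unionfs-fuse -o cow /tmp/upper=RW:/gnu/store=RO /mnt/union
# ...and to tear it down:
fusermount -u /mnt/union
```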
<davexunit>civodul: that's probably the direction I should head in with guix containers then
<davexunit>I need to give processes somewhere to write to.
<civodul>yeah
<civodul>see also (gnu build linux-boot) which can use unionfs
<civodul>from the initrd
<davexunit>okay
<davexunit>civodul: do you have an idea of roughly what we'd need to do inside the container?
<civodul>well i don't know if a unionfs is always needed
<civodul>you could bind-mount user-specified directories
<civodul>à la 'guix system vm --share'
<davexunit>yeah
<davexunit>but one of the expectations with this stuff is to just give me some disk space without having to explicitly share things
<civodul> /var could be a tmpfs, for transient containers
<davexunit>I haven't figured out everything that's going on with docker, but it has /var/lib/docker with a bunch of subdirs
<civodul>for that you could use the #:volatile-root? option in boot-system, maybe
<civodul>persistent containers are slightly more difficult and may require unionfs
<civodul>but transient containers can just use #:volatile-root? and possibly bind-mount specific directories
<davexunit>I think transient containers is all we need.
<civodul>(making them not-so transient)
<davexunit>they're intended to be transient, anyhow.
<civodul>yeah
<davexunit>like, a container for a postgresql database would bind-mount some persistent state directory from outside the container
<civodul>right
<davexunit>but the rest of the fs could be that tmpfs
<davexunit>I think #:volatile-root? will get me what I want
<civodul>yes, looks like it
<davexunit>thanks for explaining civodul
<davexunit>you are a valuable resource.
<civodul>i'm not a "resource"! ;-)
<davexunit>;)
<davexunit>I have your brain insured for a lot of money.
<civodul>heheh
<davexunit>now if only I could figure out why I can't join a mount namespace...
<davexunit>and figure out a reasonable way to handle user namespaces...
<davexunit>I'm very excited that guix is perfectly positioned to solve the issue of rampant file duplication amongst containers
<davexunit>we just bind-mount /gnu/store and call it a day.
<davexunit>though I wonder if we should only bind-mount the closure of the system
<davexunit>perhaps that would be for the best
<davexunit>so the official debian base image is 51 MB... I wonder how big a minimal guixsd system closure is...
<civodul>i'm afraid it's more than this
<civodul>but it comes with batteries, bells, and whistles
<davexunit>civodul: do you think it's safest to only mount the relevant closure in the container, rather than all of /gnu/store?
<davexunit>seems better from a reproducibility standpoint as well.
<civodul>it would make startup slower
<civodul>i guess you could try both
<civodul>well, start by bind-mounting the whole store, and then try the other option
<davexunit>okay
<civodul>'guix system vm' mounts the whole store
<paron_remote>I know it probably makes me unpopular, though I tend to think it might be interesting to be able to flag inputs to packages that are for build-only
<paron_remote>eg git
<paron_remote>so when copying over system closures for containers, servers, or deploying to say a beaglebone or something lightweight
<paron_remote>you can copy over a lot less
<davexunit>in this case, git wouldn't be included in the closure if its store item isn't referenced in the output directory
<civodul>paron_remote: store items only keep references to things they actually need at run time
<paron_remote>civodul: oh well okay :)
<paron_remote>carry on then! ;)
<davexunit>sometimes there have been accidents that lead to including too many things as references.
<davexunit>I remember there being an issue with emacs
<civodul>what makes closures big compared to Debian is that we often have one directory with the whole package, including libs, binaries, doc, etc.
<paron_remote>"it accidentally contained an editor!" -- some vim user
<civodul>davexunit: and that, yes
<paron_remote>civodul: ah yeah
<civodul>Debian is really fine-tuned in that respect
<civodul>hard to compete
<paron_remote>"ya gotta keep 'em separated" -- the offspring
<civodul>haha, reminds me when i was younger ;-)
<davexunit>civodul: ah yes
<davexunit>so yeah, we can't hope to make smaller images than debian.
<davexunit>not without much customization and output splitting.
<civodul>and some argue that there's too much splitting in Debian
<davexunit>yeah I've felt that way on several occasions
<davexunit>though if you want to make things "lean and mean"
<civodul>yeah, it's a tradeoff
<davexunit>and I guess docker also has deduplication via the union file systems
<davexunit>it just layers on the various images
<davexunit>so if you had 100 containers all using the same image, they'd all be referencing the same files I think.
<davexunit>so we get no big win there beyond being able to use a simple bind-mount instead of N file-system layers
<zacts>I'm so glad I don't have to work with docker right now anymore
<zacts>I hated docker