IRC channel logs

2021-03-19.log

<rekado>civodul: re gzip substitutes: Guix at the MDC still fetches them because I keep pushing the upgrade to ‘next week’
<rekado>I wanted to wait until I had more focus, because if I don’t pay attention and the localstatedir turns out to be incorrect after the upgrade, I might end up losing *most* of /gnu
<rekado>I’ll try to upgrade this weekend.
<civodul>rekado: hi! ah, that's an interesting data point
<rekado>we upgraded a similarly old installation on a separate (mini) cluster a week ago; no non-standard localstatedir, so that was trivial.
<civodul>ok
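
The risk mentioned above comes from the fact that the daemon’s localstatedir is fixed at build time and determines where the store database lives; a pre-upgrade sanity check could look roughly like this (the paths assume the common defaults and are only illustrative):

    # where does the current installation keep its store database?
    ls -l /var/guix/db/db.sqlite              # default localstatedir (/var)
    ls -l /usr/local/var/guix/db/db.sqlite    # a typical non-standard localstatedir
    # record the current Guix generation before upgrading, in case a roll-back is needed
    guix describe
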
<rekado>oh, I need to announce the maintenance period to our users, so probably *next* weekend…
<civodul>heh, no rush anyway, don't worry
<civodul>it seems we can now afford having 3 kinds of substitutes
<civodul>we could try that for a while and see how it goes
<rekado>I’m looking forward to using the latest daemon, though
<rekado>I’ve been putting off upgrading for much too long
<rekado>I’m running ‘guix pull’ on AWS now.
<rekado>interesting to see how many packages it downloads (maybe graft-related…?)
<rekado>unexpected to see curl-7.71.0-doc as a requirement for compute-guix-derivation
<rekado>yuck, python2
<rekado>and: pulling from the last release, I see a lot of garbage printed to the screen: all these ‘@ download-progress’ lines are not caught by the daemon, which makes for an ugly first impression
***zimoun` is now known as zimoun
<zimoun>rekado: yeah, I often get these ‘@ download-progress’ lines too, but I do not have a reproducer. I feel like I’m in the “shit happens” scene from Forrest Gump. ;-)
<civodul>rekado: yeah, this is terrible: https://issues.guix.gnu.org/41930
<rekado>curious to see how very slow applying grafts is on this EC2 instance.
<rekado>I wonder if it’s due to hitting a bandwidth cap, which limits read/write to /gnu on EFS
<civodul>s/how very slow/how fast/ ? :-)
<civodul>ah no, it's a statement
<civodul>bah
<civodul>perhaps it's not using a really local file system?
<rekado>I have no measurements, but I can count more than 20 seconds between grafts that locally would be almost instantaneous
<rekado>EFS is not local
<rekado>EFS is “managed NFS”
<rekado>so I expect things to be somewhat slower
<rekado>but I think what’s happening here is that the connection of the VM to the NFS is rate limited.
<rekado>so writing slows to a crawl
<rekado>I’ve been waiting for grafts to be completed for approximately 30 minutes now.
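
A crude way to check whether the EFS mount is the bottleneck would be to compare sequential write throughput on the store’s file system with local instance storage; the file names and sizes below are arbitrary, the test assumes the location under /gnu is writable, and sequential writes only roughly approximate the small-file I/O that grafting does:

    # write 256 MiB to the EFS-backed file system and to local storage, then compare
    dd if=/dev/zero of=/gnu/throughput-test bs=1M count=256 oflag=direct
    dd if=/dev/zero of=/tmp/throughput-test bs=1M count=256 oflag=direct
    rm -f /gnu/throughput-test /tmp/throughput-test
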
<rekado>downloading from ci.guix.gnu.org also starts out reasonably fast but then drops to ~50kiB/s
<rekado>the advertised network performance of EC2 instances is really vague
<rekado>this instance I picked offers “Up to 5 GBit”, but I have no idea what this means.
<rekado>“up to”? “5 Gigabit” for how long?
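
One way to put a number on the sustained rate would be to time a single large download from the substitute server; the store item in the URL below is purely hypothetical:

    # report the average download speed (bytes per second) for one substitute
    curl -L -o /dev/null -w 'average: %{speed_download} bytes/s\n' \
      "https://ci.guix.gnu.org/nar/gzip/…-some-large-item"   # hypothetical item
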
<rekado>I’m having a hard time understanding why “moving to AWS” is a popular goal for HPC people.
<rekado>it’s really difficult (or really expensive) to make it work *well*.
<civodul>you say it's rate-limited, but is it because you'd have to pay more to lift that limitation?
<rekado>yes
<rekado>you can always spend more money
<rekado>but pricing is really vague, and I believe that’s on purpose
<rekado>it’s hard to find numbers that make it easy to estimate what you need, so people often overallocate.
<rekado>it’s also disappointing to see that the network limitations apply to all traffic: traffic within the VPC is not privileged, so if all you want is good NFS performance you’ll have to pay for bigger VMs, even if none of that traffic is outbound.
<civodul>"pay us more!"
<civodul>still, it's problematic that it makes Guix less practical
<civodul>impractical, even
<civodul>perhaps AWS instances have to be considered a bit like volatile/throw-away containers?
<civodul>you build the whole image once, upload it, run it
<civodul>and when you need to upgrade, you rebuild + reupload
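
A minimal sketch of that throw-away-image workflow, assuming an operating-system declaration in config.scm; the exact image type and the upload mechanism depend on the Guix version and on the cloud provider:

    # build a complete disk image from the system declaration
    guix system image --image-type=efi-raw config.scm
    # upload the resulting image to the provider and boot an instance from it;
    # to upgrade, rebuild the image and upload it again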