IRC channel logs

2022-02-04.log


<zimoun>hi
<zimoun>rekado: I am attending an online conference about reproducibility and co. One speaker talks about https://www.charite.de/fileadmin/user_upload/portal_relaunch/die-charite/Strategie-2030/Charite2030_Strategie_11_2021_engl.pdf It is interesting when compared to your recent message about patents for funding.
<rekado>I think page 35 (4.3.1. Translation as a central feature of Charite) matches what I’ve been observing
<rekado>it’s like the ultimate win for the basic research institutes is to spin off a startup or a product that gets used in a clinical context, e.g. at Charite.
<rekado>“Development of new forms of interaction with partner organizations, start-ups and spin-offs”
<rekado>do any of you have practical experience with OpenStack?
<rekado>we’d like to start playing with OpenStack, primarily as a go-between from traditional HPC to AWS. I’d think that we could prevent all of HPC from moving to AWS (due to perverse cost incentives once you start using AWS) by having a “local cloud”.
<rekado>I’m looking for arguments for setting up OpenStack without making people think it’s meant to be a competitor to our HPC cluster.
<zimoun>Maybe you can reach out to Pierre-Antoine from GriCAD, I think they are using OpenStack there. Or maybe Yann from CCIPL.
<civodul>rekado: OpenStack is terrrrrible, says a friend who's been working on it at Red Hat for several years
<civodul>terrible in the sense that it's a huge bloated thing
<rekado>I’m sure it is.
<rekado>do you know of anything lighter?
<rekado>one thing that appeals to me is that they have an EC2-compatible API
<rekado>so going from AWS to local OpenStack would seem like a possibility
<rekado>(and the other way around, which is a selling point for those who’d rather have us ditch HPC and move everything to AWS)
<civodul>rekado: i don't really know tools in this space
<civodul>i don't really relate to the needs i guess
<rekado>it’s a bit of an odd in-between space that’s ripe with marketing speak and ill-defined needs
<rekado>we currently set up custom VMs for users whenever they want to build a service
<rekado>since it’s not possible (for users) to define infrastructure with software (and on demand) they usually never go beyond their VM
<rekado>the VM is usually mismatched to the requirements, because it doesn’t scale up or down
<rekado>so we end up with 50 or so custom VMs that are all little pets.
<rekado>hard to maintain responsibly, unable to scale, and with the expectation that users take care of the VM even though they really only care about their application.
<rekado>the marching direction is of course set by those who envy “cloud-first pioneers” like the Broad Institute, and assume that using AWS “somehow” would cut out IT and thus free them from admins saying “no”.
<rekado>so there’s this vague “we must do something with the cloud” combined with our inability to provide a flexible solution for those where neither HPC nor custom VM is the right answer.
*civodul nods
<civodul>IT here has opened a CloudStack instance (they picked the wrong one...) for the purposes of continuous integration
<civodul>that allows research teams to spin up machines for this purpose without having to ask anyone
<rekado>“picked the wrong one”?
<zimoun>«using AWS “somehow” would cut out IT and thus free them from admins saying “no”», bah “somehow”, there is no free lunch. ;-)
<civodul>rekado: in the sense that OpenStack is now much more popular than CloudStack, AIUI
<civodul>(and popularity is everything, isn't it?)
<rekado>yeah, there’s a mental shortcut they take. I think that’s inspired by their dislike for IT in general.
<rekado>civodul: oh, see: I read CloudStack as OpenStack. “cloud” and “open” are such empty non-words that they don’t even register for me :)
<rekado>I’m pretty annoyed by these *huge* software stacks. I’m not saying they’re “doing it wrong”, but it seems like a *lot* of software for a pretty limited feature set.
<rekado>I’d like to have an API thing (ideally one that looks like AWS, because then I don’t need to write new code) that I can use to spawn VMs, configure networking between them, and move around workloads.
<rekado>but all these “solutions” are way more than just that API implementation
<rekado>they are web interfaces, storage management thing, …
<rekado>*things
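[A hedged sketch of the portability argument above: with an EC2-compatible API, the client code stays the same and only the endpoint changes. Real code would use boto3 with its `endpoint_url` parameter; this stdlib-only dry run merely builds the URL a `RunInstances` call would target. Both endpoints and the image ID are made-up examples.]

```python
from urllib.parse import urlencode

AWS_ENDPOINT = "https://ec2.eu-central-1.amazonaws.com"  # public cloud
LOCAL_ENDPOINT = "https://openstack.example.org:8788"    # hypothetical local EC2-compatible API

def run_instances_request(endpoint, image_id, count=1):
    """Build the URL of an EC2-style RunInstances call (dry run, no request sent)."""
    params = {"Action": "RunInstances", "ImageId": image_id,
              "MinCount": count, "MaxCount": count}
    return f"{endpoint}/?{urlencode(params)}"

# The same call targets either backend: only the endpoint differs.
aws_url = run_instances_request(AWS_ENDPOINT, "ami-12345678")
local_url = run_instances_request(LOCAL_ENDPOINT, "ami-12345678")
```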
<zimoun>Parisian hospitals invested in “cloud” with a team of 6+ skilled people. They built a really good solution (Debian, Docker, frontend, etc.). It works well, from what I am seeing. Some MDs still want to go to AWS because of the Broad Institute, thinking it will be better by cutting stuff out. The field is always greener for the neighbour (French proverb :-)).
<rekado>‘the grass is always greener on the other side’
<rekado>bioinfo/med people looking at Broad Institute for best practices is like web startups copying what Google does.
<zimoun>thanks for the good proverb. :-)
<zimoun>from my understanding, Yann of CCIPL is experimenting with Guix to manage many VMs, as you are describing. I do not know about the API though.
<zimoun>Maybe, it could be worth for you to connect with them about this topic. :-)
<rekado>yes, sounds like I should :)
<rekado>we are not at all committed to OpenStack
<zimoun>civodul: arf, to vote on your question one needs a Mastodon account I guess )-:
<mbakke>I never read up on CloudStack, but as a former OpenStack admin, I can hardly imagine a scenario in which it is the right choice
<mbakke>bloated is one thing, but it is outright buggy and difficult to diagnose, with misleading error messages (permission denied, but actually ConnectionRefused), etc
<mbakke>and it still does not have a proper VM scheduler after all these years, from what my colleagues tell me :P
<mbakke>thousand-line long config files that are also Python scripts
<mbakke>(I've missed ranting about Openstack, thanks!)
<mbakke>high-availability managed by rudimentary shell scripts fired by keepalived, leading to some interesting races with flapping networks
<mbakke>database corrupting itself when an operation fails somewhere in the huge complex stack of components, but all components chug along happily like nothing happened
<mbakke>message queues filling up for no good reason...there was a bad reason but the details elude me
<mbakke>I've worked a bit with Openshift recently (essentially Red Hat's Kubernetes offering), and it's been a pretty good experience (as an end user)...it only supports containers though.
<rekado>oh dear… this sounds… interesting.
<mbakke>just recently I heard that one installation had placed almost all VMs on a single machine, and live migrating from it failed ... seems like not much has changed since I worked with it :P
<mbakke>a Kubernetes-like API for managing Guix deployments would be a dream
<mbakke>I've been fantasizing about writing a Guix "hypervisor" for Ganeti (it supports Xen, KVM and LXC), but the Ganeti API is not really multi-tenant, so it would be a hard sell
<civodul>mbakke: what would be a "Kubernetes-like API for managing Guix deployments"?
*civodul knows next to nothing about Kubernetes
<mbakke>civodul: Kubernetes has a declarative deployment API, where you can describe complete services in terms of "pods" (one or more instances of the same container image), along with HTTP routes (/ to this pod, /archive to that pod), persistent storage for some pods, etc.
<mbakke>this paradigm would lend itself well to Guix containers (or VMs), but currently I have to build and upload Docker images separately: https://github.com/unioslo/mreg/blob/master/ci/manifest.scm
<mbakke>here is an almost-complete "hello world" example deployment to give a better picture: https://paste.debian.net/1229641/
<mbakke>I can make tweaks to that file, send it off to Openshift, and it will gracefully roll out new pods and shut down the old ones
<mbakke>pretty neat
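[A minimal sketch of the declarative shape mbakke describes: a Deployment declaring two pods from one image, plus a Service exposing them. All names and the image are illustrative, not taken from the linked paste.]

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello            # illustrative name
spec:
  replicas: 2            # two pods from the same container image
  selector:
    matchLabels: {app: hello}
  template:
    metadata:
      labels: {app: hello}
    spec:
      containers:
        - name: hello
          image: hello-world:latest   # assumed image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector: {app: hello}   # routes traffic to the pods above
  ports:
    - port: 80
      targetPort: 8080
```

Sending an updated version of such a file back to the cluster triggers the graceful rollout mbakke mentions.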
<civodul>nice
<civodul>so each of the "apps" running in those pods must provide an HTTP interface, right?
<mbakke>not necessarily, you can expose and load balance any odd network port