<paroneayea>so of course, going across the vm layer isn't going to work either....
<paroneayea>maybe time to test with an ssh server or something.
<mark_weaver>sneek: later tell tyrion-mx: your home directory is probably on the root partition, obscured by the mounted filesystem. you should copy the dot files from your home directory as created by guix. you can get to your hidden home directory by bind-mounting the root filesystem somewhere else, using a command like "mount --bind / /rootfs"
<mark_weaver>I have baby duty right now, so I can't talk reliably or at length, but one question I had is: why did you make scripts to run the nginx start/stop commands instead of just running those commands directly?
<davexunit>mark_weaver: because I'm not good at gexps :)
<marusich>I have reviewed the section of the manual on using the guix configuration system; however, I'm not sure how to specify multiple file systems. If I want to mount a root fs and also another fs (e.g., for the users' home directories), how would I specify that in the config file?
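The question above can be answered with a sketch of an `operating-system` declaration carrying two file systems: the root partition plus a separate partition mounted on /home. The device labels ("root", "home") and the ext4 type are assumptions for illustration; see the "File Systems" section of the Guix manual for the exact fields.

```scheme
;; Sketch: an operating-system declaration with a root file system
;; and a second file system for /home.  Labels and types here are
;; placeholders, not a tested configuration.
(operating-system
  ;; ... host-name, users, services, etc. ...
  (file-systems (cons* (file-system
                         (device "root")      ; partition labeled "root"
                         (title 'label)
                         (mount-point "/")
                         (type "ext4"))
                       (file-system
                         (device "home")      ; partition labeled "home"
                         (title 'label)
                         (mount-point "/home")
                         (type "ext4"))
                       ;; keep the standard pseudo file systems
                       %base-file-systems)))
```

`cons*` prepends the custom entries onto `%base-file-systems`, which supplies the standard pseudo file systems the system expects.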
<marusich>I see in the Guix manual that it is possible to roll back system updates. I understand how to roll back a change to my own user profile using something like guix package --roll-back, but I do not see how one does this for the system itself.
<davexunit>marusich: that is done via the grub boot menu
<marusich>Oh, so you'd just select the previous version from there?
<marusich>Are there changes that can't be rolled back? for example, if I update my system's config file to include a new file system, but later I decide I want to remove it after all, can I do a "roll back" or do I need to run something like "guix system reconfigure" on a modified copy of the config file which happens to be identical to the original one?
<davexunit>in that case, if nothing else is different that you want to keep, you can just reboot and choose the old version of the system without that file system.
<davexunit>the things that can't be rolled back are the inherently stateful things outside of guix's control: home directories, databases, etc.
<marusich>My understanding is that GRUB "basically" just chooses what kernel to load and passes in some arguments to it. How is it that you can have totally different system configurations defined by different GRUB menu entries? Where might I look to find out more about that?
<marusich>I don't want to take up your time with such minutiae, but I'm interested to know how to find out how those kinds of nitty-gritty things are implemented in the case of Guix, so I can understand it and hopefully help contribute later on.
<sprang>I'm trying to integrate with geiser. The manual suggests, under "The Perfect Setup", doing this: (add-to-list geiser-guile-load-path "~/src/guix")
<sprang>but I'm not sure where that goes... looks like emacs/init, but 'geiser-guile-load-path is not defined
<marusich>During boot, the system's output stopped for a while, so I figured it was waiting for my input, even though there was no prompt on-screen. When I typed in my password for the encrypted device, the system immediately resumed booting up. Is this expected behavior?
<amz3>marusich: sorry, i don't know. If nobody else can help you, come back at 12:00 AM GMT. During the summer there are fewer devs around at this time
<sneek>Camel_, davexunit says: we do not yet have an nginx system service, though we do have a package. writing the service has been on my TODO list for awhile and I would be very happy if someone beat me to writing it. :)
<Camel_>I've tried to set the screen resolution in Guix, but failed.
<marusich>Camel_: I was able to change my system (specifically, I was modifying mapper and file system mounts) by doing "guix pull" once and then doing "guix system reconfigure $path_to_config_file" followed by a reboot, multiple times, until I got what I wanted. You can probably do something similar.
<paroneayea>it should be adding the right arguments, but for whatever reason I just can't get it to happen... maybe similar to why I can't netcat over localhost on the vm though I can on my own machine
<davexunit>can you get something to work without using guix at all?
<paroneayea>davexunit: good question, using the same arguments to say a debian image with qemu or something?
<davexunit>paroneayea: yeah, something to reduce the problem set
<davexunit>is it guix that is the problem, or is it the networking setup on your machine?
<mark_weaver>ultimately, dmd will be better for us because of the guile integration, and also systemd is not an option for us since we intend to support the hurd and the systemd developers are unabashedly uninterested in portability to anything other than linux (the kernel)
<rekado->I just patched the build process until it compiled, i.e. by re-adding stuff Andy threw out.
<quasinoxen>dmd in its current state is indeed rather minimal, but the fact that it's Scheme in and of itself gives plenty of leverage
<davexunit>I spent some time reading the dmd code last night
<quasinoxen>though i suppose many find that difficult to appreciate
<davexunit>one feature that I would like it to have is repl server integration to facilitate live coding
<mark_weaver>I'm currently working on one of the steps toward GNOME: network-manager, and my preliminary work has uncovered some serious problems in our current polkit and some other issues with the underlying frameworks.
<quasinoxen>Does anyone actually know what features from systemd people want in dmd?
<quasinoxen>Actual justifications, not just buzzwords like "socket activation".
<davexunit>if we can hack a running dmd instance and add/remove/replace services easily, it will be a lot easier to work with.
<davexunit>quasinoxen: hehe, "socket activation" *is* one of the things I want.
<mark_weaver>quasinoxen: for starters, I definitely want the socket/dbus/automount activation stuff
<quasinoxen>Most "activation" schemes are simply fancy terms for conditional execution to begin with.
<quasinoxen>Given dmd being Scheme, these would be relatively trivial to add.
<mark_weaver>the basic idea is that systemd starts listening to all the sockets (and dbus services and for automount requests) immediately, before starting anything, and then lazily starts up the daemons upon the first attempt to talk to them over sockets/dbus or by attempting to access something within a mount point.
<mark_weaver>so first of all, this means that dependencies don't have to be declared at all.
<mark_weaver>and it massively improves the parallelism of system startup
<mark_weaver>it would be a huge improvement for us to have this in dmd
<mark_weaver>and it's probably not that hard to implement. there's just so much to do.
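The activation scheme mark_weaver describes can be sketched in a few lines of Guile. This is a toy illustration only, not dmd's or systemd's actual API: the supervisor listens on the service's socket itself, and only spawns the real daemon when the first client connects.

```scheme
;; Toy sketch of socket activation (hypothetical helper, not a real
;; dmd interface).  The supervisor owns the listening socket; the
;; daemon is started lazily on the first connection and inherits the
;; socket so it can serve this client and all future ones.
(use-modules (ice-9 match))

(define (activate-on-connect socket-path spawn-daemon!)
  (let ((sock (socket PF_UNIX SOCK_STREAM 0)))
    (bind sock AF_UNIX socket-path)
    (listen sock 5)
    ;; Block until the first client shows up...
    (match (accept sock)
      ((client . address)
       ;; ...then start the daemon, handing over both the listening
       ;; socket and the pending connection.
       (spawn-daemon! sock client)))))
```

Because the supervisor accepts connections before any daemon exists, clients of service A can connect while A is still starting, which is where the startup concurrency comes from.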
<quasinoxen>This isn't exactly true. Dependencies are still used for ordering and relationships. In fact, the complex dependency and transaction management is the cornerstone of systemd.
<quasinoxen>As for parallelism, I'm not sure that's even the correct term. It's more like asynchronicity than anything else.
<quasinoxen>You have fundamental I/O constraints per Amdahl.
<quasinoxen>launchd was the one that thought up the buzzword "socket activation" and it still requires sysadmins to manually set up synchronization and ordering policies, usually via Mach IPC or some other mechanism.
<mark_weaver>CPU is not 100% utilized while starting up most services
<mark_weaver>often they have to wait for I/O, or for some hardware device to become available, or whatever
<mark_weaver>and when service B depends on service A in some way, it's a waste to have to wait for A to completely start up before even beginning to start B.
<mark_weaver>these lazy activations allow A and B to start in parallel
<mark_weaver>and maybe B won't even try to contact A until near the end of its startup procedure
<quasinoxen>It's more concurrency here than it is parallelism.
<quasinoxen>Since the point is to have deferred resource access.
<quasinoxen>mark_weaver: Thank you for the invitation. I'm rather occupied with other projects, however. I just finished conducting an experiment with Android init on GNU/Linux. I expect to post the article today or tomorrow.
<quasinoxen>davexunit: Do you have any suggestions for promoting dmd, other than working on X activation support (which in all honesty I do not see as a high priority, though providing a generic solution may be technically interesting)?
<mark_weaver>another thing that DMD desperately needs is a way to update its services safely without rebooting the system.
<mark_weaver>and also, to have it not die and thus cause a kernel panic if something goes wrong with one of its service definitions.
<mark_weaver>the last time I added a service (wicd-service) to DMD, debugging was a pain because DMD would die and the kernel would panic (as it does when PID 1 dies), and it was hard to see what was wrong.
<mark_weaver>it ended up being a simple typo; I misspelled an identifier somewhere.
<mark_weaver>ideally, DMD would run the code associated with service definitions within a subprocess
<mark_weaver>and this also seems like a good approach to allowing those service definitions to be replaced without restarting DMD.
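The subprocess idea above could look something like the following Guile sketch (an assumed helper, not dmd's actual API): the service action runs in a forked child, so an error in the service code cannot crash PID 1 and panic the kernel.

```scheme
;; Sketch: evaluate a service action in a forked child so a buggy
;; service definition cannot take down PID 1.  Hypothetical helper,
;; not part of dmd.
(define (run-in-subprocess thunk)
  (let ((pid (primitive-fork)))
    (if (zero? pid)
        ;; Child: run the action, reporting success or failure
        ;; through the exit status.
        (catch #t
          (lambda () (thunk) (primitive-exit 0))
          (lambda args (primitive-exit 1)))
        ;; Parent (PID 1): collect and return the child's exit code.
        (status:exit-val (cdr (waitpid pid))))))
```

The parent only ever sees an exit status, so even an uncaught error in the child leaves the supervisor intact.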
<mark_weaver>in a sense, DMD allows everything we need, because you can connect to it and get a REPL and change whatever you like in the running DMD process.
<mark_weaver>however, it's unsafe. make a mistake, and boom, your system panics.
<davexunit>DMD's REPL server integration should guard the evaluated code.
<mark_weaver>we could improve the safety somewhat using Guile's exception catching mechanisms more comprehensively within DMD
<mark_weaver>and maybe that would be good enough in practice, I don't know.
<davexunit>it would certainly be an easy thing to start with
<davexunit>that's how I keep typos from crashing the games I write in Sly.
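The guarded-evaluation idea davexunit and mark_weaver describe might be sketched like this (`safe-run` is a hypothetical helper, not existing dmd code): wrap a service action in Guile's `catch` so that a typo or runtime error is logged instead of propagating up and killing the dmd process.

```scheme
;; Sketch: guard a service action with Guile's `catch' so errors are
;; logged rather than fatal to PID 1.  Hypothetical helper.
(define (safe-run name thunk)
  (catch #t
    thunk
    (lambda (key . args)
      (format (current-error-port)
              "service '~a' failed: ~a ~s~%" name key args)
      #f)))
```

A service action wrapped this way returns #f on failure, which the supervisor can report without crashing.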
<mark_weaver>the thing is, since we also want to be able to update service definitions, it would seem clean to have those service definitions in separate guile scripts on disk, rather than loaded internally into PID 1.