*vagrantc is building all guile packages in Debian against both guile versions
<vagrantc>doing it kind of sloppily, though ... guile-zstd has both the guile-2.2 and guile-3.0 stuff ... because i don't want to deal with the administrative overhead of doing versioned package names ... this is the sort of thing way nicer in guix :)
<vagrantc>mostly for bureaucratic reasons ... e.g. waiting for NEW review when you add a new binary package
<leoprikler>mroh: There's probably a long way to go still, given that it's built on Electron
*vagrantc sighs waiting for linux-libre builds on aarch64
<vagrantc>this is one package i really wish i could reliably get substitutes for...
<lfam>vagrantc: Yes, "we" should look into that
<lfam>vagrantc: Do you know the .drv filename?
<lfam>I can poke around on the build farm
<vagrantc> /gnu/store/k5hzp9y9pwm1c42g2fv14d574zl37y42-linux-libre-arm64-generic-5.10.9.drv
<vagrantc> /gnu/store/qv39ppfhjifpiyjdhvp0a1qphlz8phjs-linux-libre-5.10.9-guix.tar.xz.drv
<vagrantc> /gnu/store/0hx8nyhiab9lyzv465c1lp94imk7fvdq-linux-libre-5.10.9-guix.tar.xz.drv
<vagrantc>even just the tarballs would be great; the kernel build itself doesn't take long
<lfam>The tarballs are sooooo annoying. I lost count of how many times they are unpacked and repacked
<lfam>I will see if we can adjust these timeouts
<lfam>In the meantime, I'll start building them by hand on CI
<lfam>That should at least give you substitutes
<lfam>I think that things are really looking up for CI
*lfam starts build with 10000 second timeouts
<lfam>It's annoying that Guix doesn't let you build the source tarball on x86_64
<vagrantc>yeah, that different architectures can't share something like a tarball, unless it's from a third-party source
<lfam>Anyways, it's been offloaded to one of the overdrives
<lfam>So, if it timed out after 1 hour of silence, we can assume it will take a few hours to build the source code, and a few more hours to build the kernel
<lfam>Maybe the kernel will build more quickly, because it can be parallelized effectively
<lfam>There won't be substitutes for several hours, is my point
<lfam>We have overdrive 1000, which is a quad-core A57
<vagrantc>it takes a few hours on rockpro64 and pinebook-pro
<vagrantc>the tarball process doesn't make much use of multiple cores, from what i've seen
<lfam>It's also heavily I/O bound
<lfam>It's a case where we shouldn't use lzma
<lfam>I'm curious, do you have a fan in your rockpro64? Can the kernel control it "properly"? Or is it always running at the same speed?
<lfam>I'm considering purchasing one
<vagrantc>lfam: you can adjust the fan speed with some hwmon interface in /sys
<lfam>I was reading about that :) But, it doesn't get adjusted automatically based on thermal conditions?
<vagrantc>it's got 255 settings, in theory ... but i suspect anything slower than ~48 will just stall the fan
<vagrantc>lfam: out of the box, no ... but i think it has all the hooks where a daemon could implement something like that
<lfam>Alright, I'll keep it in mind
<vagrantc>or i just don't have the right kernel configuration :)
<lfam>Yeah, it seems like the best of what's currently available
<vagrantc>for a reasonably affordable, moderately fast system, it's not too bad
<vagrantc>i don't think u-boot supports loading from sata yet, unfortunately
*vagrantc really wishes for a split /boot partition in guix
<lfam>I might just wait for Olimex Tukhla. We'll see
<vagrantc>heard good things about olimex, but never managed to get any
***catonano_ is now known as catonano
<vagrantc>lfam: guess i had a head start on building the tarball ... finished locally already :)
<lfam>vagrantc: Oh well. It will be available from CI eventually. And I filed a ticket about increasing the timeout
<vagrantc>lfam: yeah, it would be nice to fix that longer term, thanks! :)
<vagrantc>ok, my tests of guix from debian experimental went ok, now building and going to upload to debian unstable; it will hopefully land in bullseye in ~10 days ... and make it into the next debian stable release :)
<apteryx>lfam: on core-updates it won't happen anymore
<apteryx>also; core-updates has xz multi-thread compression, which brings the compression time down from 6 minutes to 30 s on a fast machine
<apteryx>vagrantc: curious, what advantages would a split /boot have?
<apteryx>mbakke: I'm always amazed how long building Haskell stuff takes (mostly because the deps are so deep)
<vagrantc>apteryx: i could put /boot on microSD so u-boot can boot to guix but keep /gnu on sata
<apteryx>ah; so a u-boot limitation (no sata?)
<apteryx>and I reckon u-boot is the only option for that board?
<apteryx>I wonder what prevents GRUB from competing in the ARM space.
<vagrantc>apteryx: currently, u-boot only, far as i'm aware
<sturm>Anyone else having trouble connecting to gitlab.com with IceCat? It's working in Chromium for me, but I'd rather not run two browsers if I can avoid it.
<sturm>(Issue is something to do with the JavaScript DDOS protection challenge page that says "Checking your browser before accessing gitlab.com")
<sturm>Thanks mroh, I'll check that out
<mroh>you're welcome. Hope it helps.
<sturm>mroh: yes, confirming that it's the same issue - installing the User Agent Switcher add-on for IceCat allows me to access gitlab.com. rekado_: I found your mention of this in the December IRC logs - just FYI in case you hadn't resolved that
<mroh>Ben Sturmfels: thank you for confirming.
<leoprikler>raghavgururajan: nah, I don't think we'll get calls to not crash downstream.
***apteryx is now known as Guest53393
***apteryx_ is now known as apteryx
<raghavgururajan>leoprikler, I didn't understand what you said. "nah, I don't think we'll get calls to not crash downstream."
<leoprikler>meaning we (downstream) probably won't get calls to not crash until upstream does something
<pkill9>why is the 'time' command not on my system?
<pkill9>and the 'time' package provides a different one
<pkill9>ah it's a bash builtin, i'm using fish
<PotentialUser-34>I'm following gnunet.org Use documentation, after installing it via `guix install gnunet`...
<PotentialUser-34>...and I'm stuck trying to start gnunet-gns-proxy: the doc says to run /usr/lib/gnunet/libexec/gnunet-gns-proxy, but the command gnunet-gns-proxy is not found
<leoprikler>PotentialUser-34: substitute /usr/lib/.../libexec with $GUIX_PROFILE/libexec, where $GUIX_PROFILE is the profile in which gnunet is installed
<PotentialUser-34>I'm confused by guix's concept of profiles. I just set GUIX_PROFILE as the guix output suggested...
<leoprikler>manually setting GUIX_PROFILE does nothing, it's more of an idiosyncratic variable than anything else
<leoprikler>setting GUIX_PROFILE followed by sourcing $GUIX_PROFILE/etc/profile "activates" the profile
<mroh>PotentialUser-34: try something like `$(guix build -q gnunet)/lib/gnunet/libexec/gnunet-gns-proxy`
<leoprikler>my bad, so substitute /usr/lib with $GUIX_PROFILE/lib
<leoprikler>i.e. you should get lib/gnunet/libexec like mroh said
<pkill9>how do I compile a package for a Raspberry Pi on guix?
<ekaitz[m]>hi everyone, does gcc-toolchain provide a risc-v cross-compiler or is it packaged somewhere else?
<ekaitz[m]>I see there's an arm-none-eabi-toolchain but I'm unable to find a risc-v compiler toolchain
<leoprikler>how do I launch a desktop file from the command line?
<stikonas>grep the Exec line and execute it in a subshell?
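The profile "activation" leoprikler describes above — setting GUIX_PROFILE and then sourcing its etc/profile — looks roughly like this in practice (a sketch; the path assumes the default per-user profile location):

```shell
# Activate a Guix profile: set GUIX_PROFILE, then source its etc/profile.
# As leoprikler notes, setting the variable alone does nothing; the
# sourcing step is what actually sets PATH and the other search paths.
export GUIX_PROFILE="$HOME/.guix-profile"   # assumed default location
if [ -f "$GUIX_PROFILE/etc/profile" ]; then
  . "$GUIX_PROFILE/etc/profile"
fi
```

After this, a file such as `$GUIX_PROFILE/lib/gnunet/libexec/gnunet-gns-proxy` can be referenced relative to the profile, as suggested in the conversation.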
<nixo_>guix environment --ad-hoc dex -- dex $FILE.desktop
<nixo_>I moved to define-foreign-object-type as you suggested, added doc generation and started adding more of the missing bindings when I saw the changes in your repo. When I have some time I'll rebase, I've been busy these days. I'll try to do something today
<leoprikler>Ahh, my bad, it appears I've inadvertently caused you extra work.
<pkill9>it could do with a few improvements, maybe I will contribute them
<theruran>ekaitz[m]: you're already making your own guix packages?
<ekaitz[m]>i packaged many things for guix already, yeah
<theruran>it will make it easier to do RISC-V development
<ekaitz[m]>I think it's better to make all the packages separately, what do you guys think?
<ekaitz[m]>what I'm not sure about is what the repository provides by itself
<ekaitz[m]>i mean, does it do anything other than traversing all the submodules and compiling them separately?
<rekado_>ekaitz[m]: doesn’t upstream GCC already support RISC-V?
<rekado_>then all you’d have to do is build it for that target
<rekado_>just like the arm-none-eabi-toolchain
<ekaitz[m]>I asked that in the morning but got no answer
<ekaitz[m]>installing gcc-toolchain it provides: x86_64-unknown-linux-gcc...
<ekaitz[m]>i guess it's not as simple as installing it... i need to build the xgcc like arm does
<ekaitz[m]>can anyone help me a little bit to get started with this?
<ekaitz[m]>so I need to make (cross-gcc "riscv-unknown-linux" gcc) or something like that?
<ekaitz[m]>and just "copy" everything in arm's toolchain to make a separate one
***amiloradovsky1 is now known as amiloradovsky
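Stikonas's grep-the-Exec-line suggestion can be sketched as below (a self-contained demo; the .desktop file and its contents are invented here so the snippet runs anywhere):

```shell
# Sketch: extract the Exec= line of a .desktop file and run it in a
# subshell, as stikonas suggests. A demo desktop file is created first
# so the example is self-contained.
cat > /tmp/demo.desktop <<'EOF'
[Desktop Entry]
Name=Demo
Exec=echo hello %U
EOF

# Take the first Exec= line and strip field codes such as %U or %F.
cmd=$(grep -m1 '^Exec=' /tmp/demo.desktop | cut -d= -f2- | sed 's/ *%[A-Za-z]//g')
sh -c "$cmd"
```

nixo_'s `guix environment --ad-hoc dex -- dex $FILE.desktop` is the more robust route, since dex implements the Desktop Entry spec rather than a one-line grep.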
<leoprikler>(cross-gcc "riscv-unknown-linux") (cross-binutils "riscv-unknown-linux"), etc.
<ekaitz[m]>but it has to be because... why don't we have the toolchain already packaged then? :D
<leoprikler>I think we prefer native bootstrap, as that can in theory be done all on one machine.
<leoprikler>As a corollary, cross-gcc is rather something you'd end up invoking through --target, for development environments, or in some niche packages.
<ekaitz[m]>but I need it raw as a tool for my own development and testing, so i have to expose those packages as the arm toolchain does, right?
<leoprikler>you simply write a manifest that has whatever cross-gcc and cross-binutils you need in it
<leoprikler>then you can spawn one-off environments from that using guix environment -m
<theruran>ekaitz[m]: there's newlib in the guix repo too, so you can target bare metal
<ekaitz[m]>in gnu/packages/bootstrap.scm line 306 it's doing some weird stuff with the dynamic linker when I try to make a cross-gcc for riscv32
<ekaitz[m]>can I just say i don't want a dynamic linker?
<ekaitz[m]>should I do the same thing that AVR does with no-ld.so?
<rekado_>ekaitz[m]: what I meant is to define the toolchain just like the arm-none-eabi-toolchain packages are defined, except that you’d use the latest GCC as the base.
<ekaitz[m]>well but leoprikler proposed a different thing
<ekaitz[m]>i tried to do an environment that has (cross- thingies
<ekaitz[m]>rekado_ is proposing to make a package like the arm toolchain but for riscv, which is what I said and, if i understood right, leoprikler proposed just to use an environment instead
<leoprikler>I think we're talking about different things here.
<ekaitz[m]>so i proposed to build a toolchain like arm's to be able to work on that easily
<leoprikler>Yep, and that should work if you get a riscv32 gcc and riscv32 binutils in your environment.
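The manifest leoprikler suggests could look roughly like this (a sketch; the target triplet is illustrative, and `cross-gcc`/`cross-binutils` are the procedures from `(gnu packages cross-base)`):

```scheme
;; manifest.scm -- sketch of a one-off cross-toolchain environment.
;; The target triplet below is illustrative; pick the one you need.
(use-modules (gnu packages cross-base))

(define target "riscv64-linux-gnu")

(packages->manifest
 (list (cross-gcc target)
       (cross-binutils target)))
```

Then `guix environment -m manifest.scm` spawns the one-off environment with those cross tools on PATH.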
<ekaitz[m]>but I can't do that because i get some errors related to the dynamic linker
<ekaitz[m]>which shouldn't have anything to do with all this, i think
<leoprikler>Doing that through a "-toolchain" package works, assuming cross-gcc can build for your target architecture.
<leoprikler>You can do that through cross-gcc and cross-binutils if those build for your target platform
<leoprikler>But, it might happen that the bootstrap for your target platform is different enough to require its own package.
<leoprikler>In that case, you need to start hacking on your own on that package :)
<ekaitz[m]>so in the end, making a -toolchain package is just a simpler way to make an environment to work in a specific architecture, right?
<leoprikler>no, you still need to spawn an environment --ad-hoc your toolchain package
<leoprikler>you don't save yourself the effort of creating a programming environment
<ekaitz[m]>but you don't need to choose the packages one by one
<leoprikler>Well, sure, but it's just two packages that you can do with function calls, if it's no different from (package (propagate cross-gcc cross-binutils))
<ekaitz[m]>in any case I need to create a cross compiler, and it's not possible to make it through (cross-* because it's failing with a very weird issue related to the dynamic linker
<ekaitz[m]>looks like the glibc-dynamic-linker cond doesn't specify a dynamic linker for riscv32
<leoprikler>I don't think we have much of a riscv32 bootstrap story yet, so that's somewhat to be expected
<ekaitz[m]>so... should I forget about that and make a package for riscv32 gcc, or should I add a new line there with the linker path?
<leoprikler>Tracking down a linker would give you a lot of stuff for free, so if it's doable, that's certainly worth investigating
<ekaitz[m]><leoprikler "Tracking down a linker would giv"> i'll change that and try to make it work then!
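A "-toolchain" package of the kind discussed here — one that merely propagates a cross compiler and binutils, in the spirit of arm-none-eabi-toolchain — might be sketched as follows. Everything here is hypothetical: the package name, the target triplet, and the inherit-based shortcut; the real arm-none-eabi definitions in `(gnu packages embedded)` are the pattern to follow.

```scheme
;; Hypothetical sketch of a riscv "-toolchain" package that propagates
;; the cross compiler and binutils, as leoprikler describes. NOT the
;; real packaging pattern -- see (gnu packages embedded) for that.
(use-modules (guix packages)
             (gnu packages cross-base))

(define target "riscv64-linux-gnu")   ; illustrative triplet

(define riscv-toolchain               ; hypothetical name
  (package
    (inherit (cross-gcc target))      ; reuse metadata from the cross gcc
    (name "riscv-toolchain")
    (propagated-inputs
     `(("gcc" ,(cross-gcc target))
       ("binutils" ,(cross-binutils target))))))
```

As leoprikler points out, such a package still has to be pulled into an environment (`guix environment --ad-hoc`); it only saves you from listing the two cross packages by hand.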
<GNUtoo>(define-zone-entries example.org.zone ("@" "" "IN" "A" "127.0.0.1")
<GNUtoo>Would it be possible to instead use a standard zone file?
<rekado_>ekaitz[m]: a quick grep of “riscv” in the GCC 10 sources shows me that it should be supported
<rekado_>so you really just need to build the toolchain (with newlib as the C library)
<rekado_>I don’t see why you would need anything else
<ekaitz[m]><rekado_ "I don’t see why you would need a"> basically doing (cross-gcc "riscv32-linux-gnu") fails, that's the main issue
<GNUtoo>For instance "knot-zone-configuration" has "file" which is described as "The file where this zone is saved."
<ekaitz[m]><leoprikler "is newlib a libc replacement?"> it's a libc that is more oriented towards embedded devices, yes
<GNUtoo>Is that what I'm looking for, or is the generated configuration saved there instead?
<rekado_>ekaitz[m]: you need to use a recent gcc package as the base
<leoprikler>GNUtoo: looking at it, it appears as though you'll have to schemeify your zone file
<rekado_>if you haven’t yet done so, please look at the definitions in (gnu packages embedded)
<ekaitz[m]><leoprikler "ekaitz: try adding #:libc newlib"> i even tried with #f hahaha
<ekaitz[m]>but still it's trying to make the gcc, and it checks the "riscv32-whatever" string and fails in glibc-dynamic-linker, which is something I don't need at this moment....
<GNUtoo>it outputs the generated config files when that matches
<GNUtoo>and it does that otherwise: format #t " file: ~a\n" file)
<ekaitz[m]><leoprikler "#f does nothing"> that's supposed to avoid the installation of a libc (which I don't need btw)
<ekaitz[m]>GNUtoo: equal? is a comparison function, so (equal? file "") is checking if file==""
<GNUtoo>I was more thinking of "(format #t " file: ~a\n" file)
<GNUtoo>Though maybe leoprikler checked it well already, but I'm unsure as I'm a newbie in lisp
<ekaitz[m]>my "format" skills are too bad to help you with that :(
<leoprikler>GNUtoo: what exactly are you trying to do? If you just want to set up your zone file, writing it as Scheme data should not be too tasking
<GNUtoo>leoprikler: I want to share a zone file between 2 DNS servers running a different distribution and managed in a different way
<GNUtoo>I guess if that doesn't work out I probably need to look into how to do it dynamically from within the DNS settings instead
<GNUtoo>(like with zone replications and so on)
<GNUtoo>But I'd feel more confident having the same file replicated twice, for security reasons and simplicity
<leoprikler>if you generate your zone file in Guix, you should be able to upload that to whatever other machine you have, no?
<GNUtoo>The idea would be to do the opposite: have the zone file taken from a Trisquel install
<leoprikler>well, in that case you'd write (zone-file->scm "trisquel-zone-file")
<leoprikler>where zone-file->scm is a function you implement :)
<GNUtoo>So I have to parse the zone file (which is standard) and get the data out of it?
<leoprikler>yep, and you pass that parsed data to the service
<ekaitz[m]>I have to add this line to bootstrap.scm in glibc-dynamic-linker: ((string=? system "riscv32-linux") "/lib/ld-linux-riscv32ifd_ilp32.so.1")
<ekaitz[m]>but in any case i think this doesn't make any sense, so I need to: 1- choose the correct path 2- make a separate "-toolchain" package
<katco>hey all, is there a canonical guix way to parse html in guile? i just need to pluck a meta tag out, and string matching isn't cutting it.
<katco>but html is not xml, so i'm not sure about the wisdom of using `sxml` to parse it
***nckx[2] is now known as nckx
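The zone-file->scm function leoprikler proposes is hypothetical; a naive sketch that splits each record line of a standard zone file into the kind of field lists define-zone-entries takes might look like this (real zone files — multi-line records, $TTL, parentheses — need more care):

```scheme
;; Hypothetical zone-file->scm sketch: turn each non-comment line of a
;; standard zone file into a list of whitespace-separated fields, e.g.
;;   "@ IN A 127.0.0.1"  =>  ("@" "IN" "A" "127.0.0.1")
(use-modules (ice-9 rdelim))

(define (zone-file->scm port)
  (let loop ((entries '()))
    (let ((line (read-line port)))
      (if (eof-object? line)
          (reverse entries)
          ;; string-tokenize with no char-set splits on whitespace;
          ;; lines starting with ";" are zone-file comments.
          (let ((fields (string-tokenize line)))
            (loop (if (or (null? fields)
                          (string-prefix? ";" (car fields)))
                      entries
                      (cons fields entries))))))))
```

The parsed lists would then be handed to the knot service configuration, as leoprikler says.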
<pkill9>i like that guix profiles remember commandline customisations to the packages
<rekado_>katco: guile-lib has htmlprag, a “pragmatic” html->sxml converter
<katco>rekado_: ya, i'm looking at that... how would i pull that in as a dep for guix?
<rekado_>what’s “a dep for guix”? Guix has so many parts! For build side code? For host side code? For a package in Guix…?
<katco>yeah, sorry for the imprecise question! for code in `(guix import ...)`
<lfam>jonsger: Are you around?
<rekado_>you can make it optional like guile-json used to be.
<rekado_>but it would become an input to the “guix” package
<katco>rekado_: ok, thanks for the pointer! i'm afraid that's not enough, but at least it's a direction i can begin investigating in.
<leoprikler>ekaitz[m]: that linker looks like it's packing more than just "risc32" in it? What do those other letters mean?
<ekaitz[m]>leoprikler: i have to dig deeper, i'll keep you informed
<leoprikler>okay, but where in there would you require an html parser?
<katco>leoprikler: `fetch-module-meta-data`. i need to pull `meta` tags out of html
<lfam>Anybody up for testing the staging branch? `guix pull --branch=staging && guix package -u . && guix system reconfigure ...`
<civodul>katco: depending on the HTML you're parsing, it could be that the sxml stuff is good enough, sometimes with minor tweaks
<sneek>Welcome back civodul, you have 1 message!
<sneek>civodul, raghavgururajan says: Can you do `/mode #guix +b *ozark*`?
<leoprikler>and the servers currently respond with invalid html?
***ChanServ sets mode: +o civodul
***civodul sets mode: +b *ozark*!*@*
<katco>leoprikler: well, maybe? but with the diversity of the internet, they are bound to return things my simple line matcher doesn't expect. the first of which was a meta tag spread across multiple lines.
<civodul>lfam: i'll see if i can upgrade my user profile to begin with!
<katco>leoprikler: so i think the code needs to retreat to an actual html parser that takes all of that into account
<lfam>I think the branch is ready, civodul
<katco>civodul: yes, i'm looking at that too. my scheme is terrible, and it's a little hard to figure out how to use this lib.
<leoprikler>requiring proper xhtml and bailing if the server fails that would be a reasonable way of doing things, for instance
<katco>leoprikler: what do you mean?
<civodul>katco: for (sxml simple), there's xml->sxml, which takes an input port
<leoprikler>Go is seriously fucked if proxy.golang.org is incapable of producing proper xhtml
<katco>leoprikler: it's not only proxy.golang.org
<Rovanion>joshuaBPMan: Run another OS in a virtual machine and get a GUI to click around in.
<katco>leoprikler: it's any website that the module reports as its homepage
<Rovanion>joshuaBPMan: The GUI of that OS, that is.
<katco>leoprikler: my point being, the code should make a best effort, and me duplicating html parsing is not that. there are robust libraries for this.
<civodul>Rovanion: you probably don't need the libvirt service then, but maybe something like gnome-boxes?
<katco>civodul: i saw `xml->sxml`, but (1) i wasn't sure if all html pages would be valid xml, and (2) i was trying to do something a bit more efficient than parsing the entire document
<leoprikler>why does Go call out to random web servers on the net?
<leoprikler>katco: you can first regexp-match <head/> and then just parse the entire head
<leoprikler>I don't think we should necessarily ping "private repositories".
<katco>leoprikler: i don't know what to say to that. this is how go modules work, and if we want a recursive go importer, this is what's necessary.
<leoprikler>There's no reason to go out of your way to leave proxy.golang.org
<lfam>Go is kinda hard to handle from a packaging perspective
<lfam>Every time I thought "it can't be like that", it turned out to be like that
<joshuaBPMan>civodul: May I ask what a good use of libvirt would be?
<lfam>Sometimes it's easy to fall into the pattern of thinking that something isn't correct so we shouldn't support it. But if the person writing the code aims to support some messy inputs, it's worth thinking of how to help them, especially since they probably have a more complete understanding of the problem space
<civodul>joshuaBPMan: i don't know :-) but i think it has to do with managing typically swarms of VMs
<lfam>In the past, I spent a lot of time working on making Go work in Guix. My work took us to a certain point and then my motivation ran out. But I think katco is on the right track
<civodul>it's not something you'd use to try out a single image interactively
<lfam>There is definitely a "happy path" for Go dependencies, but I found a lot of cases where it didn't work, and we had to work around things
<lfam>Now, Go dependencies have been overhauled upstream, and we have to start ironing out the wrinkles all over again
<lfam>I had basically the exact same experience as this conversation, where other Guix hackers were like, "It can't be! You don't need to do it that way." But it did have to be that way. It's frustrating
<jackhill>Is there a guide for migrating from the monolithic texlive to the modular packages? I guess I'll have to figure out all the dependencies I need by seeing what can't be found, but which package should I start with?
<katco>lfam: yes, i am particularly upset that proxy.golang.org doesn't serve all the needed info, and the way to get that data is to go pull down html. i do not love the proliferation of markup languages, but more ecosystems have json/yaml parsers on hand than full-on html parsers.
<katco>html is just such a broad target. it's not unreasonable to expect the go toolchain to handle this, but it does make anything that's not the official go toolchain very difficult to write.
<Rovanion>civodul: Gnome-boxes complains that I don't have virtualization extensions enabled in the BIOS even though they are enabled in there.
<lfam>I wonder if something needs to be part of the kvm group
<lfam>Does the user that runs gnome-boxes have permission to access it?
<Rovanion>Did not, does now, no change unfortunately.
<Rovanion>joshuaBPMan: Was thinking of another linux distro.
<OriansJ>the corrective permissions would be: sudo chown root:kvm /dev/kvm and sudo chmod 660 /dev/kvm, with the user account in the kvm group, traditionally.
<lfam>Rovanion: As always, it helps if you share the exact error message
<lfam>If libvirt is not supposed to be required for gnome-boxes, you could at least try it and see if it works, and then we could poke around and figure out what the libvirt service is doing that makes gnome-boxes work
<avalenn>katco: if you find a way to do it I would be glad. I am stuck with the meta tag too.
<lfam>Ideally, we can make it so the gnome-boxes package "just works". If that's not possible, we could add a section to the Guix cookbook
<lfam>(Assuming it's not already there)
<katco>avalenn: the only options i see are (1) perhaps the `.info` endpoint will begin returning that info someday, (2) we could take the stance that the go importer creates packages that fetch from the proxy server specified by `guix import go`, but that seems bad.
<katco>sorry, packages that fetch the code from the proxy server
<avalenn>I was wondering if downloading from the proxy server could be the route to take, indeed.
<katco>avalenn: it would be much easier for the importer to generate packages that download the code from the proxy, but i don't think that would be a good package, really.
<katco>i know we archive code, but it centralizes a whole lot of packages on one site, company, etc. it obviates where the code originally is sourced from. etc, etc
<avalenn>On html parsing, the Go stance is that they only implement the necessary subset for meta tags.
<katco>bleh... elides, not obviates
<katco>avalenn: yeah. but since it's the wider internet, i think we want as robust a parser as possible to handle weirdness.
<avalenn>Specifically: "The <meta> tag should appear early in the document to avoid confusing the go command's restricted parser."
<katco>i think sxml might be sufficient, but i don't know
<katco>i'm trying to implement that now. maybe i will take civodul's suggestion and just do the dumb, easy thing and parse the entire document and pluck the sexp out. i would rather use `ssax`, or something written specifically for html.
<jonsger>lfam: you are right, the node issue is due to a third party repo which uses node 10.22 as node-10.22
<jonsger>so sadly I can not test staging, but I hope for the best :)
<lfam>jonsger: Alright, thanks anyways :)
<Rovanion>lfam: Those instructions did help. Seems the libvirt service was required.
<lfam>I wonder if that is expected for gnome-boxes
<civodul>katco: yeah, before going for Guile-Lib's (htmlprag), i'd try the dumb sxml hack, it might be good enough
<lfam>rndd: It would help if you showed your custom package as well
<ekaitz[m]>isn't it better to keep the compilers on their own to be able to create cross compilers easily?
<lfam>rndd: I'm guessing it's caused by the kconfig replacement
<rndd>lfam: ye, i downloaded the linux-libre sources and made a defconfig, then put this config in the directory with my package. and the build of the kernel was ok
<lfam>Hard to say what's wrong without seeing your changes
<lfam>rndd: You're just using the defaults?
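The htmlprag route katco and civodul weigh above might be sketched like this for plucking `<meta>` tags out of a page (a sketch: `html->shtml` is htmlprag's permissive parser from guile-lib, and the sxpath query assumes the usual SXML tree shape):

```scheme
;; Sketch: extract <meta> elements from possibly-sloppy HTML using
;; guile-lib's htmlprag, then query the resulting SXML with sxpath.
(use-modules (htmlprag)      ; from guile-lib
             (sxml xpath))

(define (meta-tags html-string)
  ;; html->shtml is permissive: unclosed tags, meta tags spread across
  ;; multiple lines, and other real-world quirks are tolerated, which
  ;; is exactly what a line matcher cannot handle.
  ((sxpath '(// meta)) (html->shtml html-string)))
```

For the go importer's case, the interesting nodes would be `meta` elements whose `name` attribute is `go-import`, which can be filtered out of the returned list.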
<leoprikler>ekaitz[m]: iow 32_ilp32 makes little sense, does it?
<rndd>lfam: yep, "make defconfig" for x86_64
<lfam>Hm... I don't know exactly what is wrong, but I'd ask, "why?"
<lfam>And also, I wouldn't expect it to work
<rndd>lfam: well, i was reading the guix manual and found the section about custom kernels and decided that it would be fun
<ekaitz[m]><leoprikler "ekaitz: iow 32_ilp32 makes littl"> dunno, I don't know what ilp means :S
<lfam>rndd: You could look in the config and see what it does regarding ahci
<lfam>The code in (gnu system linux-initrd) expects ahci support to be modular
<lfam>Maybe the default is to make it built-in
<lfam>Compare that to the configs used by Guix, found in gnu/packages/aux-files/linux-libre
<rndd>if i copy from guix to mine
<lfam>Perhaps something else will break, but this part should work
<mhj[m]>Hi all, new to Guix, although I run NixOS on my laptop currently. Anyways, all I want to do to get started is to be able to run sshd in its default state by reconfiguring the system. It's just on the home network. I'm very new to guile (unless it's the street fighter one), and only have experience with python and c++. I'm a super n00b and want to learn more tho!
<rndd>mhj[m]: you can start sshd by adding "(service openssh-service-type)" to your services
<lfam>Guix System is configured with the config.scm file, which is usually at /etc/config.scm. Do you have that file handy?
<lfam>Alright, then it's like rndd said. Make that change, and then do `guix system reconfigure /etc/config.scm`. I don't recall if you'll need to reboot for the change to take effect
<mhj[m]>Tried that, rndd, and maybe there's something I don't get. I kept getting "invalid field specifier" and other errors.
<leoprikler>ekaitz[m]: I think that detail should be part of the triplet
<lfam>It would also help if you shared your config.scm
<mhj[m]>Ok, hold on. Might have to wait a bit...
<lfam>mhj[m]: Make sure you've imported the module containing the openssh-service-type. That module is (gnu services ssh)
<joshuaBPMan>Hmmm. Is there a simple way to define a guix service to start a guile web application? ...wait, I'll take a look at the guix data coordinator service. Sweet... brb
<Sharlatan>I'm trying to package 'sextractor', which depends on 'atlas', which is present upstream but provides just static (*.a) libraries
<rekado_>Sharlatan: ATLAS tries to tune itself to the CPU where it is built.
<rekado_>that’s why we don’t provide substitutes for it.
<rekado_>we do have a package for it, though.
<mhj[m]>Ok, I uploaded it to the Debian paste site under my username
<Sharlatan>mhm, when I include it in 'inputs' it builds locally but produces only static libs
<rekado_>Sharlatan: we do ask ATLAS to build shared libraries. Perhaps it’s broken…?
<lfam>mhj[m]: You need to put your (service openssh-service-type) within the (services) field
<lfam>mhj[m]: It should go in the list you append to %desktop-services
<lfam>Cheers! Let us know how it goes
<Sharlatan>rekado_: I see no shared library produced after installing the atlas package
<rekado_>Sharlatan: yes, so perhaps this broke at some point.
<bavier[m]>it is, and it's not as performant on any systems I've checked.
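The sshd reconfiguration walked through for mhj above boils down to a services field like this in /etc/config.scm (a sketch; the other mandatory operating-system fields are elided):

```scheme
;; Sketch of the relevant part of /etc/config.scm: import the module
;; that defines openssh-service-type and add the service to the list
;; appended to %desktop-services, as lfam and rndd describe.
(use-modules (gnu)
             (gnu services ssh))   ; provides openssh-service-type

(operating-system
  ;; ... host-name, bootloader, file-systems, users, etc. elided ...
  (services
   (append (list (service openssh-service-type))
           %desktop-services)))
```

After editing, `guix system reconfigure /etc/config.scm` applies the change. Placing the service expression outside the `(services ...)` field is one way to trigger the "invalid field specifier" error mhj hit.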