<apteryx>leoprikler: regarding that epiphany fonts issue, I can reproduce it (I think), from a GNOME session, but not from my ratpoison session?
<leoprikler>apteryx: what exactly do you mean? fonts are showing up normally in ratpoison?
<nckx>Ah, OK, the source, of course.
<nckx>Yeah, so 16.2 MiB/s over meh Belgian home wi-fi. Someone is clearly stealing our internets from under the ocean.
<nckx>vivasvat: I was expecting (for no clear reason) the built IceDove package, but it doesn't matter.
<vivasvat>wait I'd expect a popular package like that to have a substitute... maybe not
<nckx>vivasvat: I agree, but unfortunately not. I updated it earlier today, and it's a relatively heavy package, and the build farm scheduling is buggy & slow, so I would wait another day before worrying.
<nckx>vivasvat: Popularity is not a factor in anything though.
<vagrantc>i have noticed some slowdowns downloading substitutes from western north america lately ... although i have a local nginx cache
<vagrantc>even just setting up an nginx cache in north america would be a huge gain
<nckx>There should be ‘Average Dload’ and ‘Current Speed’ columns.
<apteryx>leoprikler: yeah, I see japanese fonts in ratpoison on machine A but on machine B (which uses GNOME), I don't see them.
<leoprikler>could be a bubblewrap problem, as epiphany on ratpoison doesn't have bubblewrap
<leoprikler>do you have both on the same machine or on different ones?
<vivasvat>second question is: how can I download the last available substitute rather than having to compile it myself cuz that's... work
<vivasvat>like I want the substitute instead of source in this case
<nckx>vivasvat: If you're running Guix System that will happen automatically.
<nckx>I think the non-System installation script asks you whether you want to enable them.
<nckx>No no. Since you're downloading from the substitute server, they are enabled 😛
<nckx>Guix transparently falls back to building from source.
<vivasvat>is there like a cronjob to rebuild the latest version every now and then?
<nckx>Substitutes are an optimisation. They should be identical to what you build at home.
<nckx>vivasvat: What do you mean? On your system?
<nckx>It's a bit fancier than a cron job.
<apteryx>leoprikler: I was trying on different machines, but I'll try with ratpoison on the same machine that is exhibiting the problem in GNOME.
<vivasvat>cuz don't most binary distributions like always have 'substitutes' ready
<nckx>vivasvat: But because of Bugs the system sits idle for >98% of the time last I checked :-/
<vivasvat>or does the build farm just not have the resources to compile everything
<nckx>It takes a long time to even start the builds (=evaluation), and too long to build them for all the powerful hardware that's available.
<vivasvat>so like if the latest substitute of a package is version X while the source is version Y (Y > X), is there a way to default to the latest substitute if the newer version hasn't built yet?
<vivasvat>so I'd have to specifically call guix install package-X or something to get the substitute
<nckx>You can guix pull --commit=nnn, and choose a commit that's a day or two old and hence ‘likely’ to have been completely built.
<apteryx>leoprikler: interesting, the font problem persists in a ratpoison session on that same machine
<nckx>vivasvat: So ideally, while Guix will always remain a source-centric package manager, there would be a separate git branch managed by the CI system that only receives commits once they have been built.
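For the pinning approach nckx describes, one concrete form is a channels.scm file; the commit hash below is only a placeholder (pick a real one from `guix describe` or the git log):

  ;; channels.scm -- pin Guix to an older commit that is likely to have
  ;; been fully built by CI.  The commit string is a placeholder.
  (list (channel
         (name 'guix)
         (url "https://git.savannah.gnu.org/git/guix.git")
         (commit "0123456789abcdef0123456789abcdef01234567")))

It can be used once via `guix pull -C channels.scm`, or per command via `guix time-machine -C channels.scm -- install icedove`, matching the two approaches discussed above.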
<leoprikler>try on your ratpoison machine with bubblewrap installed
<nckx>This just hasn't been written yet 🙂
<apteryx>so... it seems it's not 100% reproducible (at least one machine gets it working), but perhaps GNOME does something (trying to cache fonts itself) that causes the issue?
<vagrantc>guix time-machine is essentially a way to call "guix pull --commit=nnn && guix some-other-action"
<nckx>vagrantc: True, I always forget about that. But I think in this case guix pull is better since it only needs to be done once, then ‘guix install foo’ is always guaranteed to have substitutes.
*vagrantc is not explaining well
<vivasvat>if I pull an old version, some of my packages will be temporarily reverted right?
<vagrantc>nckx: guaranteed for a particular set of packages...
<vivasvat>is there a way to do it only for 1 package
<nckx>vagrantc: Yeah, it's more like (guix pull && foo && now back to your old guix), but that's not much better 😛
<vagrantc>there are certainly no guarantees about substitute availability :)
<vagrantc>does guix pull do nothing if it's already pulled that commit?
<vivasvat>what's a rule of thumb on substitute availability, even if there is no guarantee
<vivasvat>like will a commit from 5 days ago be built
<nckx>vivasvat: For one-off packages, guix time-machine … -- guix install foo is the way to go.
<vagrantc>vivasvat: they're available when they're available?
<nckx>vagrantc: Then it returns relatively quickly but not instantly. But makes no changes.
<vagrantc>vivasvat: sometimes stuff fails to build on older versions ... so a time-based approach might approximate it, but might lead you astray
<vagrantc>e.g. it's actually built on the current commit and fails to build on the old commit
<vagrantc>and some packages take wildly different amounts of time to build, and the entire dependency graph can trigger rebuilds even if the package itself isn't changed
<vagrantc>so you could have thousands of updates that build very quickly, or one update that takes many hours
<nckx>Which, by the way, says that it's been successfully built :-/
<vagrantc>works better than reality, apparently :)
<vivasvat>won't the build server be able to handle almost anything in less than an hour
<vivasvat>I remember reading the specs before, it seemed pretty powerful
<vagrantc>but lfam the other day walked through checking ci.guix.gnu.org, finding the last successful build of something, looking at what commit that build evaluation used, and then calling ... guix time-machine --commit ...
<nckx>vivasvat: It's 25 extremely (to me anyway, but also in general 🙂) powerful machines. Hardware is not the bottleneck. As I said earlier, it was sitting 98% unused a few months ago.
<nckx>Judging by substitute availability I don't think much has changed.
<vivasvat>ohhh then I read about how they got a new one of something
<vagrantc>there were recurring disk space issues for a while ... i seem to have a knack for pushing changes when the substitute servers are idle
<vagrantc>tends to happen around this time of day or a bit later
<vivasvat>cuz then your changes will be quickly compiled
<nckx>vivasvat: Yeess... that is correct.
<vagrantc>vivasvat: it means i usually push changes and then nothing is being compiled
<vagrantc>well, i've tested my changes locally and know it'll probably work fine in the end :)
<nckx>Oh, there's also The Big GC lock that makes the head node idle for hours a day (here: night). That's a totally different issue (less of a bug, just an ugh.)
<nckx>Guix scans the entire store looking for things to delete. On spinning hard drives. Meanwhile, nothing else can happen.
<nckx>I still think there's a different bug but that doesn't help matters.
<vivasvat>how can you tell if it's a substitute or building from source?
<nckx>The tar.xz extension in this case.
<nckx>Built store items will never have that extension.
*vagrantc releases the latest version of software.tar.xz.zip
<nckx>Damn, I typed that by hand and it's apparently wrong.
<vivasvat>see I come from arch (which is ports/binary based) and pacman handles .tar.xz/.tar.zst
<nckx>The substitutes are compressed using lzip. But we don't add an extension to the URL.
<vagrantc>the hash of the .drv will be different than the hash of what the .drv produces
<vivasvat>cuz apparently I pulled more than 6 hours ago
<nckx>I'm surprised that you didn't get a substitute for that, though, but let's not debug old things.
<nckx>vivasvat: Yeah, when I said ‘earlier today’ I meant it 🙂
<vivasvat>ok at least i'm learning a lot about how this all works
<nckx>‘Works’. Sorry about that.
<vivasvat>hahaha, almost everything works perfectly except download speeds, which is fine i guess
<nckx>If you don't get a substitute for 68.11.0 please let us know.
<nckx>Although I'll be heading bedwards soon.
<nckx>(Including the hash; that of the derivation is fine.)
<pkill9>does there exist a mail client that integrates mailing list archives?
<pkill9>i didn't know about mbox, thanks
<vivasvat>for some reason, the derivation hash is different, and it's still downloading the source instead of the substitute
<vivasvat>anyways, it's fine, I can figure it out tomorrow
<pkill9>i think i only really need something that can read mbox
<pkill9>and then set it to compose a message when i select a mail
<pkill9>i want to avoid signing up to mailing lists
<pkill9>it just fills my inbox with messages i won't read
<nckx>Well, you'll still need to import newer messages (Guile scripting time!). Wouldn't a filter be more appropriate?
<nckx>vivasvat: Does ‘guix build --{dry-run,no-grafts} icedove’ return the expected hash?
<vivasvat>and for the record, it has been about an hour and i'm still getting about 20Kib/s download speed (@ kmicu, nckx)
<vivasvat>wait for the first time, it also gave me an error
<vivasvat>substitution of /gnu/store/kiawv51z8dd41mq3sxva8sqkma5ysgc7-icedove-68.11.0 failed
<vivasvat>guix install: error: some substitutes for the outputs of derivation `/gnu/store/1r85wnjr9vq4qmqj1j0d488cc407phl8-icedove-68.11.0.drv' failed (usually happens due to networking issues); try `--fallback' to build derivation from source
<nckx>vivasvat: It means that you should have downloaded a substitute, for one. A graft, without going into detail, is a rewritten (binary-patched; not rebuilt) variant of a package used mainly for security updates.
<nckx>But this operation can be performed locally relatively cheaply, and Guix should have downloaded the ungrafted binary to graft on your machine, not the source to build from scratch.
<nckx>No... So imagine your compiled icedove contains a reference to /gnu/store/aaaa-vulnerable-openssl. If we patch openssl to fix the vulnerability, we need to rebuild thousands of packages -- not good, because we want people to have safer packages ASAP. Instead, we make sure that the fixed openssl is binary-compatible with the old one, and mark it as the old openssl's ‘replacement’, then we can just patch the icedove *binary* to contain /gnu/store/bbbb-fixed-opens
<nckx>I say ‘we’ but this is all automated and functional like the rest of Guix.
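To make the ‘replacement’ mechanism nckx describes concrete, here is a schematic sketch of how a graft is declared; the patch name is hypothetical, and in Guix proper the replacement field is added to the canonical openssl definition itself rather than to a separate variant as shown here:

  (use-modules (guix packages)
               (gnu packages tls))   ; provides the real `openssl' package

  ;; A binary-compatible variant with the (hypothetical) fix applied.
  (define openssl/fixed
    (package
      (inherit openssl)
      (source (origin
                (inherit (package-source openssl))
                ;; hypothetical patch file, for illustration only
                (patches (list "openssl-fix-CVE-XXXX-YYYY.patch"))))))

  ;; Declaring the replacement is all it takes: dependents such as icedove
  ;; are then grafted, i.e. their /gnu/store/...-openssl references are
  ;; rewritten in the built binaries instead of everything being rebuilt.
  (define openssl-with-graft
    (package
      (inherit openssl)
      (replacement openssl/fixed)))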
<nckx>If that horrible, tired explanation confuses you, as it should, there's probably a better one in the manual (info guix grafts).
<pkill9>maybe grafts should be symlinks to other packages, so then you can use a name of different length
<vagrantc>it requires modifying the binary from one set of hard-coded paths to another
<nckx>pkill9: Not that it couldn't work, but why? We have full control over the name, it's trivial to ensure it's not weird.
<nckx>You still create thousands of new packages but it's a very fast operation vs. rebuilding everything.
<pkill9>nckx: what if you need to change the name to be more descriptive
<vagrantc>vivasvat: they're dynamically linked against a static path, typically
<nckx>pkill9: Not worth the added complexity of rewriting all this to use symlinks just to have a pretty name.
<vivasvat>wait anyhow, if I was getting grafts, then how come I was unable to get the substitute
<nckx>In fact, static libraries are considered a (mild) security no-no in Guix because they can't be grafted. The Guix way gives you the reliability of static linking without losing a certain introspective quality.
<nckx>vivasvat: Well that is the 64k question now isn't it.
<vivasvat>ok ill figure this out later, not that big a deal :p
<nckx>sneek: later tell vivasvat: All I know is that https://ci.guix.gnu.org/nar/lzip/kiawv51z8dd41mq3sxva8sqkma5ysgc7-icedove-68.11.0 returns 404 while /gnu/store/kiawv51z8dd41mq3sxva8sqkma5ysgc7-icedove-68.11.0 *exists* on the server. It has nothing to do with you (nor can you do anything to fix it). Trying to get any sensible answers out of ‘guix gc’ takes ages so I gave up. My current bet is negative caching; we'll know if that's true in 1h.
<nckx>Caching negative responses. In this case there's a proxy_cache any 1h directive that could be to blame.
***catonano_ is now known as catonano
<apteryx>leoprikler: I'll try it with bubblewrap installed. Do I need any configuration, or does Epiphany make use of it if it finds it?
<apteryx>It's probably just a matter of exposing the fonts through a --ro-bind, right?
<apteryx>when I launch epiphany in a 'guix environment --ad-hoc epiphany bubblewrap' environment, it makes use of bwrap, as can be seen in 'ps -eFww'
<apteryx>but the fonts still work on my ratpoison box
<telior>hi guix! just reporting back on my font problem, I was looking at some guix config files on github and found the solution, running `fc-cache -f` solved it :)
<apteryx>telior: info guix -> i fonts RET suggests 'fc-cache -rv'
<telior>that was the first command I tried after seeing it recommended in the manual, but it didn't solve my issue
<nckx>telior: Then -f didn't either.
<nckx>-r is just a stronger -f.
<nckx>I dunno man, fontconfig is weird.
<telior>hmm weird, after running both I tried setting the fonts in the customize faces menu and it didn't work after -rv, but it did after -f
<nckx>It would have been interesting to run that with -v to see if anything had changed but too late now. 🤷
<nckx>telior: Does running fc-cache -rv again break it?
<telior>not sure but the new fonts are still working, I guess it didn't
<telior>btw, I get that `info guix` shows the manual, but I don't get the `-> i fonts` bit, I searched for fonts but didn't get any matches, at least in the main TOC, what am I missing?
<nckx>telior: Type ‘info guix fonts’.
<nckx>At run time, ‘i fonts’ does the same thing: visit the index entry for ‘fonts’.
<nckx>(Hit i RET to see the actual index.)
***ezzzc2 is now known as ezzzc
<apteryx>leoprikler: does g_get_system_data_dirs include g_get_user_data_dir ?
<str1ngs>apteryx: no, it returns XDG_DATA_DIRS, not XDG_DATA_HOME
<str1ngs>apteryx: say you had XDG_DATA_DIRS=/usr/share:$HOME/local/share, it would return an array of strings { "/usr/share", "$HOME/local/share" }
<apteryx>webkitgtk's bubblewrap launcher usually would only honor XDG_DATA_HOME instead of XDG_DATA_DIRS, but with leo's patch it only considers XDG_DATA_DIRS for the fonts.
<apteryx>seems easy to fix, by dropping the two lines they removed before the loop on dataDirs
<apteryx>the pointer business seems a bit odd to me but I guess that's common in the GTK+/glib world.
<apteryx>I'd only keep the added lines; which should have the effect of considering XDG_DATA_DIRS in addition to the XDG_DATA_HOME already considered.
<str1ngs>that is my understanding here as well. To also mention, the pointer business is more C++ related than how you would normally do things in GTK+/Glib.
<apteryx>modern C++ would typically refrain from using raw pointers and pointer arrays, IIUC.
<apteryx>I think it just produces the args that are later fed to bwrap (bubblewrap)
<str1ngs>probably using GUniquePtr<char> etc due to fontconfig. Assuming that's where the fontCache functions come from. or maybe it's webkit functions.
<apteryx>I'll try the patch without the deletion
<apteryx>and then validate that 1) it works with Guix XDG_DATA_DIRS, and 2) the workaround of symlinking $HOME/.fonts to somewhere the fonts exist still works.
<str1ngs>this patch does have const char* dataDir = g_get_user_data_dir(); so XDG_DATA_HOME *is* considered
<apteryx>that line gets removed in Leo's patch
<apteryx>see the little '-' following the '+' left of it
<apteryx>it's confusing because it's a patch of a patch
<apteryx>a patch adding a patch, I should say
<str1ngs>I guess ~/.fonts remains; all that is missing is ~/.local/share/fonts
<str1ngs>if I'm reading this right and assuming XDG_DATA_HOME is set to the default ~/.local/share
<apteryx>emacs has been doing this to me a lot lately: [118437.582312] .emacs-26.3-rea[16831]: segfault at 7 ip 000000000051e044 sp 00007ffc0bf03918 error 4 in .emacs-26.3-real[418000+1da000]
<apteryx>[118437.582328] Code: 8b 40 18 e9 9e fe ff ff bf 50 65 60 00 31 c0 e8 62 80 04 00 bf f0 33 00 00 e8 c8 1e 03 00 0f 1f 84 00 00 00 00 00 48 83 ef 01 <48> 83 7f 08 00 74 05 48 8b 47 18 c3 48 83 ec 08 bf 50 65 60 00 31
<Kimapr[m]>how did elogind end up a dependency (maybe very indirect) of kdiamond (a game)?
*rekado merged wip-haskell
<rekado>with just a few minor changes we could let people mirror the store on ci.guix.gnu.org
<rekado>there are mirrors for other distros that probably could be convinced to also mirror Guix packages.
<rekado>my notes say that “guix publish” needs a patch to fix permissions on some files it generates.
<rekado>once that’s done we just need to chmod all the files in our cache and start the rsync daemon again
<rekado>it’s puzzling to me why people sometimes see slow download speeds from ci.guix.gnu.org; I’m out of ways to debug this
<rekado>there does not seem to be anything left I can do here
<leoprikler>regarding the XDG_DATA_DIRS vs. HOME business: XDG_DATA_HOME should be part of XDG_DATA_DIRS according to the standard
<leoprikler>the only instance where this is not the case would be guix environments that specifically set them otherwise (which I think should be honored in that case)
<leoprikler>Kimapr[m]: kauth requires polkit requires elogind
<leoprikler>You can answer similar questions through guix graph --path
<fnstudio>hi, is there any high-level tool for handling profiles? i'm thinking of the equivalent of what virtualenvwrapper is for virtualenv, if we want to use that metaphor
<fnstudio>i understand that'd largely overlap with (and get in the way of?) `guix package` but it'd save a bit of typing eg when it comes to activating a profile...
<brendyyn>ok i didn't quite understand, was just trying to be helpful
<leoprikler>Imagine a scenario, where you have ~/.guix/profile-{a,b,c} and want to swap them based on whim
<fnstudio>brendyyn: i'm still exploring, yes i've seen them mentioned
<leoprikler>We'd usually do that using `guix environment -m` around here
<fnstudio>leoprikler brendyyn: oh, i see, tell me if i'm wrong: this means that i'd need to (manually) maintain the profile's manifest file?
<fnstudio>which would be a perfect solution for me by the way
<leoprikler>You can do that, you can also have your environment be a list of packages
<leoprikler>guix environment is very flexible in that regard
<fnstudio>leoprikler: would you still read the list with `-m`?
<leoprikler>you can also wrap a real profile through some scheme code that extracts its manifest
<leoprikler>nope, you'd either load it with -l or specify them on the command line, before/after --ad-hoc
<fnstudio>cool, excellent, plenty of options, thanks leoprikler
<fnstudio>a somewhat related question, concerning the manifest file: i see a manifest file in my profile
<fnstudio>the file says it's auto-generated and it's not meant to be fed to `guix package --manifest`
<fnstudio>i was wondering why that's the case, and how that'd differ from a "proper" manifest file
<fnstudio>well, i shouldn't call it "proper" i suppose
<fnstudio>coming from a python background, is the difference the same one would have between a manually maintained `requirements.txt` and the output of `pip freeze`?
<leoprikler>The manifests that guix reads are scheme code that expands at a given time to the manifests in your profile
<leoprikler>since -m expects scheme code, that's what you need, but you can write scheme code to read your profile manifest (the thin wrapper I spoke about earlier)
<leoprikler>It's not quite the same as `pip freeze` vs. `requirements.txt`
<leoprikler>`pip freeze` generates your `requirements.txt` in the same manner that a manually maintained manifest.scm generates $GUIX_PROFILE/manifest
<leoprikler>(Or a series of Guix commands are made into manifest transactions, which produce a manifest from a manifest)
<fnstudio>leoprikler: great thanks, so did i get it right: in a manually maintained manifest.scm i'm going to add the various packages that i need ("epi"-packages if i'm allowed), leaving the system to later figure out the dependencies and "de-compress" this into an automatically generated manifest?
<fnstudio>and in my manually maintained manifest i may or may not want to pin packages to specific versions, i suppose?
<leoprikler>Your manually maintained manifest.scm does generate a different profile based on your current channels (see `guix describe`).
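As a concrete illustration of the manifests being discussed, a minimal hand-written manifest.scm could look like this (the package names are just examples):

  ;; manifest.scm -- the packages wanted in a profile.
  (specifications->manifest
   (list "emacs" "git" "ripgrep"))

  ;; The "thin wrapper" idea mentioned above: reuse an existing profile's
  ;; generated manifest instead of listing packages by hand (the profile
  ;; path is an example):
  ;; (use-modules (guix profiles))
  ;; (profile-manifest (string-append (getenv "HOME") "/.guix/profile-a"))

Such a file can then be used with `guix package -m manifest.scm -p ~/.guix/profile-a` or `guix environment -m manifest.scm`.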
<fnstudio>that was mentioned to me yesterday when i asked about backing up my installation
<leoprikler>Ahh, yes, and you can pin stuff to specific versions using inferiors.
<fnstudio>i'll go and experiment with a few different manifests for as many profiles, and see what happens :D
<leoprikler>btw. Guix has a built-in "requirements.txt" in the form of `guix environment <package>`, giving you a profile with all the dependencies ;)
<fnstudio>leoprikler: trying that now, pretty impressive
<fnstudio>are guix environments disposable? what i mean is, are they marked for garbage collection or otherwise discarded as soon as i exit them
<brendyyn>yep. the files sit there in the store doing nothing until you run guix gc
<brendyyn>fnstudio: no worries. you can even find the old binaries directly in the store and run them
<fnstudio>the reason why this fails `guix environment python-2.7 -- python` is because that version of python is not made available by my default channel, if i understand it correctly?
<fnstudio>wait... no it might be me typing it wrong
<str1ngs>leoprikler: The standard does say XDG_DATA_HOME should be part of XDG_DATA_DIRS. I don't know why I overlooked that.
<iyzsong>heh, i'm using guix system on a 16G slow disk & Intel Celeron N2830 router box..
<brendyyn>I notice with git-fetch, if the version tag for a release has a 'v' at the start, like v3.1.1, using the version "3.1.1" will fail to checkout that version
<str1ngs>brendyyn: I use this to get around that: (version (string-drop commit 1)). let's say commit=v0.4.1-28-gd459ca1
<str1ngs>and git describe outputs v0.4.1-28-gd459ca1
<brendyyn>oh genius. i just completely overlooked that (version commit) bit
<leoprikler>actually you want (string-append "v" version) in your case
<str1ngs>nope, if the tag has v then you can drop it.
<brendyyn>I'm starting to think node dependencies go on for a literal eternity
<str1ngs>node is like an all-you-can-eat spaghetti buffer!
<fnstudio>i can't see Tor Browser in my default channel; while i can imagine a few reasons why that might not be there, is there an obvious workaround/solution/way of addressing this?
<str1ngs>you know you use Emacs too much when buffer is an analogy for buffet.
<str1ngs>fnstudio: umm don't quote me on this but I think there are some issues building tor browser on Guix due to rust.
<str1ngs>I hope to add gnunet, tor and ipfs support to Nomad one day.
<leoprikler>Not sure about that, I think it has something to do with it being a repackaged Firefox.
<leoprikler>We do have Icecat, which removes all of the branding, but I'm not sure how well Tor Browser does on that front
<leoprikler>apart from that, there is an effort to get it packaged for guix
<str1ngs>you know your browser technology is not modular enough when people have to fork your browser implementation to use it.
<fnstudio>str1ngs leoprikler: i see, thanks, yes i imagine it may have to do with FF branding (or maybe some dependencies)
<str1ngs>I do have hope for firefox's servo project though
<brendyyn>people are super enthusiastic about rust and they've apparently grown a healthy community, so i don't imagine such things just dying
<str1ngs>I wonder if any GNU projects will ever use rust.
<Formbi>fnstudio: you can install it thru flatpak
<Formbi>str1ngs: I think it could happen when Rust stops being a bootstrap nightmare
<str1ngs>bootstrapping is not easy. I like how the go language handled this. they just adopted version go1.4 as the version that bootstraps all things.
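Returning to the git-fetch tag discussion above: a common pattern when the upstream tag carries a 'v' prefix is to keep the plain version string and prepend the prefix only in the git reference. A schematic origin sketch, assuming `name' and `version' come from the enclosing package and using a placeholder URL and hash:

  ;; Needs (guix git-download) for git-fetch, git-reference, git-file-name.
  (origin
    (method git-fetch)
    (uri (git-reference
          (url "https://example.org/foo.git")        ; placeholder URL
          (commit (string-append "v" version))))     ; tag "v3.1.1", version "3.1.1"
    (file-name (git-file-name name version))
    (sha256
     ;; placeholder; `guix hash -rx .' on the checkout gives the real hash
     (base32 "0000000000000000000000000000000000000000000000000000")))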
<str1ngs>It helps that the go language has a social contract that that version is pretty much frozen till version 2. maybe rust is like that too though. I mainly use go and C myself. sometimes I have to use C++ :P
<leoprikler>my personal opinion is that, of all efforts to improve C, C++ sucks the least ;)
***catonano_ is now known as catonano
<Formbi>every rust version depends on the previous version
<Formbi>the last mrustc-able version is 1.29 (now we have 1.45)
<brendyyn>is rust an attempt to improve C, or C++ though?
<Formbi>it's more of an attempt to make a modern equivalent of C
<dlowe>It does have the killer feature of being able to export the C ABI, which almost nothing else does
<dlowe>which means it can be imported into ~everything
<dlowe>I'm still amused by it. Was the plan to support multiple ABIs? extern "pascal" maybe?
<str1ngs>dlowe: even guile can export a C API. so can the go language
<leoprikler>no, it's because of extern int main(int argc, char** argv);
<dlowe>str1ngs: ah, so I can gcc -lmyguilelibrary and call into it?
<dlowe>or I can link this guile library into a python module?
<str1ngs>yes, both guile scheme and the go language can build C libraries.
<str1ngs>yes, though in the case of guile it's not so useful using it from python
<dlowe>it's code you don't have to write yourself
<str1ngs>for guile it depends on the project. for the go language, yes, you need to write the C API yourself to some degree. but only what you need to expose externally
<dlowe>also, golang builds C libraries via gccgo but guile scheme you can... pull in all of guile scheme?
<leoprikler>I'm not sure what you are going on about here. Guile Scheme has no native compilation yet.
<str1ngs>dlowe: no that's not true. gccgo is just a go implementation using the gcc frontend
<leoprikler>You can link the Guile library to your application to run Scheme code inside it and do other fun stuff, but there's no compile-time API as with C++, Rust, Vala, etc.
<str1ngs>leoprikler: you can write a C API in guile scheme. they are called guile extensions.
<leoprikler>There are other Schemes that do compile to C/native code though.
<str1ngs>that's not what we are talking about; we are talking about producing C libraries.
<leoprikler>Guile extensions are C libraries, not Scheme libraries.
<leoprikler>Although you can also use Vala or Rust for those.
<str1ngs>that was the whole point of bringing it up :)
<str1ngs>but guile extensions can build scheme libraries
<leoprikler>I'm pretty sure it's a set of scheme modules built around a small extension.
<str1ngs>if you load the extension in guile, it's still scheme modules, is the point
<str1ngs>you can even mix and match extensions and SCM_DEFINE along with pure scheme
<leoprikler>if I load the scheme module, it's scheme modules
<str1ngs>that's my point, you are agreeing with me while correcting me
<str1ngs>actually loading the extension is not just loading a shared library like dynamic-link. you know, now you should have scheme modules.
<leoprikler>You're right, loading an extension is actually calling dynamic-link, resolving the init function and then calling that
<leoprikler>but I still don't get the Scheme side unless the extension itself somehow loads that
<leoprikler>which can lead into a little chicken-and-egg problem, that I don't want to go into atm
<str1ngs>they work similarly, but load-extension primarily provides C scheme modules. it's better IMHO since the extensions are usable via a REPL
<leoprikler>you won't do it for purposes other than debugging, but you can
<leoprikler>the important point I want to highlight here is the "C API from Scheme" thing, which does not exist
<str1ngs>you can actually create your own module namespaces with load-extension; that's why it has an init argument.
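A minimal sketch of the mix-and-match pattern being argued about, seen from the Scheme side only; the library name, init function and `fast-add' binding are hypothetical:

  ;; (my math) -- hypothetical module mixing a C extension with pure Scheme.
  (define-module (my math)
    #:export (fast-add fast-add-twice))

  ;; Dynamically links libmy-math.so and calls its init function, which is
  ;; assumed to register `fast-add' (an SCM_DEFINE'd C procedure) into this
  ;; module.
  (load-extension "libmy-math" "init_my_math")

  ;; Plain Scheme layered on top of the C-provided binding.
  (define (fast-add-twice a b)
    (fast-add (fast-add a b) b))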
<str1ngs>nobody mentioned a C API from scheme; that's your addition to the conversation :)
<leoprikler>let's assume you have (lambda (a b) (+ a b)), this does not neatly compile down to SCM __lambda_asdf(SCM a, SCM b);
<leoprikler>str1ngs: "leoprikler: you can write a C API in guile scheme. they are called guile extensions."
<leoprikler>Guile extensions are C/Rust/insertlanghere libraries, that are made available to Scheme.
<str1ngs>right, guile extensions are just that: C libraries. you can intertwine SCM_DEFINE and export C functions
<leoprikler>SCM_DEFINE et al. really is just syntactic sugar to call Guile library functions, hence the associated snarfing ;)
<str1ngs>in fact, almost all guile scheme functions are available from C. take for example scm_c_public_ref. this is a pure C function.
<leoprikler>The SRFIs, sxml etc. are implemented in Scheme and you don't get that from C code.
<leoprikler>I can't just call string_to_sxml(xml) without first resolving it.
<str1ngs>I think you're overlooking my point that guile can provide a C API
<leoprikler>Although it probably would not be so difficult to write a guile module, that takes a guile module and produces a header file and C code with actually usable definitions
<leoprikler>isn't nyacc the opposite (parsing C from Guile)?
<rekado>I thought the FFI helper supported that
<rekado>yes, I misremembered an example I saw on the mailing list
<str1ngs>that's what libguile does. I'm not sure if having "glue" makes a difference if the result you end up with is a C library, usable from C. Which was the basis for the origins of the discussion.
<leoprikler>You don't end up with just "a C library usable from C", though. You have a C glue library and a VM running Guile bytecode.
<str1ngs>of course, that's implied if you want to use guile
<leoprikler>Note, that at the time of writing, you still have to stitch that glue itself together using scm_public_ref etc.
<pkill9>tl;dw the installer went back to the start when selecting either automatic or manual partitioning
<leoprikler>Distro-hopping eight times in 24 hours is very sane behaviour that lets you judge all distros fairly.
<str1ngs>maybe this distro will be more like windows..... inserts boot cd
<leoprikler>"Do I want to figure out this minor problem before I continue to hop onto another distro?"
<nckx>Thanks for saving me a click, buddies.
<rekado>i think it’s good that this exists.
*nckx is paranoid about feeding The Recommendation Algorithm.
<rekado>some people really don’t want what Guix offers, or don’t want it at the given price
<nckx>Is there a way I can watch it without doing so?
<rekado>people who identify with the author can then avoid wasting their time (and ours)
<rekado>so while it doesn’t look good for us, it’s really no problem.
<rekado>(I haven’t watched the video and probably won’t. I get easily frustrated watching people do something wrong.)
<apteryx>leoprikler: do you know if webkitgtk builds with only 8 GiB of RAM?
<leoprikler>I haven't tried with swap disabled yet, but it should
<apteryx>great, I'll try building it with your patch as soon as I'm done with ungoogled-chromium
<brendyyn>i like that youtuber. he made a vid on doom emacs i found useful
<leoprikler>from personal experience webkitgtk is less of a hassle than chromium ;)
<Formbi>his video about StumpWM was so stupid it's crazy
<OriansJ`>Formbi: how exactly would you improve his StumpWM video?
<Formbi>he didn't know what he was doing, pretty much
<OriansJ`>Formbi: ok but that doesn't answer the question of how exactly it could be improved
<Formbi>there's nothing wrong with that itself, but he was saying that the WM is terrible
<Formbi>I did leave him some suggestions in a comment
<OriansJ`>and if you don't think someone knows what they are doing, it is the perfect time to share and teach; you'll benefit by having to examine the topic from a separate angle as you teach them.
<Formbi>but generally it's a bad approach to not know much about something and talk about how bad it is
<OriansJ`>Formbi: well that is how people who are new to a thing react when they can't seem to understand it.
<OriansJ`>They don't care about the reasons why things are the way they are, they are just more frustrated about not being able to do the things they care about or discover the answers to their questions.
<Formbi>I ask people for help instead of making a video or something
<OriansJ`>Formbi: not everyone responds the same as we do; we need to take it as a chance to see a different way of viewing it and grow from the experience.
<OriansJ`>Thus consider for a second that everything they say is terrible is true. What should be done to address those problems? Then how to engage with them to find other areas where improvements can be made.
<Formbi>when you put a glass of water bottom side up, the water will spill out
<OriansJ`>Because people who complain care about the software; because if they didn't care enough to share their experience, then you would see nothing.
<OriansJ`>Formbi: not if you use a little bit of the water to create a gas seal and the formed vacuum in the glass keeps the rest of the water from leaving.
<Formbi>it will behave differently when you are in outer space too
<OriansJ`>Formbi: depends if your space station/ship is spinning at a sufficient speed
<nly>no dvorak layout in xfce keyboard settings?
<hendursaga>Is it OK to use hyphens inside descriptions, to split a word so as to prevent too long a line?
<fnstudio>Formbi: thanks, understood re the possibility of installing the Tor Browser via flatpak, sorry for the late reply
<fnstudio>Formbi: i'm actually using guix on a foreign distro where the TB comes packaged already, so i can stick with that, i suppose; but i'm trying to migrate as much stuff as possible to guix
<pkill9>you could make a package that wraps the tor browser bundle
<fnstudio>pkill9: good point, i feel i still need to familiarise myself a bit better with all the basics of guix, but i want to get there and build some simple package myself
*pkill9 needs to write a wrapper that runs applications in a guix container
<drakonis>is there any way to find out which services are available to shepherd?
<pkill9>and `guix system search` for available services to add to config
<kkebreau>It has come to my attention that my libplist upgrade broke the gvfs package.
<kkebreau>A patch to fix this will be pushed momentarily.
<kkebreau>drakonis: Doesn't look like it? I just looked in the Guix services directory for "kde" and "plasma", and I found nothing.
<kkebreau>I'm not familiar with the state of the Plasma desktop on the Guix System, but IIRC Hartmut has been working on that area recently again.
<apteryx>hendursaga: I don't think you should manually split words. It'd be the renderer's job if it supported that. These descriptions can reflow according to the size of your terminal emulator, so the hyphen might end up somewhere else than at the end of a line.
<apteryx>hmm, after 17 hours of compilation, the ungoogled-chromium build failed because ld got OOM-killed (8 GiB machine).
<apteryx>seen while building webkitgtk: ../../lib/libWTFGTK.a
<nckx>apteryx: You need swap to build WebKitGTK with 8 GiB of RAM, but then it builds fine here.
<apteryx>nckx: would 1 GiB of zram (zstd compressed) help? that'd be 7 GiB actual RAM + a swap space of compressed 1 GiB.
<nckx>Guix doesn't seem to log how long a build took, but it had to be less than 24h or it would have been killed.
<apteryx>I thought these timers were only active in the absence of output
<nckx>This is my own timer and it is ruthless 🙂
<nckx>apteryx: I use zswap + zstd (and zsmalloc to actually make use of it). I guess zram would be the same.
<nckx>Eh, somewhere from 0 to 24 GiB, no way to tell afterwards.
<nckx>You don't preallocate zswap, but it's limited to the size of your real swap.
<apteryx>I'm out of access to my 32 GiB, 12-core offload build machine, so I need to be a bit more creative than usual.
<apteryx>it'd be cool if we could split a single build (e.g., share the ungoogled-chromium build across X weaklings)
<apteryx>I reckon there must be solutions to do this kind of thing in HPC
<nckx>‘Web browsers are out of control, bring in the HPC.’
<apteryx>seriously. It takes 1 h 30 on this machine to build a complete operating system for an embedded device, yet Chromium somehow takes near a day.
<apteryx>Modern browsers are the black holes of software, accreting mass at a steady rate.
<mroh>The number of W3C specifications grows at an average rate of about one POSIX every 4 to 6 months...
<apteryx>nckx: I'm going all-in on ZRAM (no real swap, too lazy to repartition my LUKS-encrypted Btrfs RAID1).
<apteryx>Going to dedicate 4 GiB of my RAM to it (half the actual size). That should be close, theoretically, to a 20 GiB RAM machine, which ought to be enough, right?
<nckx>It sounds so inefficient but 🤷
<NieDzejkob>doesn't btrfs support swap files since linux 5.0?
<apteryx>nckx: nothing can be more inefficient than compiling for 16 hours only to be killed by the OOM killer ;-)
<mroh>I guess ld needs _at least_ 20GB for chromium, maybe give it some more to not waste even more time...
<nckx>Agreed. And this is a temporary hack for one very specific workload. You're not running with it permanently enabled like I am.
<apteryx>I might just leave it on if it doesn't seem to cause slowdowns in everyday usage
<nckx>Next stop: setting up remote swap over a network block device.
<nckx>A Modern set-up worthy of a Modern browser.
<nckx>Download more RAM, today.
<nckx>apteryx: See, I'd be hesitant to give up 50% of my daily RAM for that, considering how many cached files/dentries that is... :-/ I'd be interested in (subjective) benchmarks if you do.
<apteryx>It'll be interesting for sure to observe how Linux handles going from no swap to mostly swap-based memory.
<nckx>apteryx: Did you have to reboot just to add ZRAM?
<drakonis>is there any way to do installs in place with guix?
<apteryx>nckx: the swap didn't magically appear in top; I rebooted..
<apteryx>nckx: funny thing is, it seems my preconception that allowing 6 GiB of ZRAM would mean the RAM amount showing as only 2 GiB was false.
<apteryx>It's currently using 3 GiB of RAM and not touching the 6 GiB of 'swap'.
<apteryx>which means having ZRAM on won't have any impact on my system performance as long as swapping doesn't kick in.
<drakonis>rather, is there a way to remotely install guix system on a different machine?
<drakonis>i have looked into it but assumed it needed a guix install to already exist
<apteryx>there's currently support for Digital Ocean VPSes via the digital-ocean-environment-type. You could try adding such support for your own VPS provider.
<apteryx>then it's probably trivial to add support for it, although I'm not familiar with what adding a guix deploy machine 'environment' entails.
<drakonis>i can use machine-ssh-configuration instead of digital-ocean-configuration
<apteryx>The example in the manual of a machine using machine-ssh-configuration uses the 'managed-host-environment-type' environment, which is, IIUC, a Guix System instance.
<apteryx>mbakke: would you know what needs to be done to enable hardware acceleration in ungoogled-chromium?
<apteryx>mbakke: I've switched chrome://flags/#ignore-gpu-blacklist to enabled, and now it seems I have hardware acceleration everywhere, according to chrome://gpu/.
<apteryx>It remains to be seen if it actually works.
<drakonis>i wonder why icecat comes with adblock instead of ublock
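For the machine-ssh-configuration route discussed above, a minimal deploy.scm sketch along the lines of the manual's `guix deploy' example; the host name, user and key path are placeholders, and %my-os is assumed to be an <operating-system> defined elsewhere in the file:

  (use-modules (gnu) (gnu machine) (gnu machine ssh))

  (list (machine
         (operating-system %my-os)                      ; assumed defined above
         (environment managed-host-environment-type)
         (configuration (machine-ssh-configuration
                         (host-name "vps.example.org")  ; placeholder
                         (system "x86_64-linux")
                         (user "root")
                         (identity "./id_rsa")))))

Run with `guix deploy deploy.scm`; as noted in the discussion, managed-host-environment-type expects Guix System to already be running on the target machine.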