IRC channel logs

2021-06-16.log


<pabs3>xentrac: personally, I use one TLS client cert per network (and CertFP for nickserv) to authenticate to IRC, not passwords, and definitely rotate your libera password
<xentrac>I should probably switch to that
<pabs3>you can tell ssh to only present a certain key to a certain server or set of servers
<xentrac>oh? is there a -O for that?
<pabs3>I'll find a link
<pabs3> https://libera.chat/guides/certfp
<xentrac>thanks!
<stikonas>thanks indeed. It's worth switching to certs
<stikonas>after all I use them for home WiFi too... at least on some devices
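
(For reference, a minimal sketch of the CertFP setup described in the guide pabs3 linked above; the file name is arbitrary and the exact NickServ syntax should be checked against the guide itself:)

    # generate a self-signed client certificate valid for roughly three years
    openssl req -x509 -new -newkey rsa:4096 -sha256 -days 1096 -nodes \
        -out libera.pem -keyout libera.pem
    # print the SHA-512 fingerprint to register with NickServ
    openssl x509 -in libera.pem -outform der | sha512sum -b | cut -d' ' -f1
    # then, while connected to the network using the certificate:
    #   /msg NickServ CERT ADD
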
<pabs3>for presenting different ssh keys to different servers, I think this does the trick: Host foo \n IdentityFile ~/.ssh/id/foo.org/pabs \n Host * \n IdentitiesOnly yes
<pabs3>you could also use IdentityAgent for separate agents
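
(Expanded, the snippet pabs3 pasted corresponds to an ~/.ssh/config along these lines; the host name and key paths are just placeholders:)

    # offer only this key when connecting to foo.org
    Host foo.org
        IdentityFile ~/.ssh/id/foo.org/pabs

    # never offer keys that aren't explicitly configured for a host
    Host *
        IdentitiesOnly yes
        # IdentityAgent can point different hosts at separate agents, e.g.:
        # IdentityAgent ~/.ssh/agent-foo.sock
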
<pabs3>my only concern with lots of ssh keys is if you moved to a hardware token for ssh, how many keys will it store
<xentrac>aha! nice
<pabs3>ISTR one downside with IRC CertFP is that there is still a passphrase present, so be sure to rotate that and set it to a very long one and record it in a safe place
<stikonas>pabs3: yeah, that's why I use 1 ssh key, gnuk only supports 1
<oriansj>xentrac: I actually have my servers set to ban any connecting IP address for 24 hours in the event of a single incorrect attempt. So my .ssh/config file is set to ensure only the correct ssh-key is used.
<oriansj>combined with each ssh-key having a unique randomly generated password using -o -a $policy_defined_rounds to extend time required to unlock keys. And even if one gets my ssh keys and somehow guesses their passwords; due to port knocking requirements they still wouldn't get in.
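
(The key-generation flags oriansj mentions correspond to something like the following; the rounds value and path are only examples:)

    # -o writes the newer OpenSSH private-key format; -a sets the KDF rounds
    # used when encrypting the key with its passphrase (more rounds = slower
    # to unlock, slower to brute-force)
    ssh-keygen -t ed25519 -o -a 100 -f ~/.ssh/id/example.org/user
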
<oriansj>Oh and the usernames are randomly generated too.
<oriansj>pabs3: I could understand hardware keys for luks volumes, keepass files, offline crypto-wallets and even a single set of user credential(s) to login to hardened computer(s) but I don't quite understand the use case for ssh-keys in a hardware token.
<oriansj>Could you help me better understand?
<pabs3>so that a local networked software exploit can't result in ssh key exfiltration
<oriansj>pabs3: understandably if one has exposed network services on their computer but it is harder to imagine when the network rules are: https://paste.debian.net/1201379/
<xentrac>your ssh client machine might get rooted by a malicious ssh server, and evidently you're also running http clients on it
<xentrac>maybe browsers
<oriansj>combine with things like jails and chroots for outgoing networking processes.
<pabs3>common ssh and https clients are complex enough that exploits probably exist. Firefox/Chromium exploits are relatively common and there were exploits in terminals recently
<pabs3>Qubes is a good start, but even then there are Xen exploits and Spectre/etc
<oriansj>xentrac: well then the compromise will be of the key used to access the malicious server. So I don't really see additional access being gained by the attacker.
<xentrac>no, I mean if the ssh client has an rce
<pabs3>or the terminal it runs in (more likely)
<oriansj>xentrac: still jail or chroot the process
<xentrac>the january hole in combining characters in terminals I think turned out to just be a DoS?
<xentrac>ah good!
<xentrac>I ought to do that
<xentrac>though I tend to use ssh a lot for things like git and rsync and scp
<oriansj>also cryptographic binary whitelisting; really makes remote code execution a good deal harder.
<oriansj>Since you'll have to sign that binary with a randomly generated system-specific offline private key that matches the public key installed on the system.
<xentrac>well, it doesn't make RCE harder, just potentially less useful
<xentrac>pabs3: was the combining characters thing the one you were thinking of?
<pabs3>no, https://www.openwall.com/lists/oss-security/2021/05/17/1
<pabs3>"rxvt terminal (+bash) remoteish code execution 0day"
<xentrac>ugh, thanks
<xentrac>I missed that
<oriansj>So the attack requirements end up being: convince me to connect to an untrustworthy system with a trusted system, have a zero-day against the tool I use to connect, use it to run code that either has to break the crypto of the system or glue together pieces of functionality already on the system, escape from a chroot/jail and then somehow steal an ssh-key; to which you also need to steal the large randomly generated password and the
<oriansj>port knocking requirements to access the server it goes to and find the randomly generated username as well.
<oriansj>perhaps I am strange and never put sensitive things on servers but only luks volumes that are mounted when the contents are used.
<oriansj>for example I don't consider my github ssh-key sensitive because I know that a compromise of github's infrastructure (or a National Security Letter sent) would be all it would take for someone else to claim it and replace it with something malicious.
<oriansj>That is why every one of my commits is signed by a limited time GPG key, whose trust should be something you subjectively need to consider.
<siraben>ugh I still need to set up signing
<siraben>I've lost my GPG private key that I had in high school (maybe 10th grade), so I'd probably have to get a new key and update public keyservers
<oriansj>But everyone here should have a "gone crazy" rule in regards to your fellow developers. If person X you work with goes crazy, how long would it take to detect and regain control of the situation (for your own personal work)?
<siraben>lol, I only have the passphrase but not the private key
<oriansj>keep local copies of source code you care about and notes for which commits you actually did audits on (or, if not, at least looked at the code for obvious bad things and noted what you looked for)
<oriansj>can you fix the code you depend upon or would you need to do a rewrite to work around that bad situation?
<oriansj>Imagine for a second I died tomorrow, what would need to be done. Who would be picking up what pieces going forward and where would they be hosted? Who here needs me to grant them access on savannah?
<oriansj>Bad things will inevitably happen, that is just part of life.
<oriansj>So how do we minimize impact, while enabling growth as a community?
<oriansj>paranoia doesn't help open cooperation but blind trust is unwise. So each person needs to find their own balance for the people they work with and adjust based on the changing state of the relationships.
<oriansj>But planning and prepping for bus factors for critical pieces in this community is something we should all at least put a minute into consideration.
<oriansj>If someone has their IRC credentials/ID stolen, what general rules would minimize potential damage?
<danderson>Sounds like you're describing "trust but verify". Having faith in people you have a relationship with and also planning ahead for the unexpected aren't mutually incompatible
<oriansj>danderson: indeed
<oriansj>but as we have recently had a string of successes and progress, it is best to use this high time to consider and prepare for the case where something common/likely does go wrong.
<oriansj>The more you sweat in peacetime, the less you bleed in wartime. I feel this deeply applies here. As bootstrapping at its core is dealing with the worst possible cases but still coming back to a good state which you can trust.
<oriansj>does one have verify-signatures = true in their ~/.gitconfig?
<oriansj>does one consider gpgsign = true in their ~/.gitconfig a good idea or a really bad one, and understand why they feel that way?
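
(In stock git the option names are commit.gpgSign and merge.verifySignatures; a sketch of a ~/.gitconfig carrying both, with a placeholder key id:)

    [user]
        signingkey = 0xDEADBEEF
    [commit]
        # sign every commit you create
        gpgsign = true
    [merge]
        # refuse to merge commits that lack a valid signature
        verifySignatures = true
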
***amirouche is now known as amirouche`
***amirouche` is now known as amirouche
<siraben>FSF is mad https://www.fsf.org/news/update-to-the-fsf-and-gnus-plan-to-move-irc-channels-to-libera.chat
<stikonas>siraben: what's wrong with moving to libera?
<stikonas>that's what FSF is doing, isn't it
<siraben>stikonas: nothing, I fully support it!
<siraben>I was just amazed at the situation, they literally were planning a gradual migration and had good relationships with staff (I was there when rasengan was in #fsf agreeing with their decision to move)
<stikonas>siraben: it's not juts FSF
<siraben>and yet... without any warning, their channels were taken over
<stikonas>KDE is not in any better position
<stikonas>they were trying to do gradual migration, so that matrix rooms bridged to freenode can be switched to libera
<siraben>yeah, it's everywhere
<siraben>and they just shut off the last old server
<siraben>so, nothing beside remains
<stikonas>yeah, so I guess now it's unconnected Matrix and Libera rooms
<civodul>siraben: i'd say the FSF has been blissfully naive
<civodul>also publicizing a community process where very little's at stake
<civodul>while the real processes happen behind closed doors
<xentrac>happy John Tukey day
<siraben>"freenode" /list http://ix.io/3q9b
<stikonas>those channels are mostly dead I suppose
<siraben>indeed, or registered in advance
<stikonas>at least KDE is not registering channels there and actively moving to libera
<stikonas>and that's the biggest room after #freenode
<stikonas>possibly some people who are not actively following news?
<siraben>yeah why is KDE so big?
<stikonas>not sure... But that's not a dev channel
<stikonas>so it might be random community people
<siraben>Yeah
<NieDzejkob>your mentioning dr stone was a surprisingly effective DoS >.<
<NieDzejkob>or, has been, rather
<xentrac>haha
<xentrac>how much have you watched?
<NieDzejkob>10 episodes >_>
<oriansj>xentrac: well the manga is also readily discoverable online and 199 chapters to read so...
<oriansj>and that isn't even considering the possibility NieDzejkob might also be attempting to make local copies of the images as well.
<stikonas>oriansj: what do you think about https://github.com/oriansj/mescc-tools-extra/pull/3 ?
<NieDzejkob>wdym by local copies of the images?
<NieDzejkob>anyway, sounds great, but I'll save it for some more serious japanese practice
<stikonas>oriansj: we can try to reduce the number of empty dirs that have to be prepared in posix bootstrap
<NieDzejkob>is git really that far-reaching?
<xentrac>git starts to have trouble with many-megabyte files
<stikonas>probably only the whole repo matters? not individual files
<xentrac>no, git deals with large repos a lot better than it deals with large individual files
<xentrac>well, "large"
<stikonas>hmm, so under the hood something is not scaling in blob handling...
<xentrac>git deals with repos with up to a few gigabytes of history spread across a few hundred thousand commits and a few tens of thousands of files
<xentrac>with great ease
<xentrac>but yeah, there are a lot of things that don't deal well with blobs of hundreds of megabytes
<xentrac>and it deals very poorly indeed with blobs over a couple of gigs
<oriansj>stikonas: it looks fine. merged
<xentrac>(a lot of things in git)
<oriansj>NieDzejkob: well some people like to have copies of the manga that they read be it physical or digital.
<oriansj>xentrac: I find git works great with large files if you add: bigfilethreshold = 100M or setup your .gitattributes correctly
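
(The option oriansj refers to is core.bigFileThreshold; a sketch with example values:)

    # in .git/config or ~/.gitconfig: store files above this size without
    # delta compression
    [core]
        bigFileThreshold = 100m

    # in .gitattributes: skip delta compression and textual diffs for large
    # binary types
    *.iso -delta -diff
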
<xentrac>at least in the git I'm familiar with, that makes it work less inadequately when you have a few versions of those files, at the cost of making it work even worse when you have many versions
<xentrac>how big have you gotten it to scale?
<xentrac>it's possible git has gotten better at this in recent years and I didn't know
<oriansj>xentrac: well assuming you are not editing video but only archiving, 10+TB seems to work fine
<oriansj>but honestly I never liked git-annex after it resulted in me needing to restore data from backups.
<stikonas>fossy: can we merge https://github.com/fosslinux/live-bootstrap/pull/125 ?
<xentrac>ugh. I haven't had that failure mode with git-annex but it also didn't really solve the problems I was hoping it would solve