IRC channel logs
2024-03-14.log
<solid_black>for BTI failures, should we use generic SIGILL/ILL_OPN (like Linux), or dedicated SIGILL/ILL_BTCFI (like OpenBSD since very recently)?
<damo22>i dont think ELF should have a field for a cryptographic signature for verification purposes of the binary itself
<damo22>it would be very difficult for users to replace the key with their own
<kilobug>if the user controls which keys are trusted or not (with a GnuPG-like mechanism) I wouldn't mind
<damo22>yeah but imagine the case where the user wants to modify a binary and she has the source but cant compile it because missing the right key
<damo22>would have to use a different one, and that might not be accepted by the system depending how its all configured
<damo22>this would be horrible for people who want to hot-patch binaries that are mistreating them
<kilobug>the right way to do it, in my view, would be for users to have their own keys accepted by the system, and binaries they compile themselves signed by their own keys
<damo22>why is it imperative to know who owns a binary?
<damo22>the user should be in control of the software anyway
<kilobug>it should definitely be optional, but I can understand it adds some security, by making it harder for someone or something to tamper with the binaries
<damo22>then youre going to have official versions of binaries, and a means to blame someone else for your bad software
<damo22>imagine if someone puts the public key into an enclave so you cant replace it
<damo22>so users can verify binaries they already have, but cant change the key
<kilobug>that would definitely be wrong, as I said, such a system can only be acceptable if the user has control over which keys are trusted or not
<damo22>it will be sony playstation all over again
<damo22>bash: ls: command not allowed (not signed)
<solid_black>in other news, ext2fs on aarch64 runs up to failing to open the backing device
<azert>I think that it makes sense to add signatures to ELF
<azert>So that you can verify where they come from, why not? It’s just a question of trust
<azert>Of course the user should be allowed to disassemble the file and change the signature
<azert>The slippery slope is that you make an encrypted binary that runs on a proprietary hypervisor that you don’t control and whose memory you cannot access. But that’s so evil that it’s not even being openly considered
<azert>But from the commercial point of view: let’s say you are a console maker that sells their hardware at a loss and you don’t want it to be used by North Korean supercomputer facilities just because it’s cheaper. Then you might want the evil stuff implemented
<azert>That’s more understandable than the US government doing this to install a backdoor on their citizens and use this power to profit a restricted 1%. That’s the real unavoidable fact
<azert>Of course grub shouldn’t support PE..
<azert>why would it add this bloat
<azert>When elf is the standard of all open operating systems
<azert>This is the typical case where they are adding 900 lines of code for something that can be done in 100
<azert>Someone should ask who is going to maintain this mess
<solid_black>pthread_create () works and successfully starts the new thread running
<solid_black>I have a suspicion that on x86, Mach implicitly makes all readable pages executable
<solid_black>because this is the second time that I run into stacks not being allocated as executable and crashing on aarch64, but apparently this worked on x86
<youpi>yes, that's a given on x86, see the NX / SMEP / SMAP todo item on the contributing wiki page ;)
<solid_black>so memory not being executable is not even a baseline x86 feature?
<solid_black>I'm sure buffer overflow vulnerabilities are nothing new
<youpi>the world in the 80's was very different from today
<youpi>no, at the time there wasn't the Internet
<youpi>the word "vulnerability" didn't even exist
<solid_black>the internet, maybe not, but even ancient Unix had UIDs and access control
<solid_black>surely you'd want to protect against local privilege escalations
<youpi>we're talking about the processor that was running essentially ms-dos
<solid_black>yes, my point being people must have understood the need to enforce limits on software running locally
<youpi>later on, windows with some protection, but mostly to protect processes' bugs from each other, not against attackers
<youpi>executable stack was not a problem before attackers came
<solid_black>btw I should spend some effort some time to make glibc / hurd not require executable stacks
<solid_black>iterators?
<solid_black>it's required because of nested functions
<youpi>we'd probably have to bite the bullet and allocate a structure to store the information needed for the iterator
<youpi>yes, and nested functions are *very* convenient for iterators
<solid_black>with sufficiently new aarch64 hardware (that supports EPAN), we can have execute-only mappings in userland, i.e. not readable or writable
<solid_black>(we could have it without EPAN too, but then it breaks PAN)
<solid_black>paging non-anonymous memory objects works on the Mach side & libpager w/ ext2fs work on the userland side
<solid_black>and it is my fault, guess it's time to implement the few remaining pmap routines
<azert>I found modern C++ lambdas a surprisingly clever syntactic sugar
<ahenv>is it possible to emulate the linux kernel inside GNU Hurd to use Linux's hardware drivers?
<ahenv>Emulate execution of the linux kernel
<ahenv>While the Hurd is running as the kernel
<azert>ahenv that has been tried many times in many ways and it never worked, since Linux internal interfaces evolve way too fast
<azert>the only way it has not been tried yet is through a virtual machine, as wsl2 does
<azert>damo22 worked with rump and netbsd drivers and that seems like it’s working
<azert>Like, emulating the Hurd inside Linux is a better approach if that’s what you want (using Linux drivers)
<ahenv>also, there is an idea to develop a modular PC with a completely different architecture so that it will be possible to prove (mathematically) that some PC (or "its" mathematical model) is not vulnerable to some kind of attack
<ahenv>may be with FPGAs or something else (I do not know how FPGAs work). May be some parts (or the whole PC) will work without storing the executed program in memory. May be use some microprograms like processor microcode, or use something else
<ahenv>I currently feel unable to develop any details. May be someone else can have some success
<ahenv>I wanted to start studying something. But I do not know what to study.
<ahenv>I only have little experience as a c++ programmer (1 year of paid c++ work with little (but not 0) success). Some ideas are to study the smallest computer components and a catalog of design patterns (design patterns for programming with languages like c++).
<azert>Is electronic engineering available to your choices?
<ahenv>I do not know. Can try to start learning something.
<ahenv>Also, may be some other people could have success starting with some ideas like mentioned above.
<azert>Ideas are normally worth a dime a liter
<azert>But you are welcome to give us your ideas
<azert>Maybe some good discussion can sparkle
<ahenv>May be if you add something to some small "ideas" text, you get some working solution.
<azert>Ok, try applying deep learning to circuit design, or an llm to circuit code in formal form
<azert>It’s something feasible, there are plenty of tutorials; probably you will need to data mine a lot to get results
<azert>Vhdl or Verilog source codes
<azert>You can perhaps start by generating an embedding
<azert>That will allow you to find patterns
<ahenv>according to web search I try to guess that llm = large language models
<azert>It’s the cool stuff right now
<azert>Phi2 is a good one to start on commodity hardware
<azert>Also, consider that what you are doing will most likely result in totally different things than what you have in mind, so keep an open mindset
<Pellescours>ahenv: some compromises are often made, there is a trade-off between security and speed. Some designs are really secure but they are really slow.
<ahenv>where can I learn about "really secure" designs? I have only heard about some 6000$ (not sure about the price) IBM computers. And some OS (IAX or AIX or can't remember) was mentioned as considered to be more secure.
<ahenv>I have failed to study how x86 and OSes with programs work, and feel like it's impossible to prove for x86 that some solution will be invulnerable to some attacks, or to prove something else.
<Pellescours>I don’t think it’s possible to have a 100% secure system
<Pellescours>For example meltdown and spectre were possible because cpus try to be fast and efficient, so they run code ahead of time without knowing yet if the code should run. Disabling such a feature is more secure (no risk of running "wrong code") but it’s slow.
<Pellescours>as when you have a branch, you lose time waiting for the result. While if you run the code ahead of time, and your branch prediction is successful, then you haven’t lost cpu cycles waiting for the result.
<ahenv>may be have some computer for fast tasks and some "secure" (and may be slow) one for some tasks.
<ahenv>I sometimes had some success with a qemu emulated OS running javascript on an old laptop (without hardware virtualization).
<ahenv>But success was little and it took a lot of time.
<ahenv>And I failed to open the whatsapp web version under that.
<ahenv>May be it could be "secure" to run everything under a guest OS under qemu software emulation (without VT-x or AMD-V or something "similar"), but I feel like it would be hard to prove the absence of some vulnerabilities.
<ahenv>(I failed to learn how x86 works. May be someone can have success.)
<ahenv>"Best" would be if all or almost all users were able to prove some security properties.
<ahenv>So the user can "trust" hardware without believing developers' or others' words.
<jpoiret>for more secure cpus there are also CHERI architectures iirc
<ahenv>https://www.gnu.org/links/non-ryf.html ----> "Also, if the computer uses an Intel processor, it most probably has the Intel Management Engine (ME) which is a backdoor and is known to have had security flaws in the past."
<ahenv>I don't know what Intel ME can do. I don't trust computers with it.
<ahenv>May be AMD processors do not have Intel ME. But I still do not trust x86. Yes, I "feel more secure" when using virtual machines, but am not sure that there are no vulnerabilities.
<ahenv>Sometimes I run a qemu software emulated guest (guest 2) OS where this qemu runs under a qemu-kvm virtualized guest (guest 1) OS.
<jpoiret>ahenv: nothing protects you from Intel ME
<ahenv>Some motherboards have an option to temporarily turn Intel ME OFF. Hope it does not do something "opposite" (like keeping Intel ME ON and also telling those who control the system to pay special attention to this PC during this period of time).
<azert>Gooberpatrol_66: that is an interesting topic
<azert>Somewhat related to the ELF signing discussion. Imagine running software on a computer that takes encrypted data as input and gives encrypted data as output without ever decrypting it, but that only the key holder can decrypt. I wonder why cloud services don’t all work like that
<azert>could be used in biomedical fields: here is the encrypted data, please run the analysis and give me the encrypted results!
<azert>I’m wondering if one could think of using Hurd translators to implement a policy where all files are stored in encrypted form. And then file indexing and other automatic processing pipelines could use Homomorphic Encryption to work on the encrypted data in the background
<azert>What could those processing pipelines be? There isn’t much like this going on in free operating systems