IRC channel logs



<gnucode>dsmith: thanks!
<dsmith>Compiled code wise, basically supports all archs that llvm supports. But then there are lib and os differences too.
<azert>hello all, I've heard that gnumach uses qemu for testing but that qemu is not ported to the hurd
<azert>just out of curiosity I tried to take a glance at what it would take to compile qemu on the hurd
<azert>it's a quite straightforward patch:
<azert>configure with: configure --enable-debug --disable-tpm --disable-tests
<azert>from git head
<azert>works very well!
<gnucode>no freaking way! azert you ported qemu to the Hurd already? Why don't you post a patch to ?
<youpi>to qemu-devel, rather
<youpi>with bug-hurd in Cc for comments
<youpi>qemu is actually very portable
<youpi>except this kind of small bits
<youpi>of course, no hardware acceleration :)
<Gooberpatrol66>azert: awesome
<nikolar>how hard would it be to have some kind of hardware acceleration for qemu
<youpi>of course that depends what you call "hard"
<youpi>for somebody that already knows all about x86's support for acceleration and the gnumach workings, that'd be much easier
<nikolar>we are talking about stuff like kvm right
<youpi>kvm just being the linux name for it
<youpi>intel calls it vmx, amd calls it svm, etc.
<nikolar>yeah of course
<nikolar>or hyper-v or behyve or whatever it's called on freebsd
<nikolar>bhyve apparently, i was close
<azert>Not easy at all, but maybe port netbsd nvmm to gnumach
<azert>The api looks like it could use Mach extensions
<azert>In a way, vcpus are special tasks, aren’t they?
<youpi>(threads, even)
<gnucode>morning hurd people!
<gnucode>I was thinking about our conversation that we all had yesterday about porting qemu to the Hurd...
<gnucode>The goal of course was to do testing of the Hurd via the Hurd.
<gnucode>Could we use a subhurd to test code changes of the Hurd? Or do we need the qemu route?
<youpi>for translator tests, you can use subhurds, sure
<gnucode>Also the prospect of porting NetBSD's libnvmm sounds cool.
<youpi>for gnumach tests, you can't
<gnucode>hmmm. That makes sense.
<azert>youpi: right, machine is a gnumach task, vcpu is a thread
<azert>a task and some threads that cannot do ipc?
<azert>that cannot do many things.. but more or less the mapping is that
<youpi>well, they will need to do some ipcs to interact with the rest of the system :)
<youpi>but that's for the hypervising part
<youpi>which runs as normal thread
<youpi>the vcpu is just about switching the thread into virtualized mode (vmenter)
<youpi>whenever the vcpu accesses some hardware or such that svm/vmx cannot simulate itself, you get a vmexit, back to the hypervising part
<azert>Ok, I see this in qemu. But there could be a different design. I would be glad if you could comment. The hypervising part could be listening to some port, the vcpu is vmentered with thread_resume, when it vmexits gnumach  notifies the hypervising part on the port
<youpi>I didn't talk about kernel/user parts :)
<youpi>I just talk about how things do work in hardware
<azert>Ok, gotcha
<youpi>vmenter really is an instruction that has to be done by the thread that runs the vcpu
<azert>We don’t have kernel threads in gnumach
<youpi>we do
<azert>I thought we had continuations
<youpi>see the threads under the gnumach task
<youpi>we can use continuations yes, but we still use thread notions
<azert>I’m confused.. let’s say the hypervisor calls thread_resume on a “vcpu” port, can the kernel task dealing with that call directly do vmenter? Would this deplete gnumach threads?
<youpi>the way for the kernel to start emulating is calling vmenter, yes
<youpi>the cpu will then be running the guest
<youpi>until interrupt/hw access etc., which will trigger a vmexit
<azert>Ok, while this happens, does the number of gnumach threads go down until it is equal to zero? Or does gnumach continuously create new threads
<youpi>why new threads?
<youpi>it *will* be a thread
<youpi>just running the guest
<youpi>and whatever happens in the guest is hidden
<azert>ok I think it makes sense
<azert>Do you agree that you need a task for the machine? A shared pmap between all the vcpus of the machine?
<youpi>you would want a task so that it nicely shows up for the user
<youpi>being able to terminate it, etc.
<azert>I'm not sure you want that: the user normally deals with qemu
<youpi>as for the pmap, you'll need a pmap for the userland hypervisor itself, and a pmap for the guest
<youpi>qemu is that process
<youpi>creating a different task doesn't really bring much
<azert>Then it would be one process with two pmaps.
<youpi>only one which would be actually used normally
<youpi>the second map is given to vmenter
<youpi>I don't think it would really be useful to make it a "pmap" strictly
<youpi>you want an intel page table, indeed, that's how vmenter takes it iirc
<azert>Having two tasks instead of one would allow you to use most of the gnumach API dealing with tasks and memory transparently
<youpi>but a lot of the plumbing of gnumach around pmap won't make sense
<youpi>the thing is: you don't need that
<youpi>if you want to read/write the guest memory, you can use a memory object for that
<azert>How do you map the memory object in the guest if you don’t have that task?
<youpi>you don't
<azert>Sorry if it is obvious for you
<youpi>you give a page table to the vmenter instruction
<youpi>which is machine-to-physical, nothing virtual here
<azert>nvmm has all those _map and _unmap functions
<youpi>and then the guest can create its own page table that sits on top of that
<youpi>nvmm ?
<azert>The netbsd hypervisor
<azert>Mmh nothing virtual.
<youpi>their example with mmap + nvmm_hva_map + nvmm_gpa_map looks a bit convoluted
<azert>Right, you don’t map the host into the guest. You map the guest into the host
<youpi>you'd rather create a memory object, and tell the kernel part to expose it to the guest
<youpi>and if you want to access it, you can always map the memory object
<azert>They don’t have memory objects anymore in netbsd afaik
<azert>There might be a rationale behind the netbsd convoluted design
<azert>It mentions two virtual maps on page 5
<azert>So it is virtual memory in netbsd
<gnucode>ok this is weird.
<gnucode>dkpg-buildpackage -b -uc -us
<gnucode>bash: dkpg-buildpackage: command not found
<gnucode>sudo aptitude install dpkg-dev
<gnucode>dpkg-dev is already installed at the requested version (1.22.2)
<gnucode>let me make sure I have build-essential installed
<gnucode>build-essential is already installed at the requested version (12.10)
<gnucode>joshua@pippin:~$ ls -lha /usr/bin/dpkg-buildpackage
<gnucode>-rwxr-xr-x 1 root root 35K Dec 17 21:37 /usr/bin/dpkg-buildpackage
<gnucode>I just ran this command a week or two ago?
<gnucode>maybe a day ago.
<gnucode>that's weird. i had to run /usr/bin/dpkg-buildpackage -us -uc
<gnucode>it's working I guess.
<dsmith>Maybe s/dkpg-buildpackage/dpkg-buildpackage/ ? Or was that just a typo in pasting to the channel?
<gnucode>dsmith: yeeah...I just spelled it wrong. rookie mistake.
<etno>dsmith: hahaha, I had to read it 5 times to spot the difference in your sed stanza