SCIENTIFIC-LINUX-USERS Archives

January 2016

SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV

Subject:
From:               David Sommerseth <[log in to unmask]>
Reply To:
Date:               Tue, 26 Jan 2016 10:12:26 +0100
Content-Type:       text/plain
Parts/Attachments:  text/plain (156 lines)
On 26/01/16 08:13, Yasha Karant wrote:
> On 01/25/2016 04:30 PM, David Sommerseth wrote:
[...snip...]
>> But .... KVM is the core hypervisor.  It is in fact just a kernel
>> module which you can load at any time on systems with CPUs supporting
>> hardware virtualization (Intel VT-x, AMD-V or similar; most modern
>> Intel, AMD and IBM Power 7/8 CPUs support KVM).
>>
>> libvirt is the management backend, which provides a generic API.
>> libvirt can be used against other hypervisors as well, such as Xen,
>> but is probably most often used with KVM.
>>
>> qemu-kvm is the KVM virtual machine process.  One qemu-kvm process is
>> started per VM.  You seldom start these processes manually; they are
>> kicked off by libvirt.
>>
>> virt-manager is a management GUI front-end.  And virsh is a
>> console-based management tool.  Both connect to the libvirt API.
>>
>> Further, you can also download an oVirt Live image and boot that on a
>> bare-metal or virtual machine.  oVirt can then connect to libvirt and
>> provide an even more feature-rich management tool.
>>
>> virt-manager and oVirt can also connect to several systems running
>> libvirt simultaneously, so you can manage multiple hypervisors from a
>> single front-end.  And there are probably even more front-ends, like
>> "Boxes" (I have not really tried it).
>>
>> I dunno much about the vmware stuff, so I will refrain from commenting
>> on it.  But VirtualBox is also two-fold.  My experience with
>> VirtualBox is now quite old (5-6 years ago).  You can start VirtualBox
>> guests without a kernel support module loaded, which would work on
>> most hardware.  But the performance was not good at all.  If you got
>> the init.d script to build the kernel module, you could get quite
>> acceptable performance.  However, I see VirtualBox more as a single
>> unit which gives you both the hypervisor and the management tool in
>> one software package.
>>
>> Even though VirtualBox is more of a "single unit" and the
>> KVM/Qemu/libvirt stack consists of more components ... you normally
>> don't notice that when you start VMs via the management tools.
>>
> Thank you for your detailed exposition.  My primary concern is that I do
> *NOT* want a hypervisor actually controlling the physical hardware; we
> have enough security vulnerabilities with a "hardened" supervisor such
> as EL 7.  

You can run virtual machines without a hypervisor.  But that will not
give you good performance in general.  Running in this mode is often
called 'emulation': all the hardware the guest needs is emulated by
software in user space, without anything running in kernel space at all.
You can do this with libvirt and qemu too, but then you use 'qemu'
and not 'qemu-kvm'.
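
For example, roughly (assuming an x86_64 host with the qemu packages
installed; binary names differ a bit between distributions, and
guest.img is just a placeholder disk image):

    # does the CPU offer hardware virtualization at all?
    grep -Ec '(vmx|svm)' /proc/cpuinfo

    # pure emulation - nothing KVM-related in kernel space, slow
    qemu-system-x86_64 -m 1024 -hda guest.img

    # same guest, but accelerated through the kvm kernel module
    qemu-system-x86_64 -enable-kvm -m 1024 -hda guest.img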

As a related side-track: running with a hypervisor only allows guests
of the same CPU family as the bare-metal host.  With emulation, the CPU
seen on the inside of the guest can be whatever the emulator supports.
With emulation you can run powerpc, mips or even s/390 based
environments - but it is slow compared to bare-metal performance, as
everything you do is emulated.
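
For instance, something like this (the qemu-system-* binaries come from
the various qemu target packages; the image name is just an example):

    # emulate a PowerPC machine on an x86_64 host - no hypervisor
    # involved, only software emulation, so expect it to be slow
    qemu-system-ppc -m 512 -hda powerpc-guest.img

    # similar binaries exist for other targets, e.g.
    # qemu-system-mips, qemu-system-s390x, qemu-system-arm, ...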

Likewise with VirtualBox, it goes into emulated mode when it does not
have the kernel module (vboxdrv.ko, if I recall correctly).  This also
gives much poorer performance.

I do not know enough about vmware, but their early products did run on
hardware before the hardware had any virtualization features at all.
I suspect they also needed some kind of kernel module to provide decent
performance.  Once the bare-metal hardware got virtualization support,
you still need the kernel module - but now the module takes advantage
of the hardware capabilities as well, increasing performance even more.

So to simplify it a bit: Qemu, VirtualBox and vmware (I suspect) need a
kernel module to provide decent performance, and these modules
instrument the kernel with at least hypervisor-like capabilities.
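
You can usually see which of these modules are loaded on the host;
module names are from memory, so double-check against your install:

    # KVM: the generic module plus one for the CPU vendor
    lsmod | grep -E '^kvm'        # kvm, plus kvm_intel or kvm_amd

    # VirtualBox and VMware ship their own out-of-tree modules
    lsmod | grep -E '^(vbox|vmmon|vmnet)'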

> My secondary issue is the actual human clock execution time in
> the VM as contrasted with the same OS/environment running on the
> physical hardware.  I have found that current production releases of
> VirtualBox and VMware (e.g., VMware Player) provide acceptable
> performance, although the USB interface on VMware now does seem better
> than VirtualBox's, which evidently still has issues (one of the
> mysteries).

And this is what the hypervisor does.  It provides a channel from the
hardware on the bare-metal to the guest VM.

And to get an acceptable human clock execution time inside a virtual
guest OS, you will need a hypervisor.  So you have most likely been
running both VMware and VirtualBox with the kernel support modules.
Otherwise you would not get such good performance.

> As neither VMware player nor VirtualBox seem capable of providing a MS
> Win guest with any form of Internet access to an 802.11 connection from
> the host (in both cases, the claim from a MS Win 7 Pro guest is that
> there is no networking hardware, despite being shown by the guest as
> existing), it is possible that the "native" (ships with) vm
> functionality of EL 7 may address this issue.

So you want the guest to have full control over the wireless network
adapter?  That is possible, but only through a hypervisor ... and these
days, unless the adapter supports PCI SR-IOV [1], you need to disable
the interface (unload all drivers, unconfigure it) and give your guest
direct access to the PCI device (so-called PCI passthrough).
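
With libvirt that would look roughly like this (the PCI address
0000:03:00.0 is just an example; look up the real one with lspci):

    # find the PCI address of the wireless adapter
    lspci | grep -i network

    # detach it from the host drivers so it can be handed to a guest
    virsh nodedev-detach pci_0000_03_00_0

    # then add a <hostdev> entry for 0000:03:00.0 to the guest XML
    # (virsh edit <guest>), or use "Add Hardware -> PCI Host Device"
    # in virt-manager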

With PCI SR-IOV support (which must be supported by the hardware
itself), you can split a physical PCI device into multiple "virtual
functions" (VFs).  These appear as additional PCI devices on your
bare-metal host, and you can then grant a VM access to such a VF-based
PCI device.  For network cards, each VF also gets its own MAC address.

[1] <http://blog.scottlowe.org/2009/12/02/what-is-sr-iov/>
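
As a rough illustration, on reasonably recent kernels VFs are typically
created through sysfs (interface name and VF count are just examples,
and some drivers use module parameters instead):

    # ask the NIC driver to create 4 virtual functions
    echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs

    # the VFs then show up as additional PCI devices
    lspci | grep -i 'virtual function'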

But the downside, from your perspective, is that all of this requires a
hypervisor.

> Note that older versions of VirtualBox with older (pre EL 7) releases
> of the host and guest (MS Win XP) did work with a host 802.11 connection.

Was the wireless interface active/in use on the bare metal when you did
this?

> As a technical point, the VMs I currently have are in vmdk format for
> VirtualBox, as well as ova and vmx format for VMware.  Can the native EL
> 7 vm handle any of these formats or must I again transform to another
> format?

I don't know if qemu{,-kvm} can use vmdk files directly, but it should
be capable of converting them.  From the man page of qemu-img:

    QEMU also supports various other image file formats for
    compatibility with older QEMU versions or other hypervisors,
    including VMDK, VDI, VHD (vpc), VHDX, qcow1 and QED. For a full
    list of supported formats see "qemu-img --help".
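
So converting should be something along these lines (file names are
just examples):

    # convert a VirtualBox/VMware vmdk disk image to qcow2 for qemu/KVM
    qemu-img convert -f vmdk -O qcow2 win7.vmdk win7.qcow2

    # verify the result
    qemu-img info win7.qcow2

As far as I know, an .ova file is basically a tar archive wrapping the
.vmdk plus an .ovf description, so you would extract the disk image
from it first and then convert that.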


-- 
kind regards,

David Sommerseth
