WHY KVM
KVM is a full virtualization solution for Linux that uses the Linux kernel itself for efficient hardware virtualization.
It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU, although work is underway to get the required changes upstream.
Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
* Ensure that your processor supports virtualization (look for the vmx flag on Intel CPUs or the svm flag on AMD CPUs)
[root@fedora ~]# egrep '(vmx|svm)' --color=always /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm tpr_shadow vnmi flexpriority
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm tpr_shadow vnmi flexpriority
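If the command prints nothing, the CPU has no hardware virtualization support (or it is disabled in the BIOS). For a quick yes/no check you can simply count the matching lines:
egrep -c '(vmx|svm)' /proc/cpuinfo # 0 means no hardware virtualization support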
* Install the required KVM packages
yum install kvm qemu libvirt python-virtinst
qemu: we use QEMU for processor emulation, in full system emulation mode. In this mode QEMU emulates a complete system (for example a PC), including one or several processors and various peripherals. It can be used to launch different operating systems without rebooting the PC or to debug system code.
libvirt: a toolkit for interacting with the virtualization capabilities of the Linux kernel.
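After the install, the core kvm module and the processor-specific module should normally be loaded at boot; a quick sanity check (kvm-intel below is for Intel hardware, use kvm-amd on AMD):
lsmod | grep kvm # should list kvm plus kvm_intel or kvm_amd
modprobe kvm-intel # load the module manually if it is missing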
In order to use the QEMU support in libvirt, the libvirt daemon has to be running.
/etc/init.d/libvirtd start
Clients communicate with the libvirt daemon using URIs.
To connect to the daemon, one of two different URIs is used:
* qemu:///system connects to a system mode daemon.
* qemu:///session connects to a session mode daemon.
To check whether the client and the daemon communicate without any issues, run the following command. Since no VMs are defined yet, an empty list is the expected output.
[root@fedora ~]# virsh -c qemu:///system list
Id Name State
----------------------------------
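To avoid typing -c qemu:///system on every invocation, virsh honors the VIRSH_DEFAULT_CONNECT_URI environment variable (newer libvirt releases also understand LIBVIRT_DEFAULT_URI):
export VIRSH_DEFAULT_CONNECT_URI=qemu:///system # now a plain "virsh list" talks to the system daemon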
Most documents recommend a bridged setup so that the Internet is reachable from the guests. In my setup I instead configured iptables with SNAT so that the VMs can communicate with the external network (see the last section below).
Install Virt Manager:
Virt Manager provides a GUI to manage the VMs.
yum install virt-manager
Creating a CentOS VM.
virt-install --connect qemu:///system -n centos5 -r 512 --vcpus=2 -f /opt/kvm/centos5.qcow2 -s 12 -c CentOS-5.4-i386-netinstall.iso --vnc --noautoconsole --os-type linux --os-variant rhel5.4 --accelerate --hvm
-r 512 = RAM in MB
--vcpus = number of virtual CPUs
-f /opt/kvm/centos5.qcow2 = storage file used to store the VM data
-s 12 = size of that storage file in GB
-c = ISO file to install the VM from
--os-variant = OS variant; you can find the list of variants with man virt-install
If you have additional partitions, you can also put them under LVM and configure an individual logical volume as the disk for each guest, as sketched below.
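A minimal sketch of that LVM setup; the partition /dev/sda5, the volume group name vg_kvm, and the volume name centos5 are illustrative assumptions:
pvcreate /dev/sda5 # turn the spare partition into an LVM physical volume
vgcreate vg_kvm /dev/sda5 # create a volume group on top of it
lvcreate -L 12G -n centos5 vg_kvm # one logical volume per guest
# then pass the block device to virt-install instead of a file:
# virt-install ... -f /dev/vg_kvm/centos5 ...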
You can use virt-manager to proceed with the rest of the installation process, or you can use vncviewer instead.
yum install vnc
vncviewer localhost:0 # if you are running a single VM
If you are running multiple VMs, find the VNC display of the VM you want and then connect to it using vncviewer:
[root@fedora ~]# virsh vncdisplay centos5
:1
[root@fedora ~]# vncviewer localhost:1
TigerVNC Viewer for X version 1.0.0 - built Oct 26 2009 10:57:15
Copyright (C) 2002-2005 RealVNC Ltd.
Copyright (C) 2000-2006 TightVNC Group
Copyright (C) 2004-2009 Peter Astrand for Cendio AB
See http://www.tigervnc.org for information on TigerVNC.
Wed May 5 22:20:18 2010
CConn: connected to host localhost port 5901
CConnection: Server supports RFB protocol version 3.8
CConnection: Using RFB protocol version 3.8
TXImage: Using default colormap and visual, TrueColor, depth 24.
CConn: Using pixel format depth 24 (32bpp) little-endian rgb888
CConn: Using Tight encoding
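By default QEMU's VNC server listens only on 127.0.0.1, so to reach the console from another machine you can tunnel the VNC port over SSH (a sketch; 192.168.1.221 is my KVM host's address from later in this article, and port 5901 corresponds to display :1):
ssh -N -L 5901:localhost:5901 root@192.168.1.221 # forward local port 5901 to the KVM host
vncviewer localhost:1 # then connect through the tunnel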
Configuring the Network for the VM.
In my case virt-manager uses the IP range 192.168.122.0/24, which is the libvirt default. You can change the default by editing the network definition and restarting it with the following commands.
virsh net-edit default
virsh net-destroy default
virsh net-start default
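virsh net-edit opens the network's XML definition; the stock default network looks roughly like this (the exact DHCP range may differ on your install):
<network>
  <name>default</name>
  <bridge name='virbr0' />
  <forward mode='nat' />
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254' />
    </dhcp>
  </ip>
</network>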
I used SNAT-based forwarding. You can also use MASQUERADE.
My eth0 interface IP was 192.168.1.221, hence my iptables command was:
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -j SNAT --to 192.168.1.221
echo 1 >/proc/sys/net/ipv4/ip_forward
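For reference, the MASQUERADE variant of the same rule, plus making IP forwarding persistent across reboots (eth0 is assumed to be the outbound interface):
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o eth0 -j MASQUERADE
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf # persist forwarding across reboots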
Now you should be able to connect to the external network from the VM.
This seems to be the future of virtualization: since the kernel itself manages the hardware virtualization, the efficiency of the whole system improves.