About KVM-based virtualization
Below we consider a client-server setup of KVM virtualization. By this simple phrase we mean that there is a machine, for example running Ubuntu, that acts as the virtualization server because the virtual machines run on it. There is also another machine (a PC) that acts as a client and connects to that server.
We should install the following on the server and on the client:
sudo apt-get install python-virtinst kvm libvirt-bin bridge-utils
Add the current user to the libvirtd group:
sudo adduser $USER libvirtd
We also need the graphical utilities virt-manager and Vinagre.
virt-manager is a graphical utility used to manage the virtual machines on the virtualization server.
Vinagre is a graphical utility used to connect to the virtual machines running on the server. It can be run both on the server and on a remote client. The remmina utility can be used for the same purpose.
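On Ubuntu these utilities can be installed from the standard repositories, for example:
sudo apt-get install virt-manager vinagre remmina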
Creating a virtual machine.
Before creating a virtual machine we need to create a bridge br0 on the server and attach to it the network interface through which the virtual machines will communicate with the outside world.
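A minimal sketch of creating such a bridge with bridge-utils, assuming the physical network interface is eth0 (for a persistent setup the same configuration would normally go into /etc/network/interfaces):
sudo brctl addbr br0
sudo brctl addif br0 eth0
sudo ip link set br0 up
sudo dhclient br0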
Creating the virtual machine itself takes just one command:
sudo virt-install -n vuserv -r 384 -f vuserv.img -s 10 -c ubuntu-14.04.2-server-powerpc.iso --accelerate --os-type=linux --os-variant=generic26 -v --vnc -w bridge:br0
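Roughly, the options used here mean the following:
-n vuserv: the name of the virtual machine
-r 384: the amount of RAM in MB
-f vuserv.img -s 10: the disk image file and its size in GB
-c ubuntu-14.04.2-server-powerpc.iso: the installation media
--vnc: export the graphical console over VNC
-w bridge:br0: attach the network interface to the bridge br0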
If virt-viewer is installed, a window appears showing the Ubuntu installation process. The installation can be completed either directly in that window or by connecting as a client (the connection process is described below).
To check ourselves we can run:
virsh list --all
or:
virsh -c qemu:///system list --all
This command will list the VMs and their current status. We can also open the graphical manager of virtual machines (virt-manager).
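The output looks roughly like this (the name and state depend on what has been created):
$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     vuserv                         running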
Connecting as a client.
Local connection, i.e. working with the client directly on the VM server:
Run the Vinagre program. Protocol: VNC. Host: localhost:5900. Click on "Connect". That's all: we can see the virtual machine on the screen.
Another way is to use a console. Check the console access:
$ virsh ttyconsole vuserv
/dev/pts/3
Connect:
sudo virsh console vuserv
Connected to domain vuserv
Escape character is ^]

Ubuntu 14.04.2 LTS ubserv ttyS0

ubserv login:
or:
$ sudo socat - /dev/pts/3
Ubuntu 14.04.2 LTS ubserv ttyS0

ubserv login:
Remote connection, i.e. connection to the virtual machine server over the network.
On the client we should set up an SSH tunnel:
ssh -f -N -L 59000:localhost:5900 user1@192.168.1.99
user1@192.168.1.99 is the user name on the server and the address of the virtual machine server.
localhost:5900 is the same localhost:5900 as in the local connection.
59000 is an arbitrary local port number.
Then we run the Vinagre program. Protocol: VNC. Host: localhost:59000. Click on "Connect". That's all: we can see the virtual machine on the screen.
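As a side note, virsh itself can also talk to the remote hypervisor over SSH without a separate tunnel, for example:
virsh -c qemu+ssh://user1@192.168.1.99/system list --all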
Below is just a list of useful commands:
Starting a virtual machine: virsh start vuserv
Stopping a VM: virsh destroy vuserv
Removing a VM: virsh undefine vuserv
Obtaining xml description of VM: virsh dumpxml vuserv > vuserv.xml
Editing the XML description of a virtual machine: virsh edit DevVM-01
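A few related commands that may also come in handy (using the same example names):
Graceful shutdown of a VM: virsh shutdown vuserv
Rebooting a VM: virsh reboot vuserv
Starting a VM automatically at host boot: virsh autostart vuserv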
A few more words about KVM virtualization.
KVM virtualization is based on running qemu-system-xxx, i.e. if we run the following command:
qemu-system-ppc -enable-kvm -m 512 -nographic -M ppce500 -kernel /home/user/uImage -initrd /home/user/rootfs.ext2.gz -append "root=/dev/ram rw console=ttyS0,115200" -serial tcp::4444,server,telnet
(qemu) QEMU waiting for connection on: telnet::0.0.0.0:4444,server
A virtual machine will be created; it starts as soon as the telnet connection is established. (We run qemu-system-ppc on the virtual machine server, and on the client: telnet 192.168.1.234 4444. After that we see the usual console; in this case the console of the virtual machine is carried over the telnet connection.)
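On the client side the connection looks roughly like this (192.168.1.234 being the address of the server from the example above):
$ telnet 192.168.1.234 4444
Trying 192.168.1.234...
Connected to 192.168.1.234.
Escape character is '^]'.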
Let us save this long command line to a file, changing the serial console output from telnet to a pty:
$ cat kvm1.args
/usr/bin/qemu-system-ppc -m 256 -nographic -M ppce500 -kernel /boot/uImage -initrd /home/root/my.rootfs.ext2.gz -append "root=/dev/ram rw console=ttyS0,115200" -serial pty -enable-kvm -name kvm1
Now, based on this file, we can generate the XML description of the virtual machine as shown below:
$ virsh domxml-from-native qemu-argv kvm1.args > kvm1.xml
$ cat kvm1.xml
<domain type='kvm'>
  <name>kvm1</name>
  <uuid>f5b7cf86-41eb-eb78-4284-16501ff9f0e1</uuid>
  <memory unit='KiB'>262144</memory>
  <currentMemory unit='KiB'>262144</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='ppc' machine='ppce500'>hvm</type>
    <kernel>/boot/uImage</kernel>
    <initrd>/home/root/rootfs.ext2.gz</initrd>
    <cmdline>root=/dev/ram rw console=ttyS0,115200</cmdline>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-ppc</emulator>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <memballoon model='virtio'/>
  </devices>
</domain>
Further steps are simple:
1. Define the VM based on the XML file:
# virsh define kvm1.xml
Domain kvm1 defined from kvm1.xml

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     kvm1                           shut off
2. Start the virtual machine:
# virsh start kvm1
Domain kvm1 started

# virsh list
 Id    Name                           State
----------------------------------------------------
 3     kvm1                           running
3. Connect to the virtual machine (we see the same debug console as in the telnet example):
# virsh console kvm1
Connected to domain kvm1
Escape character is ^]
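Note that the generated XML contains no network interface; if the VM needs networking, an interface attached to the br0 bridge from earlier can be added, for example, with virsh:
virsh attach-interface kvm1 bridge br0 --model virtio --config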
A few more simple examples for testing QEMU:
1. Starting qemu-system-ppc64, qemu-system-x86_64 and so on from the command line should open a new window where we can see the BIOS starting.
2. The simplest way to boot a system. We have a kernel and a file system in an ext2 image (both built with buildroot). Running the script below opens a new window where we can see the kernel booting, the file system starting and a command line appearing.
#!/bin/sh
LOCATION="buildroot_x86/output/images"
KERNEL="bzImage"
DISK="rootfs_my.ext2"

qemu-system-i386 -kernel $LOCATION/$KERNEL \
    -hda $LOCATION/$DISK \
    -boot c \
    -m 128 \
    -append "root=/dev/sda rw" \
    -localtime \
    -no-reboot \
    -name rtlinux

Script for creating the ext2 image of the file system:

#!/bin/sh
rm -f /tmp/ext2img.img
RDSIZE=30000
BLKSIZE=1024

dd if=/dev/zero of=/tmp/ext2img.img bs=$BLKSIZE count=$RDSIZE
mke2fs -F -m 0 -b $BLKSIZE /tmp/ext2img.img $RDSIZE
sudo mount /tmp/ext2img.img /mnt/ext2img -t ext2 -o loop
sudo cp -R ./rootfs/* /mnt/ext2img
sudo umount /mnt/ext2img
cp /tmp/ext2img.img ./rootfs_my.ext2
Mounting a QEMU virtio image in Ubuntu.
A virtio image is a file system image that is passed to the virtual machine with the parameter:
-drive file=guest_disk_image,cache=none,if=virtio
After the system boots, the image shows up as /dev/vda (/dev/vda1).
All standard storage device manipulations, such as fdisk, mkfs etc., are available for the device /dev/vda.
So we can mount such an image in Ubuntu as follows:
sudo modprobe nbd
sudo qemu-nbd -c /dev/nbd0 ./guest_disk_image
sudo mount /dev/nbd0p1 ./mnt
To unmount:
sudo umount ./mnt
sudo qemu-nbd -d /dev/nbd0
The same mount procedure can also be used for VDI images.
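A side note: on some kernels the partition device /dev/nbd0p1 does not appear unless the nbd module is loaded with partition support; in that case reloading the module like this may help:
sudo modprobe -r nbd
sudo modprobe nbd max_part=8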
How to work with virtio virtual disks.
The parameter passed when starting QEMU looks like this:
-drive file=my_guest_disk,cache=none,if=virtio
After the system boots, the disk shows up as the device vda:
# ls -l /dev/vda
brw-r----- 1 root disk 254, 0 Jan  1 00:02 /dev/vda
How to create an image for a virtio disk.
It can be done in this way:
qemu-img create -f qcow2 virtio_disk.qcow2 0.5G
Or in this way:
dd if=/dev/zero of=virtio_disk bs=4K count=4K
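The resulting image can be inspected with qemu-img; the output looks roughly like this (details vary by version):
$ qemu-img info virtio_disk.qcow2
image: virtio_disk.qcow2
file format: qcow2
virtual size: 512M (536870912 bytes)
disk size: 196K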
How to partition/format the virtio disk image.
After Linux boots, the usual disk operations are available for the device /dev/vda.
Manipulate disk partition table:
fdisk /dev/vda
Creating a file system:
mkfs.ext3 /dev/vda1
Mounting:
mount /dev/vda1 /mnt/virtio
Example of starting Ubuntu 14 for the PPC platform using QEMU.
Testing was performed with qemu 2.4.0.
1. Create a disk:
qemu-img create -f qcow2 ubuntu14server.qcow2 2G
2. Run the installation process:
qemu-2.4.0-rc0/ppc64-softmmu/qemu-system-ppc64 -m 1024 -hda ./ubuntu14server.qcow2 -cdrom /home/user1/Загрузки/ubuntu-14.04.3-server-powerpc.iso -boot d
After starting, a window will appear showing the installation process. In the dialog window we press Tab (or type help) and select live-powerpc64. Then a long installation process will begin.
After the installation we run the virtual image as shown below:
qemu-2.4.0-rc0/ppc64-softmmu/qemu-system-ppc64 -hda ubuntu14server.qcow2