Wednesday, June 3, 2009

Virtualization in 2009.06 (Part 2, VirtualBox)

To continue the "getting started" tutorials on virtualization in OSOL 2009.06, we will take a look at VirtualBox. VirtualBox can be a good solution for remote desktop installations: fully virtualized machines with a graphical console over the network. As with the previous posts, this one focuses on how to deploy it on a server. If you run VirtualBox on your desktop it is probably easier to use the supplied GUI, which is on par with the VMware Workstation GUI.

VirtualBox only supports full virtualization, so performance will not match a paravirtualized xVM domain or a Solaris zone, but it works well for graphical access to virtual machines over the network with lighter workloads.

Install VirtualBox

VirtualBox is not hosted in the standard OSOL repositories; it lives in the extra repository (along with Flash, the JavaFX SDK and others). There is some minor hassle to get access to this repository: you will need to register an account with Sun, then download and install certificates. You can register and download the certificate on Sun's site, which also has install instructions, but here is the procedure:
$ pfexec mkdir -m 0755 -p /var/pkg/ssl
$ pfexec cp -i ~/Desktop/OpenSolaris_extras.key.pem /var/pkg/ssl
$ pfexec cp -i ~/Desktop/OpenSolaris_extras.certificate.pem /var/pkg/ssl
$ pfexec pkg set-authority \
-k /var/pkg/ssl/OpenSolaris_extras.key.pem \
-c /var/pkg/ssl/OpenSolaris_extras.certificate.pem \
-O https://pkg.sun.com/opensolaris/extra/ extra
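Before registering the publisher, it can save a failed pkg run to check that both PEM files actually landed in /var/pkg/ssl. A small sketch (the file names match the copies above; on a machine without the certificates it simply reports them missing):

```shell
# Verify that the extras key and certificate are in place before
# running pkg set-authority.
check_pem() {
    if [ -r "$1" ]; then
        echo "ok: $1"
        return 0
    fi
    echo "missing: $1" >&2
    return 1
}

SSL_DIR=/var/pkg/ssl
for f in "$SSL_DIR/OpenSolaris_extras.key.pem" \
         "$SSL_DIR/OpenSolaris_extras.certificate.pem"; do
    check_pem "$f" || echo "copy $f into place before running pkg set-authority"
done
```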
Now install VirtualBox:
$ pfexec pkg install virtualbox virtualbox/kernel
Create virtual machines

Create and install a virtual machine:
$ cd /opt/VirtualBox
$ VBoxManage createvm --name "WinXP" --register
$ VBoxManage modifyvm "WinXP" --memory 512 \
--acpi on --boot1 dvd --nic1 nat
$ VBoxManage createhd --filename "WinXP.vdi" \
--size 5000 --remember
$ VBoxManage modifyvm "WinXP" --hda "WinXP.vdi"
$ VBoxManage modifyvm "WinXP" --dvd \
/path_to_dvd/winxp.iso
$ VBoxHeadless --startvm "WinXP"
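The per-VM boilerplate above lends itself to a small wrapper. Here is a sketch that only prints the commands it would run (a hypothetical dry-run; clear the run variable to actually execute them), taking the VM name, memory in MB, disk size in MB and install ISO as parameters:

```shell
# Dry-run wrapper: print the VBoxManage calls needed to create one headless VM.
# Usage: create_vm NAME MEM_MB DISK_MB ISO
create_vm() {
    name=$1; mem=$2; disk=$3; iso=$4
    run="echo"   # set run="" to execute for real on a VirtualBox host
    $run VBoxManage createvm --name "$name" --register
    $run VBoxManage modifyvm "$name" --memory "$mem" --acpi on --boot1 dvd --nic1 nat
    $run VBoxManage createhd --filename "$name.vdi" --size "$disk" --remember
    $run VBoxManage modifyvm "$name" --hda "$name.vdi"
    $run VBoxManage modifyvm "$name" --dvd "$iso"
    $run VBoxHeadless --startvm "$name"
}

create_vm WinXP 512 5000 /path_to_dvd/winxp.iso
```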

Connect to the server with an RDP client and perform the installation. Free RDP clients are available for most operating systems:
Solaris/OpenSolaris/*BSD: rdesktop (pkg install SUNWrdesktop)
Mac OS X: Remote Desktop Connection
After the installation is done, the guest can be controlled with VBoxManage, which covers all aspects of the guest: snapshots, power, USB, sound. Here are a few basic commands:
Start: VBoxHeadless -s WinXP
Poweroff: VBoxManage controlvm WinXP poweroff
Reset: VBoxManage controlvm WinXP reset
If you want to install the VirtualBox Guest Additions, just attach the ISO to the running guest:

$ VBoxManage controlvm "WinXP" dvdattach \
/opt/VirtualBox/additions/VBoxGuestAdditions.iso


Tuesday, June 2, 2009

Virtualization in 2009.06 (Part 1, xVM)

Install xVM

It seems that I have started a little tutorial trail for working with OSOL 2009.06. A few of my friends are about to install this release, so I thought I might as well turn it into a few blog entries; there are probably other people out there who want some help getting a quick start with OSOL 2009.06.

This will be the first entry about virtualization: first we get xVM running, and later entries will describe some basic setup of Solaris zones and VirtualBox.

Install the xVM packages:
$ pfexec pkg install xvm-gui SUNWvdisk
Edit /rpool/boot/grub/menu.lst, copy your current entry and modify it to something similar to this:
title OpenSolaris 2009.06 xVM
findroot (pool_rpool,1,a)
bootfs rpool/ROOT/opensolaris
kernel$ /boot/$ISADIR/xen.gz
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=text
module$ /platform/i86pc/$ISADIR/boot_archive
Reboot into this GRUB entry and, if everything works, set it as your default boot entry:
$ bootadm list-menu
the location for the active GRUB menu is: /rpool/boot/grub/menu.lst
default 0
timeout 30
0 OpenSolaris 2009.06
1 OpenSolaris 2009.06 xVM

$ pfexec bootadm set-menu default=1
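bootadm numbers entries by their order in menu.lst, so you can double-check which index the xVM entry got before setting it as the default. A sketch that lists title lines with their index, run here against an inline copy of the menu (on a real system, point it at /rpool/boot/grub/menu.lst):

```shell
# Number the GRUB title lines the way bootadm does (0-based, in file order).
list_entries() {
    awk '$1 == "title" { sub(/^title[ \t]+/, ""); print n++ " " $0 }' "$1"
}

# Inline copy of the two entries shown above, for demonstration only.
menu=$(mktemp)
cat > "$menu" <<'EOF'
title OpenSolaris 2009.06
findroot (pool_rpool,1,a)
bootfs rpool/ROOT/opensolaris
title OpenSolaris 2009.06 xVM
findroot (pool_rpool,1,a)
kernel$ /boot/$ISADIR/xen.gz
EOF

list_entries "$menu"
rm -f "$menu"
```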
Enable the xVM services

If you want to be able to connect to VNC over the network to perform the installation, make xen listen on external addresses for VNC connections:
$ pfexec svccfg -s xvm/xend setprop config/vnc-listen = astring: \"0.0.0.0\"
Enable the xVM services and set a password for VNC connections:
$ pfexec svccfg -s xvm/xend setprop config/vncpasswd = astring: \"yourpass\"
$ pfexec svcadm refresh xvm/xend
$ pfexec svcadm enable -r xvm/virtd
$ pfexec svcadm enable -r xvm/domains
( Ignore messages about multiple instances for dependencies )
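Once the services are enabled, you can confirm they all reached the online state. A minimal check, shown here against illustrative `svcs` output (the STIME values are made up; on a real dom0, pipe in `svcs xvm/xend xvm/virtd xvm/domains` instead):

```shell
# Flag any xVM service that is not yet online, given `svcs` output on stdin.
check_online() {
    awk 'NR > 1 && $1 != "online" { print "not online: " $3; bad = 1 } END { exit bad }'
}

# Illustrative sample output for demonstration only.
check_online <<'EOF' || echo "some xVM services are not online yet"
STATE          STIME    FMRI
online         10:01:02 svc:/system/xvm/xend:default
offline        10:01:05 svc:/system/xvm/domains:default
EOF
```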

Installing domains

Now we should be able to create our domU instances. First we create a zvol to be used as the disk for the domU:
$ pfexec zfs create -o compression=on -V 10G zpool01/myzvol
Install a paravirtualized domain e.g. OSOL 2009.06:
$ pfexec virt-install --nographics -p -r 1024 -n osol0906 -f /dev/zvol/dsk/zpool01/myzvol -l /zpool01/dump/osol-0906-x86.iso
Connect to the console and answer the language questions:

$ pfexec xm console osol0906

Back on dom0, get the address, port and password for the VNC console of the OSOL installation. First get the domain id:

$ pfexec virsh domid osol0906

Get the address, port and password using the domain id:
$ pfexec /usr/lib/xen/bin/xenstore-read /local/domain/<domid>/ipaddr/0
$ pfexec /usr/lib/xen/bin/xenstore-read /local/domain/<domid>/guest/vnc/port
$ pfexec /usr/lib/xen/bin/xenstore-read /local/domain/<domid>/guest/vnc/passwd
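The three xenstore paths only differ in their suffix, so a tiny helper can print them for a given domain id (the path layout is taken from the commands above; the domain id is the one returned by virsh domid):

```shell
# Print the xenstore paths that hold the VNC details for a domU.
# Usage: vnc_paths DOMID
vnc_paths() {
    domid=$1
    for suffix in ipaddr/0 guest/vnc/port guest/vnc/passwd; do
        echo "/local/domain/$domid/$suffix"
    done
}

# On a real dom0 you could feed each path to xenstore-read, e.g.:
#   vnc_paths 3 | while read p; do pfexec /usr/lib/xen/bin/xenstore-read "$p"; done
vnc_paths 3
```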

Connect with a VNC client to address and port, authenticate using the password.

Install an OS without support for paravirtualization, e.g. Windows:
$ pfexec virt-install -v --vnc -n windows -r 1024 -f /dev/zvol/dsk/zpool01/myzvol -c /zpool01/dump/windows.iso --os-type=windows
Connect to the xVM VNC console using the password provided earlier with svccfg/vncpasswd.

When the installation is done, domains can be listed with xm list and started with xm start.

Monday, June 1, 2009

OpenSolaris 2009.06 quick guide

OpenSolaris 2009.06 was released today! I have written a very quick guide for customizing and adding some basic services to an OSOL 2009.06 server from the shell.

Package operations

Install the storage-nas cluster (CIFS, iSCSI, NDMP, etc.):
$ pfexec pkg install storage-nas
Add compilers (Sun Studio; it can be replaced with e.g. gcc-dev-4):
$ pfexec pkg install sunstudio
Add the contrib repository for contributed packages:
$ pfexec pkg set-publisher -O http://pkg.opensolaris.org/contrib contrib
Other packages that can be of interest:
SUNWmysql51, ruby-dev, SUNWPython26, SUNWapch22m-dtrace, amp-dev, gcc-dev-4

List available and installed packages matching a search string:
$ pkg list -a SUNWgzip
NAME (PUBLISHER)              VERSION         STATE      UFIX
SUNWgzip                      1.3.5-0.111     installed  ----
$ pkg list -a '*Python26*'
NAME (PUBLISHER)              VERSION         STATE      UFIX
SUNWPython26                  2.6.1-0.111     known      ----
SUNWPython26-extra            0.5.11-0.111    known      ----
Sharing

Create a ZFS filesystem with compression enabled
$ pfexec zfs create -o compression=on rpool/export/share
Share with NFS

Enable NFS service:
$ pfexec svcadm enable -r nfs/server
Enable sharing over NFS for the share filesystem:
$ pfexec zfs set sharenfs=on rpool/export/share
Share with CIFS
$ pfexec svcadm enable smb/server
$ pfexec zfs set sharesmb=on rpool/export/share
$ pfexec zfs set sharesmb=name=mysharename rpool/export/share
To let users access the CIFS share, add the following line to /etc/pam.conf and then reset each user's password with passwd(1) so an SMB-style password hash is generated:
other password required pam_smb_passwd.so.1 nowarn
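Since pam.conf edits are easy to duplicate by accident, it is worth making the change idempotent. A sketch that appends the line only if it is not already present, demonstrated here on a temporary copy (on a real system, point it at /etc/pam.conf and run with pfexec):

```shell
# Append the CIFS password line to a pam.conf file only once.
PAM_LINE='other password required pam_smb_passwd.so.1 nowarn'

add_pam_line() {
    if grep -q "pam_smb_passwd.so.1" "$1"; then
        echo "already present, nothing to do"
    else
        echo "$PAM_LINE" >> "$1"
        echo "added CIFS password line"
    fi
}

PAMCONF=$(mktemp)          # stand-in for /etc/pam.conf in this sketch
add_pam_line "$PAMCONF"    # first run appends the line
add_pam_line "$PAMCONF"    # second run is a no-op
rm -f "$PAMCONF"
```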
Enable auto ZFS-snapshots

Disable snapshots globally for the whole pool:
$ pfexec zfs set com.sun:auto-snapshot=false rpool
Enable snapshots for the share:
$ pfexec zfs set com.sun:auto-snapshot=true rpool/export/share
Enable daily snapshots (can be frequent, hourly, daily, weekly or monthly):
$ pfexec svcadm enable auto-snapshot:daily
List snapshots:
$ zfs list -t snapshot
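With frequent snapshots the list grows quickly, so a per-filesystem count gives a faster overview. A sketch run against illustrative sample output (the snapshot names below are made up; on a live system pipe in `zfs list -t snapshot -o name` instead):

```shell
# Count snapshots per filesystem from `zfs list -t snapshot -o name` output.
# Snapshot names have the form filesystem@snapname, so split on "@".
count_snapshots() {
    awk -F@ 'NR > 1 { c[$1]++ } END { for (fs in c) print fs, c[fs] }'
}

# Illustrative sample; on a real system: zfs list -t snapshot -o name | count_snapshots
count_snapshots <<'EOF'
NAME
rpool/export/share@zfs-auto-snap:daily-2009-06-01-00:00
rpool/export/share@zfs-auto-snap:daily-2009-06-02-00:00
rpool/export/share@zfs-auto-snap:hourly-2009-06-02-13:00
EOF
```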

If you are unfamiliar with Solaris, read the manual pages for the following commands:
prstat, fsstat, pkg, powertop, zfs, zpool, sharemgr, ipfilter, dladm, fmdump

NOTE: There are graphical tools for snapshot setup and package management that can be used from a graphical console, VNC or forwarded X. Launch them with "time-slider-setup" or "packagemanager".