Showing posts with label virtualization.

Saturday, June 06, 2009

Best Linux Virtualization for Netbooks?

So I use my Lenovo Ideapad S10 as my main Linux box nearly 40% of the time. With 1.5GB of RAM and a 120GB drive, it's a decent machine. My current setup is two Linux partitions, one for Ubuntu 9.04 and the other for Debian 5.0. Ubuntu is my production distro and Debian is for bleeding-edge stuff. My main requirement is to run Linux VMs (of distros other than what I run on the host), because if I need to run Windows or Solaris or whatever I can connect to a remote system. For Linux guests I want "server virtualization," meaning I don't have to keep a console up. Realistically there is no single solution that will meet my requirements, but here are my thoughts on the alternatives for running on a Linux Atom-based netbook.

1) OpenVZ - this would be my first choice. Unfortunately, OpenVZ kernels are only available for Ubuntu 8.04 LTS and Debian, and Ubuntu 8.04 LTS is too old to work well as a desktop on netbooks. I have yet to get the Broadcom drivers working on Debian, and the latest stable OpenVZ kernel patches are against 2.6.18. I guess the real issue is that if I could get the Broadcom drivers working on the stock kernel, that would be the way to go.

2) VMware Player - I don't want to put VMware Server 2.x on my laptop, so this seems like the logical choice. I already have it around for BSD or Windows guests.

3) lguest - this is something new that I've just discovered. Can I run a CentOS VM under it? Not sure.

I don't care for VirtualBox, and QEMU is too damn slow. Is there anything else I'm missing?

Thursday, February 19, 2009

Installing OpenSolaris on Lenny dom0 (sort of)

Here is my domain config file (open1.py):

mfranz-61lenny:/alt/xen/domains/opensol# cat open1.py
name = "solaris"
memory = "1024"
disk = [ 'file:/alt/isos/osol-0811.iso,6:cdrom,r', 'file:/alt/xen/domains/opensol/disk.img,0,w' ]
vif = [ '' ]
bootloader = '/usr/lib/xen-3.2-1/bin/pygrub'
kernel = '/platform/i86xpv/kernel/unix'
ramdisk = '/boot/x86.microroot'
extra = '/platform/i86xpv/kernel/unix - nowin -B install_media=cdrom'

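One prerequisite the config assumes: the file-backed disk on the second 'disk' line has to exist before xm create will boot the domain. A sketch of creating it as a sparse image in that directory (the 6GB size is my assumption, not something stated in the post):

```shell
# Create a sparse 6GB file to back the domU's root disk; seeking past
# the end instead of writing zeros means it takes almost no space up front
dd if=/dev/zero of=disk.img bs=1M count=0 seek=6144
ls -lsh disk.img   # apparent size 6.0G, tiny actual allocation
```

The sparse trick matters on a small drive: blocks only get allocated as the installer writes to them.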

And here is proof that I did it:


mfranz-61lenny:/alt/xen/domains/opensol# xm create -c open1.py
Using config file "./open1.py".
Started domain solaris
v3.2-1 chgset 'unavailable'
SunOS Release 5.11 Version snv_101b 32-bit
Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: opensolaris
Remounting root read/write
Probing for device nodes ...
Preparing live image for use
Done mounting Live image
USB keyboard
1. Albanian 22. Latvian
2. Belarusian 23. Macedonian
3. Belgian 24. Malta_UK
4. Bulgarian 25. Malta_US
5. Croatian 26. Norwegian
6. Czech 27. Polish
7. Danish 28. Portuguese
8. Dutch 29. Russian
9. Finnish 30. Serbia-And-Montenegro
10. French 31. Slovenian
11. French-Canadian 32. Slovakian
12. Hungarian 33. Spanish
13. German 34. Swedish
14. Greek 35. Swiss-French
15. Icelandic 36. Swiss-German
16. Italian 37. Traditional-Chinese
17. Japanese-type6 38. TurkishQ
18. Japanese 39. TurkishF
19. Korean 40. UK-English
20. Latin-American 41. US-English
21. Lithuanian
To select the keyboard layout, enter a number [default 41]:


[snip]
User selected: English
Configuring devices.
Mounting cdroms
Reading ZFS config: done.

opensolaris console login: root


Now what do I do?

Tuesday, December 16, 2008

OpenVZ Virtual Ethernet Devices

By default, OpenVZ uses venet devices, which on the network have the same MAC address as the host (VE0/CT0). This actually proved to be a problem when I was trying to [Nessus] scan OpenVZ containers from the host.

(Basically I'm trying to migrate Linux VMs, some of which are targets that we scan in class, away from VMware Server, and the behavior was that students were only able to scan the VEs if they were connected to a Nessus scanner that was not on the same physical system as the other containers. Got it?)

So I had seen an eth0 within the container and wondered what it was and how it is configured. Well, the virtual ethernet device wiki page has the answers, although I was unable to get this working after waking up at 1:30 AM and being unable to go back to sleep. Will try again tomorrow.
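From my reading of that wiki page, the gist is one vzctl call plus normal interface config inside the container. A sketch, with an example container ID (the commands are printed for review rather than executed, since they need root on an actual OpenVZ host):

```shell
CTID=101   # example container ID, not from the original post

# veth gives the container its own MAC address, unlike the shared venet;
# these need root on the OpenVZ host, so they're echoed for review here
CMD="vzctl set $CTID --netif_add eth0 --save"
echo "$CMD"
echo "vzctl restart $CTID"
# after that: configure eth0 inside the container as usual, and bridge
# the host-side veth device so the container is reachable from the LAN
```

The host-side bridging step is the part that's easy to miss at 1:30 AM.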

Tuesday, December 02, 2008

A Nice Xen vs. OpenVZ Comparison

Why OpenVZ and not XEN has a nice summary of some of the differences that are relevant to some of the comments made in response to OpenVZ Fever:

OpenVZ has one strong limit compared to XEN: it is not full virtualization, and therefore you're limited to Linux-only containers. People working with Sun will recognize the Solaris zones concept, which was introduced a few years ago. As with Solaris, every OpenVZ zone shares the same kernel, which at OVH translates to a Linux-2.6.24.7 kernel. This being said, it is important to understand that Linux distributions are independent of the kernel; you can therefore run any Linux distribution you want under a single kernel. While OVH ships Debian Etch with the OpenVZ hypervisor, you can choose any other distribution for your zones; the new version of Fridu mostly operates with Ubuntu, but nothing prevents you from running multiple distributions. OVH ships templates for Debian, CentOS, Gentoo and Ubuntu, but if this is not enough you can either create your own template or download one from the Internet (OpenVZ wiki).

OpenVZ includes a set of scripts to create/manage virtual machines, unlike Xen, which ships naked and for which I had to write more or less equivalent scripts myself (cf: Fridu Xen Quick Start). Furthermore, OVH ships OpenVZ with a web console from Proxmox; not that I'm a big fan of having a GUI, but as you can see in the video, it is great for making sexy demos. This console allows you to create new virtual instances literally in a matter of seconds :) It lets you start/stop instances and change RAM size, IP addresses, etc. without forcing you to remember any special commands. While the Proxmox console misses a few features like an SSH applet, a firewall config, or a Java VPN, I must say that I have gotten used to it and create every virtual machine through the web GUI.

OpenVZ is very lightweight: not only does it share the same kernel, but also the same filesystem and networking stack. The direct result is that, on a given server, you can run more OpenVZ zones than you could run XEN virtual machines. From a user's point of view, when a zone is up, whether you run OpenVZ or XEN is fairly transparent; this being said, there are nevertheless some fundamental differences:

Monday, December 01, 2008

OpenVZ Fever


Just like the recession, it's official: I'm totally hooked on OpenVZ, and here are the reasons.
  1. Performance, performance, performance. IO-intensive apps are really sluggish on VMware. I have OpenVZ running quite nicely on an old AMD K7. I had two VEs running (one with Security Center managing a Nessus scan in progress, the other an Ubuntu server) and I was playing a game of Netpanzer with my son with no issues. Now that is a benchmark.
  2. Ease of use and broad Linux distribution support. Debian 4.0r3 slightly edges out Ubuntu 8.04.1, and it looks like CentOS (as the host) is straightforward as well. There is also a rich library of Linux OS templates to choose from.
  3. Non-disruptiveness. A lot of other Linux virtualization solutions don't play well with others: VirtualBox doesn't work if kvm is running, and hell, I've yet to find a Linux distribution where Xen works out of the box. But on Ubuntu I can have VMware Server and OpenVZ together with no issues.
Obviously OpenVZ is only for running Linux VMs, but I'm sold.
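To give a flavor of the ease of use in point 2, spinning up a container from one of those templates is about three commands. A sketch with an example ID and template name (printed rather than executed here, since vzctl needs root on an OpenVZ host):

```shell
CTID=110                      # arbitrary example container ID
TMPL=debian-4.0-i386-minimal  # example precreated template name
# echoed for review; run without the 'echo' as root on the OpenVZ host
echo vzctl create $CTID --ostemplate $TMPL
echo vzctl set $CTID --ipadd 10.0.0.110 --save
echo vzctl start $CTID
```

Compare that to building a VMware Server guest and sitting through a full distro install.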

Sunday, November 16, 2008

VMware Server 1.0.8 on Etch-n-Half

Well, my main server (running Ubuntu 8.04 LTS) was giving me grief, so I switched back to Debian. And I ran across this blog, which helped out a lot.

I always do a minimal net install and select nothing with tasksel, so after going into dselect, doing an update, and installing all the new packages that showed up, I installed the following:


build-essential
linux-headers-`uname -r`
libx11-6
libxtst6
libxt6
libxrender1
libxi6
libdb3
psmisc
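Collapsed into a single command (the package names are exactly the list above, with the headers package resolved against the running kernel):

```shell
# everything VMware Server 1.0.8 wanted on a minimal Etch install;
# echoed for review here -- drop the 'echo' and run as root to install
PKGS="build-essential linux-headers-$(uname -r) libx11-6 libxtst6 \
libxt6 libxrender1 libxi6 libdb3 psmisc"
echo apt-get install $PKGS
```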

Saturday, November 01, 2008

SELinux and a Xen Vuln (CVE-2008-1943) Adventure

Given products like VM Fortress and the sVirt project I ran across today, I've been curious about the impact of application sandboxing/mandatory access control regimes on attacks against (or using) VMs.

Luckily, I happened to run across Adventures with a certain Xen vulnerability (in the PVFB backend). Now I don't claim to understand even 20% of this paper, but I was pleased to see the impact of SELinux on the attack against dom0. Very cool. Plus, it talks about the limitations of the exploits and avoids all the media whoring that tends to characterize so much vuln work these days and turns me off.


Using the above guidelines, the exploit has been built. When SELinux was in permissive mode, it worked properly, handing out a connect-back root shell. However, an unsettling message was logged:

SELinux is preventing /usr/lib/xen/bin/qemu-dm (xend_t) "execmem"

And indeed, the exploit failed when SELinux was in enforcing mode. It turns out that by default the ability to map anonymous memory with rwx protection is denied by SELinux.

Thus, the call to mmap in the return-into-libc from the previous subsection failed.

There are workarounds for "execmem" protection, dutifully explained in, but I did not find any file that can be opened with write permission and executed in the xend_t domain. So, a less efficient return-into-libc payload has been created that does not use mmap. It returns into the PLT entry for execv. The arguments for execv must be rebuilt at a fixed address. Using repetitive returns into "assign %eax from the stack; ret" and "stosl; ret" (these sequences must be present in the qemu-dm binary) it is possible to create a payload of size const+4*length of execv arguments.

Monday, June 02, 2008

Random Thoughts on Hardy Virtualization (Redux) and Other Topics

Although the shoddy dom0 support across most distributions, not just Hardy (at least on my hardware), is depressing (but not as depressing as the last episode of Season 4 of The Wire; talk about bleak, although after reading the summary I may have misinterpreted much of the episode), the one bright spot has been KVM with Virtual Machine Manager. Everything is built in, with no threat of a custom kernel. (VirtualBox worked reasonably well, but even when I installed the xVM binary off the Sun site, Ubuntu kept wanting to install a new *-rt kernel.) I'm trying again on Ubuntu, since my first attempt to get Tracks (an interesting-looking Rails GTD app) running on CentOS failed miserably, much like my attempts to get the CGI version of MoinMoin running under Apache. I disabled SELinux, disabled suEXEC, and it still didn't work -- before deciding just to run the Desktop Edition until I have time to mess with it again. The good thing was I got some custom themes working for the first time (had never tried) and ended up getting sinorca4moin working pretty easily.

Friday, May 23, 2008

Is there a distro that supports dom0 out of the box (on my T-61)?

Not Hardy, Not Edgy, Not FC 8 or 9, Not OpenSolaris 2008.5, Not OpenSUSE 10.3/11. Not CentOS 5.1.

I give up.

Monday, May 12, 2008

Simon vs. Hoff: Who is Baiting? Who is Switching? And why does this smell like SCADA?

Mainly because I need to get a non-political post in the top position again (because I'm certainly no expert on virtualization security, but I am a virtualization end user who wants to know!), but Simon Crosby's reaction to Hoff gives me a feeling of deja vu in terms of the bait-and-switch approach to vulns I've heard from some SCADA vendors (or control systems standards efforts) over the years.

Although slightly more sophisticated than spouting off about how many bits of encryption a protocol uses, saying that a given protocol is not Internet-facing, or claiming that the fix for an implementation flaw in a weak protocol is to upgrade to a protocol that uses SSL, the security cliches (or at worst, half-truths) that undermine his credibility, and that even I can recognize, include:
  • Open source is more secure...
  • He mentions viruses and virus vendors in his first breath.
  • Equating security fixes with security/insecurity (and slamming VMWare!)
  • Bringing up EAL something or other
  • Mentioning TPM in any context
Knowing a thing or two about mania, I'm also curious about these sorts of manic efforts (apart from making the hypervisor so small you can't even see it) to secure it, and whether he is willing to admit that there are some classes of attacks against guests (or, obviously, against the hypervisor) that are unique to (or perhaps only possible in) a virtualized environment and that they care about. Or will the AV vendors solve these, too?

Done. There's no more faux Hillary (or Hitler) on top. I can sleep now.

Saturday, April 26, 2008

Open Source Virtualization in Ubuntu Hardy (First Impressions)




So open source virtualization in Ubuntu hasn't been a smooth ride. The linux-virtual and linux-xen kernels panic my T-61 due to what appear to be SATA and USB issues, respectively, but KVM seems to be working a little better (you need to apt virt-manager, kvm, libvirt-bin, and probably a few others) and is actually surprisingly snappy.

Virtual Machine Manager is now included and works reasonably well (if run as root; there are some permission issues) to provide a VMware-console-like experience. I was able to boot (from the .iso) a Debian Etch install, but I aborted due to hda errors (DSC timeouts); Hardy Server, however, worked amazingly well:

root@ubuntu:~# uname -a
Linux ubuntu 2.6.24-16-server #1 SMP Thu Apr 10 13:58:00 UTC 2008 i686 GNU/Linux
root@ubuntu:~# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 2
model name : QEMU Virtual CPU version 0.9.1
stepping : 3
cpu MHz : 1994.872
cache size : 2048 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 up pni
bogomips : 4007.02
clflush size : 64

Networking "just worked," a pleasant surprise considering what a pain it used to be back in the day with User Mode Linux. You'll also note that you connect to the console via VNC, which was nice.
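If you want to attach to that VNC console without Virtual Machine Manager, libvirt can tell you which display the guest is bound to. A sketch (the domain name is an example, and the commands are printed rather than run, since they need a live guest):

```shell
DOM=hardy   # example libvirt domain name, not from the original post
# virsh reports the VNC display the guest console is bound to,
# e.g. :0, which maps to TCP port 5900; echoed here for review
echo virsh vncdisplay $DOM
echo vncviewer localhost:0
```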

Update
So Hardy seems to be the anomaly. OpenBSD 4.2 failed to install, and Ubuntu 7.10 failed to boot from the CD. Also, Virtual Machine Manager frequently wouldn't start up or shut down VMs.

So all in all a pretty dismal picture.