Just like the recession, it's official: I'm totally hooked on OpenVZ, and here are the reasons.
- Performance, performance, performance. IO-intensive apps are really sluggish on VMware. I have OpenVZ running quite nicely on an old AMD K7. I had two VEs running (one with Security Center managing an in-progress Nessus scan, the other an Ubuntu server) and I was playing a game of Netpanzer with my son with no issues. Now that is a benchmark.
- Ease of use and broad Linux distribution support. Debian 4.0r3 slightly edges out Ubuntu 8.04.1, and it looks like CentOS (as the host) is straightforward as well. There is also a rich library of Linux OS templates to choose from (see the sketch just after this list).
- Non-disruptiveness. A lot of other Linux virtualization solutions don't play well with others. VirtualBox doesn't work if kvm is running. Hell, I've yet to find a Linux distribution where Xen works out of the box, but on Ubuntu I can have VMware Server and OpenVZ together with no issues.
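To give a flavor of the ease-of-use point, here is roughly what spinning up a VE from one of those OS templates looks like (a sketch only; the VE ID, IP addresses, and template name are placeholders):

    # grab a precreated OS template into the template cache
    cd /var/lib/vz/template/cache
    wget http://download.openvz.org/template/precreated/debian-4.0-i386-minimal.tar.gz

    # create a VE from the template and give it basic network settings
    vzctl create 101 --ostemplate debian-4.0-i386-minimal
    vzctl set 101 --hostname ve101.example.org --save
    vzctl set 101 --ipadd 192.168.1.101 --save
    vzctl set 101 --nameserver 192.168.1.1 --save

    # start it and drop into a shell inside the VE
    vzctl start 101
    vzctl enter 101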
8 comments:
I'd be careful about referring to OpenVZ contexts as virtual machines, which they are not. It's a single kernel with a whole load of extra fields added to the structs, hence the wonderful performance.
While the OpenVZ guys work hard on isolation, it's worth keeping in mind that an attack against one of these contexts potentially has much worse effects on the parent machine than an attack against a standard VM (although I guess historically VM security hasn't been that amazing either :).
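You can actually watch the single-kernel nature from the host (a rough sketch; VE ID 101 is just an example):

    # the VE reports the very same kernel version as the host
    uname -r
    vzctl exec 101 uname -r

    # and the VE's processes show up in the host's process table
    vzctl exec 101 sleep 600 &
    ps aux | grep 'sleep 600'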
OpenVZ versus Xen/ESX{i}. Hrmn. Ok, I'll take this bait.
All three support scheduling and clustering. Great.
Ok, wait. I just disqualified OpenVZ. Turns out that it doesn't have a hypervisor. How is this useful on a multi-core, multi-processor system? Second question: what else is there besides multi-core, multi-processor systems?
This would be great for my pre-2004 CPU machines. Too bad all of those will be retired in about 3 weeks.
David,
Understood. You'll notice security wasn't one of my reasons :)
- mdf
Andre,
Good points about the hardware, but with my unscientific benchmarks on my 2000-era AMD K7 1.8 and my 2006 T61/T7200, it is no contest.
Haven't tried it on my 2008 Optiplex 755 (E8400), but I'm guessing on low-end (non-ESX{i}-supported) hardware there is a place for OpenVZ.
Plus, OpenVZ is an "apt-get" away, which in my experience has *not* been the case with dom0. Maybe things have improved, but 6 months ago dom0 didn't work with distribution kernels on any of the distros I tried.
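For reference, "an apt-get away" looks roughly like this on Debian/Ubuntu (a sketch; the exact kernel package name varies by release and architecture):

    # OpenVZ-patched kernel plus the userland tools
    apt-get install linux-image-openvz-686 vzctl vzquota

    # reboot into the OpenVZ kernel, then confirm which kernel is running
    reboot
    uname -r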
Also, can I run VMware Server and dom0 concurrently? I'm guessing not.
Matt,
ESX{i} has fairly good whitebox support, but it is hit-or-miss sometimes. In the latest build (123630) of ESX Server 3.5 Update 3, VMware added a lot more SATA controller support. If you had troubles in the past, I suggest trying with this new build, released two weeks ago.
I have been running Xen dom0 fine with CentOS since the beta release of CentOS 5 about 2 years ago. The beta had a few kinks (I got it working by force, but it was some work as I recall) that were worked out in the full release of 5. Once I proved that version 5 worked, I made it production in a few places. The only thing to install is the xen kernel and tools. Then you can immediately start domUs with only the configuration file and a virtual disk (I boot from iSCSI as the preferred method).
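To put that in concrete terms, once the xen kernel and tools are in place, a domU really is just a small config file plus a virtual disk (a sketch; names, sizes, and the iSCSI-backed volume are illustrative, and it assumes a bootable guest image already exists on that volume):

    # dom0 side: hypervisor-enabled kernel plus the Xen tools
    yum install kernel-xen xen

    # a minimal domU definition
    cat > /etc/xen/guest01 <<'EOF'
    name       = "guest01"
    memory     = 512
    vcpus      = 1
    bootloader = "/usr/bin/pygrub"
    disk       = [ "phy:/dev/vg0/guest01,xvda,w" ]
    vif        = [ "bridge=xenbr0" ]
    EOF

    # start it, attach to the console, list running domains
    xm create guest01
    xm console guest01
    xm list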
I'm really not concerned with VMware Server; ESX{i} is what interests me. A true hypervisor is worth its weight in gold. I have also been running VirtualIron for just about 2 years now, but I've only tried the free version. It's ok, but I'd rather pay for VI4 once it comes out.
Xen has some significant advantages over ESX Server (Citrix XenServer and Oracle's hypervisors are based on Xen -- hell, even VirtualIron was originally based on Xen), but if you want true enterprise support then ESX is the best. When VMware releases VI4 (including the new ESX Server that they plan to name "VDOS"), it will be a significant improvement in many areas. In many cases it will be worth the very expensive cost, especially to organizations that can demonstrate ROI with NPV, IRR, or even a long (2+ year) payback period.
The significant benefit that I see of VI4 comes along with the recent pricing structures on HP blade servers. You can get a two-proc, dual-core blade with 128GB of memory for around $12k. In most environments, memory use is much more intensive than CPU (especially for virtualized environments). There are notable exceptions where applications or application environments are more CPU-intensive than what 12 cores can muster (VI4 recommends 3 servers -- using the HP blade system above, 3 machines times 4 cores is 12). Even storage infrastructure is at least half the cost it used to be when consolidating with VI4 using LeftHand Networks (recently acquired by HP).
Additionally, Windows Server 2003/2008 Datacenter Edition, while priced per CPU (very beneficial with the HP blade center prescribed above), allows an unlimited number of virtual machines. If you factor VDI into the equation (using thin clients on the desk or Safebook LVOs as your laptops), then you're looking at near-total consolidation of almost any infrastructure.
I've set up all-Linux VDI environments over the past two years using Ubuntu with LTSP. I normally prefer CentOS or SuSE Enterprise, but the LTSP configuration has been much easier with Ubuntu. Fortunately, both Xen and VI3 play nice with all of these popular OSes.
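In case it helps anyone, the Ubuntu side of that LTSP setup boils down to a handful of commands (a rough sketch for the Hardy-era packages; names may differ on other releases):

    # LTSP server plus SSH; the -standalone variant bundles its own DHCP
    apt-get install ltsp-server-standalone openssh-server

    # build the thin-client chroot and boot environment (this takes a while)
    ltsp-build-client --arch i386

    # refresh the SSH keys in the chroot if the server's keys or IP change
    ltsp-update-sshkeys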
If you want application virtualization in addition to VDI, I suggest looking into Microsoft SoftGrid technology along with Hyper-V. It's the one "win area" for Microsoft in the virtualization world, though I haven't played much with it yet, only read about it.
Really, I don't consider the performance of the VMs as much as I do the ability to cluster and boot with iSCSI. I don't do a lot of CPU-intensive stuff, and I don't think most enterprises do either. It's probably easy to consolidate 80 VMs onto 3 physical blade servers, including VDI with about 100 user-specific environments.
Your use of virtualization is just different. I think you are trying to get the maximum benefit out of your CPU for some reason. What's the application? Fuzzing/fault-injection testing? Network/app vuln scanning?
I have put some time into the above concepts along with virtualization. Looking back at my advice, I wouldn't change anything, although it would be interesting to see something like WebInspect in VI3's DRS with VMotion. It would probably be possible to test ten times as many apps in such an environment, especially if the network has latency, packet loss, or other inconsistencies. Dedicated hardware should be a thing of the past. Optimizing applications under hypervisor-based virtualization is just different from old-school performance tuning. It doesn't really matter how much CPU each VM gets when they can be moved around a cluster.
Andre,
Thanks for the thoughtful response.
I don't fuzz anymore ;)
Basically, I want to come up with the optimal Linux virtualization solution for this setup: two different RHEL/CentOS boxes for Tenable products running in some virtual environment, multiple Nessus servers on bare metal alongside Snort and our passive vulnerability scanner (a pcap sniffer), with one of the boxes acting as an iptables gateway. And then a half-dozen targets, both Linux and Windows, to scan with Nessus.
All on two portable, quiet, ultra small-form-factor Dell desktop-class machines. No clustering, no DR, and I need to minimize the IO penalty of hypervisors. (Yep, I know VMware Server isn't a real platform compared to ESX, but in my experience ESX sucks if the SAN sucks: systems lock up, filesystems get mounted read-only, etc.) And Nessus is resource hungry and doesn't do well under virtualization: think 8-10 students connecting to it at once, launching scans, with 100-150 nessusd processes running on each bare-metal scanner, which often sits at a system load of 8-12.
A harsh environment ;)
So ESXi doesn't meet these requirements, even if it ran on Dell Optiplex USFF 755's with the fastest CPU and the most RAM available.
I do need to look at dom0 on CentOS5 some more.
I run training classes every week on multiple Dell Optiplex USFF 7xx's (not always the most efficient configurations either, btw) using ESX 3.5 with VirtualCenter 2.5 Update 3. Depending on the number of students, I implement 2-5 OpenFiler instances (usually 1 for every 3 students) for the iSCSI. Clearly, this has to be GbE, and it helps to have at least two interfaces (separate the VMkernel and the VM Network). ESX's install also supports Kickstart, and it takes literally seconds. A CentOS install with Xen would take much longer for your purposes. I don't think ESXi would be worthwhile for what you are doing, either -- but VI3/VI4 would be great for what you want to do. It's great for training, period.
I had this great idea a few weeks ago about running rpcapd (potentially wrapped in stunnel) on every VM; you can compile it on various Unixes that have libpcap support, as well as on Windows machines with the winpcap libs. Earlier this week (coincidentally, before this thread started up), I was thinking about how useful it would be to feed the collected capture files into PVS.
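Per VM, the plumbing would look something like this (a sketch only; the ports, cert path, and addresses are placeholders):

    # remote capture daemon: daemon mode, null auth, bound to loopback only,
    # so the only way in is through the stunnel listener below
    rpcapd -d -n -b 127.0.0.1 -p 2002

    # stunnel wraps the rpcap port in SSL for the collector to connect to
    cat > /etc/stunnel/rpcap.conf <<'EOF'
    cert = /etc/stunnel/rpcap.pem
    [rpcap]
    accept  = 2003
    connect = 127.0.0.1:2002
    EOF
    stunnel /etc/stunnel/rpcap.conf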
Chris Hoff's BlackHat talk got me thinking about potential solutions to the "no network" virtualization problem. For example, there could be vulnerable services in the cloud (i.e. in the consolidated virtualized environment) exposed between VMs but not between hosts. Hence my idea of using rpcapd, stunnel, and Tenable PVS together.
This will change drastically with VI4, assuming the vNetwork and/or VMsafe APIs work out properly. Of course, Xen has had XenAccess for hypervisor introspection (i.e. vIDS) almost from the start -- probably 5 years or so. Although I'm still not sure what virt vendors/devs are going to do about network scanning. The network is changing, and so must this technology.
Running nessusd against every host in a vSwitch with no VMnic uplink, before release into the production vSwitch, would be mighty clever as well.
Well, I've said enough for now. Ping me if you want to talk more.
I'm not sure how old this thread is, but Andre Gironda had a serious bit of misinformation:
> Ok, wait. I just disqualified OpenVZ. Turns out that it doesn't have a
> hypervisor. How is this useful on a multi-core, multi-processor system?
Each VM on an OpenVZ machine can use as many CPUs as the host machine has. Scheduler-wise, it is just as if all of the processes are running on the same kernel - since they are.
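And if you do want to restrict a container, the CPU scheduler parameters are set per VE (a sketch; the VE ID and values are just examples):

    # limit VE 101 to 2 CPUs, cap it at 150% of a single CPU,
    # and give it a relative scheduling weight of 1000 cpuunits
    vzctl set 101 --cpus 2 --save
    vzctl set 101 --cpulimit 150 --save
    vzctl set 101 --cpuunits 1000 --save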
Mark